Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1635556245 - Will randomize all specs
Will run 5770 specs

Running in parallel across 10 nodes

Oct 30 01:10:46.990: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 01:10:46.995: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Oct 30 01:10:47.022: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 30 01:10:47.087: INFO: The status of Pod cmk-init-discover-node1-n4mcc is Succeeded, skipping waiting
Oct 30 01:10:47.087: INFO: The status of Pod cmk-init-discover-node2-2fmmt is Succeeded, skipping waiting
Oct 30 01:10:47.087: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 30 01:10:47.087: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Oct 30 01:10:47.087: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Oct 30 01:10:47.104: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Oct 30 01:10:47.104: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Oct 30 01:10:47.104: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Oct 30 01:10:47.104: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Oct 30 01:10:47.104: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Oct 30 01:10:47.104: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Oct 30 01:10:47.104: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Oct 30 01:10:47.104: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Oct 30 01:10:47.104: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Oct 30 01:10:47.104: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Oct 30 01:10:47.104: INFO: e2e test version: v1.21.5
Oct 30 01:10:47.105: INFO: kube-apiserver version: v1.21.1
Oct 30 01:10:47.105: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 01:10:47.111: INFO: Cluster IP family: ipv4
SSSSSSS
------------------------------
Oct 30 01:10:47.109: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 01:10:47.131: INFO: Cluster IP family: ipv4
SSSS
------------------------------
Oct 30 01:10:47.116: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 01:10:47.138: INFO: Cluster IP family: ipv4
Oct 30 01:10:47.116: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 01:10:47.138: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSS
------------------------------
Oct 30 01:10:47.125: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 01:10:47.147: INFO: Cluster IP family: ipv4
Oct 30 01:10:47.126: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 01:10:47.147: INFO: Cluster IP family: ipv4
SS
------------------------------
Oct 30 01:10:47.125: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 01:10:47.150: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSS
------------------------------
Oct 30 01:10:47.128: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 01:10:47.155: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSS
------------------------------
Oct 30 01:10:47.143: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 01:10:47.167: INFO: Cluster IP family: ipv4
Oct 30 01:10:47.144: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 01:10:47.167: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:10:47.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
W1030 01:10:47.278548      29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 01:10:47.278: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 01:10:47.280: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:10:47.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3760" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":1,"skipped":40,"failed":0}
S
------------------------------
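(The discovery walk above can be reproduced by hand against any cluster. A minimal sketch, assuming kubectl access and, purely for filtering, jq; the jq filters are illustrative and not part of the test:)

    # Confirm the apiextensions.k8s.io group appears in the /apis document
    kubectl get --raw /apis | jq '.groups[] | select(.name == "apiextensions.k8s.io")'
    # Walk the group and group/version documents the test fetches next
    kubectl get --raw /apis/apiextensions.k8s.io
    kubectl get --raw /apis/apiextensions.k8s.io/v1 | jq '.resources[].name'

------------------------------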
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:10:47.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
W1030 01:10:47.243623      27 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 01:10:47.243: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 01:10:47.245: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should support --unix-socket=/path [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Starting the proxy
Oct 30 01:10:47.248: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4089 proxy --unix-socket=/tmp/kubectl-proxy-unix046296668/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:10:47.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4089" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":-1,"completed":1,"skipped":19,"failed":0}
S
------------------------------
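(What the proxy test drives, sketched with curl; the socket path is arbitrary and curl's --unix-socket support is assumed:)

    # Serve the API over a unix socket instead of a TCP port
    kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
    # Retrieve the /api/ document through the socket, as the test does
    curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/

------------------------------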
[BeforeEach] [sig-scheduling] LimitRange
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:10:47.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename limitrange
W1030 01:10:47.238850      31 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 01:10:47.239: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 01:10:47.240: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a LimitRange
STEP: Setting up watch
STEP: Submitting a LimitRange
Oct 30 01:10:47.248: INFO: observed the limitRanges list
STEP: Verifying LimitRange creation was observed
STEP: Fetching the LimitRange to ensure it has proper values
Oct 30 01:10:47.251: INFO: Verifying requests: expected map[cpu:{{100 -3} {<nil>} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {<nil>} BinarySI} memory:{{209715200 0} {<nil>} BinarySI}] with actual map[cpu:{{100 -3} {<nil>} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {<nil>} BinarySI} memory:{{209715200 0} {<nil>} BinarySI}]
Oct 30 01:10:47.252: INFO: Verifying limits: expected map[cpu:{{500 -3} {<nil>} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {<nil>} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}]
STEP: Creating a Pod with no resource requirements
STEP: Ensuring Pod has resource requirements applied from LimitRange
Oct 30 01:10:47.264: INFO: Verifying requests: expected map[cpu:{{100 -3} {<nil>} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {<nil>} BinarySI} memory:{{209715200 0} {<nil>} BinarySI}] with actual map[cpu:{{100 -3} {<nil>} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {<nil>} BinarySI} memory:{{209715200 0} {<nil>} BinarySI}]
Oct 30 01:10:47.264: INFO: Verifying limits: expected map[cpu:{{500 -3} {<nil>} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {<nil>} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}]
STEP: Creating a Pod with partial resource requirements
STEP: Ensuring Pod has merged resource requirements applied from LimitRange
Oct 30 01:10:47.277: INFO: Verifying requests: expected map[cpu:{{300 -3} {<nil>} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {<nil>} 150Gi BinarySI} memory:{{157286400 0} {<nil>} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {<nil>} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {<nil>} 150Gi BinarySI} memory:{{157286400 0} {<nil>} 150Mi BinarySI}]
Oct 30 01:10:47.277: INFO: Verifying limits: expected map[cpu:{{300 -3} {<nil>} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {<nil>} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}]
STEP: Failing to create a Pod with less than min resources
STEP: Failing to create a Pod with more than max resources
STEP: Updating a LimitRange
STEP: Verifying LimitRange updating is effective
STEP: Creating a Pod with less than former min resources
STEP: Failing to create a Pod with more than max resources
STEP: Deleting a LimitRange
STEP: Verifying the LimitRange was deleted
Oct 30 01:10:54.333: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:10:54.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-8230" for this suite.
• [SLOW TEST:7.136 seconds]
[sig-scheduling] LimitRange
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":1,"skipped":14,"failed":0}
SSSSSSSSSSSSSS
------------------------------
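(A minimal LimitRange that would produce the defaults verified above; the values are read off the expected maps, where 214748364800 bytes is 200Gi and 209715200 bytes is 200Mi. The name is illustrative and the min/max bounds the test also exercises are omitted:)

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: LimitRange
    metadata:
      name: limitrange-sketch        # hypothetical name
    spec:
      limits:
      - type: Container
        defaultRequest:              # becomes the pod's requests when none are set
          cpu: 100m
          memory: 200Mi
          ephemeral-storage: 200Gi
        default:                     # becomes the pod's limits when none are set
          cpu: 500m
          memory: 500Mi
          ephemeral-storage: 500Gi
    EOF

------------------------------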
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:10:47.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
W1030 01:10:47.179426      23 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 01:10:47.179: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 01:10:47.181: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-aa480ad5-0d03-45d8-93ca-c4b6a8395b23
STEP: Creating a pod to test consume secrets
Oct 30 01:10:47.205: INFO: Waiting up to 5m0s for pod "pod-secrets-7189ee4b-b7e7-4ae0-b8fa-989f6c6e540a" in namespace "secrets-2907" to be "Succeeded or Failed"
Oct 30 01:10:47.209: INFO: Pod "pod-secrets-7189ee4b-b7e7-4ae0-b8fa-989f6c6e540a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.726589ms
Oct 30 01:10:49.213: INFO: Pod "pod-secrets-7189ee4b-b7e7-4ae0-b8fa-989f6c6e540a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008091629s
Oct 30 01:10:51.217: INFO: Pod "pod-secrets-7189ee4b-b7e7-4ae0-b8fa-989f6c6e540a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011610111s
Oct 30 01:10:53.221: INFO: Pod "pod-secrets-7189ee4b-b7e7-4ae0-b8fa-989f6c6e540a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015631673s
Oct 30 01:10:55.227: INFO: Pod "pod-secrets-7189ee4b-b7e7-4ae0-b8fa-989f6c6e540a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.021339698s
STEP: Saw pod success
Oct 30 01:10:55.227: INFO: Pod "pod-secrets-7189ee4b-b7e7-4ae0-b8fa-989f6c6e540a" satisfied condition "Succeeded or Failed"
Oct 30 01:10:55.229: INFO: Trying to get logs from node node2 pod pod-secrets-7189ee4b-b7e7-4ae0-b8fa-989f6c6e540a container secret-volume-test:
STEP: delete the pod
Oct 30 01:10:55.254: INFO: Waiting for pod pod-secrets-7189ee4b-b7e7-4ae0-b8fa-989f6c6e540a to disappear
Oct 30 01:10:55.256: INFO: Pod pod-secrets-7189ee4b-b7e7-4ae0-b8fa-989f6c6e540a no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:10:55.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2907" for this suite.
• [SLOW TEST:8.106 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
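(A sketch of the pattern this test exercises, one Secret consumed through two volume mounts in the same pod; the secret name and mount paths are illustrative and the Secret is assumed to exist:)

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-two-volumes       # hypothetical name
    spec:
      restartPolicy: Never
      volumes:
      - name: secret-volume-1
        secret: {secretName: my-secret}
      - name: secret-volume-2
        secret: {secretName: my-secret}
      containers:
      - name: secret-volume-test
        image: busybox
        command: ["sh", "-c", "cat /etc/secret-volume-1/* /etc/secret-volume-2/*"]
        volumeMounts:
        - {name: secret-volume-1, mountPath: /etc/secret-volume-1}
        - {name: secret-volume-2, mountPath: /etc/secret-volume-2}
    EOF

------------------------------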
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:10:47.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Oct 30 01:10:47.332: INFO: Waiting up to 5m0s for pod "downward-api-41121cc4-382e-41b4-837a-235620abb07e" in namespace "downward-api-4936" to be "Succeeded or Failed"
Oct 30 01:10:47.335: INFO: Pod "downward-api-41121cc4-382e-41b4-837a-235620abb07e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.454567ms
Oct 30 01:10:49.339: INFO: Pod "downward-api-41121cc4-382e-41b4-837a-235620abb07e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007089868s
Oct 30 01:10:51.343: INFO: Pod "downward-api-41121cc4-382e-41b4-837a-235620abb07e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011182808s
Oct 30 01:10:53.348: INFO: Pod "downward-api-41121cc4-382e-41b4-837a-235620abb07e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015602537s
Oct 30 01:10:55.351: INFO: Pod "downward-api-41121cc4-382e-41b4-837a-235620abb07e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019037634s
Oct 30 01:10:57.354: INFO: Pod "downward-api-41121cc4-382e-41b4-837a-235620abb07e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.022423081s
STEP: Saw pod success
Oct 30 01:10:57.354: INFO: Pod "downward-api-41121cc4-382e-41b4-837a-235620abb07e" satisfied condition "Succeeded or Failed"
Oct 30 01:10:57.357: INFO: Trying to get logs from node node1 pod downward-api-41121cc4-382e-41b4-837a-235620abb07e container dapi-container:
STEP: delete the pod
Oct 30 01:10:57.686: INFO: Waiting for pod downward-api-41121cc4-382e-41b4-837a-235620abb07e to disappear
Oct 30 01:10:57.689: INFO: Pod downward-api-41121cc4-382e-41b4-837a-235620abb07e no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:10:57.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4936" for this suite.
• [SLOW TEST:10.399 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":41,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
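(The downward API wiring behind this test, sketched as a pod that exports its own name, namespace, and IP through fieldRef env vars; the pod and env var names are illustrative:)

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: dapi-sketch              # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        command: ["sh", "-c", "env | grep MY_POD_"]
        env:
        - name: MY_POD_NAME
          valueFrom: {fieldRef: {fieldPath: metadata.name}}
        - name: MY_POD_NAMESPACE
          valueFrom: {fieldRef: {fieldPath: metadata.namespace}}
        - name: MY_POD_IP
          valueFrom: {fieldRef: {fieldPath: status.podIP}}
    EOF

------------------------------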
"state" "running"))}}true{{end}}{{end}}{{end}}' Oct 30 01:10:53.603: INFO: stderr: "" Oct 30 01:10:53.603: INFO: stdout: "" Oct 30 01:10:53.603: INFO: update-demo-nautilus-5jg6f is created but not running Oct 30 01:10:58.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6978 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Oct 30 01:10:58.767: INFO: stderr: "" Oct 30 01:10:58.767: INFO: stdout: "update-demo-nautilus-5jg6f update-demo-nautilus-rzb6h " Oct 30 01:10:58.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6978 get pods update-demo-nautilus-5jg6f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Oct 30 01:10:58.937: INFO: stderr: "" Oct 30 01:10:58.937: INFO: stdout: "true" Oct 30 01:10:58.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6978 get pods update-demo-nautilus-5jg6f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Oct 30 01:10:59.107: INFO: stderr: "" Oct 30 01:10:59.107: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Oct 30 01:10:59.107: INFO: validating pod update-demo-nautilus-5jg6f Oct 30 01:10:59.111: INFO: got data: { "image": "nautilus.jpg" } Oct 30 01:10:59.111: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 30 01:10:59.111: INFO: update-demo-nautilus-5jg6f is verified up and running Oct 30 01:10:59.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6978 get pods update-demo-nautilus-rzb6h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Oct 30 01:10:59.265: INFO: stderr: "" Oct 30 01:10:59.265: INFO: stdout: "true" Oct 30 01:10:59.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6978 get pods update-demo-nautilus-rzb6h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Oct 30 01:10:59.434: INFO: stderr: "" Oct 30 01:10:59.434: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Oct 30 01:10:59.434: INFO: validating pod update-demo-nautilus-rzb6h Oct 30 01:10:59.437: INFO: got data: { "image": "nautilus.jpg" } Oct 30 01:10:59.437: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 30 01:10:59.437: INFO: update-demo-nautilus-rzb6h is verified up and running STEP: using delete to clean up resources Oct 30 01:10:59.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6978 delete --grace-period=0 --force -f -' Oct 30 01:10:59.568: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Oct 30 01:10:59.568: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Oct 30 01:10:59.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6978 get rc,svc -l name=update-demo --no-headers' Oct 30 01:10:59.762: INFO: stderr: "No resources found in kubectl-6978 namespace.\n" Oct 30 01:10:59.762: INFO: stdout: "" Oct 30 01:10:59.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6978 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Oct 30 01:10:59.934: INFO: stderr: "" Oct 30 01:10:59.934: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:10:59.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6978" for this suite. • [SLOW TEST:12.592 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:291 should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":-1,"completed":2,"skipped":20,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:10:54.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should create a PodDisruptionBudget [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pdb STEP: Waiting for the pdb to be processed STEP: updating the pdb STEP: Waiting for the pdb to be processed STEP: patching the pdb STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be deleted [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:11:00.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-5535" for this suite. 
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:10:54.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69
[It] should create a PodDisruptionBudget [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pdb
STEP: Waiting for the pdb to be processed
STEP: updating the pdb
STEP: Waiting for the pdb to be processed
STEP: patching the pdb
STEP: Waiting for the pdb to be processed
STEP: Waiting for the pdb to be deleted
[AfterEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:11:00.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-5535" for this suite.
• [SLOW TEST:6.071 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should create a PodDisruptionBudget [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":2,"skipped":28,"failed":0}
SSSSSSSSS
------------------------------
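(The kind of object created, updated, and patched above, in minimal form; policy/v1 is available on this 1.21 cluster, and the name and selector are illustrative:)

    kubectl apply -f - <<'EOF'
    apiVersion: policy/v1
    kind: PodDisruptionBudget
    metadata:
      name: pdb-sketch               # hypothetical name
    spec:
      minAvailable: 2                # the field the update/patch steps would modify
      selector:
        matchLabels: {app: protected}
    EOF

------------------------------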
"" Oct 30 01:11:03.486: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2856 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 01:11:03.486: INFO: >>> kubeConfig: /root/.kube/config Oct 30 01:11:03.562: INFO: Exec stderr: "" Oct 30 01:11:03.562: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2856 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 01:11:03.562: INFO: >>> kubeConfig: /root/.kube/config Oct 30 01:11:03.644: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Oct 30 01:11:03.644: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2856 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 01:11:03.644: INFO: >>> kubeConfig: /root/.kube/config Oct 30 01:11:03.724: INFO: Exec stderr: "" Oct 30 01:11:03.724: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2856 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 01:11:03.724: INFO: >>> kubeConfig: /root/.kube/config Oct 30 01:11:03.801: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Oct 30 01:11:03.801: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2856 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 01:11:03.801: INFO: >>> kubeConfig: /root/.kube/config Oct 30 01:11:03.883: INFO: Exec stderr: "" Oct 30 01:11:03.883: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2856 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 01:11:03.883: INFO: >>> kubeConfig: /root/.kube/config Oct 30 01:11:03.961: INFO: Exec stderr: "" Oct 30 01:11:03.961: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2856 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 01:11:03.961: INFO: >>> kubeConfig: /root/.kube/config Oct 30 01:11:04.066: INFO: Exec stderr: "" Oct 30 01:11:04.066: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2856 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 01:11:04.066: INFO: >>> kubeConfig: /root/.kube/config Oct 30 01:11:04.140: INFO: Exec stderr: "" [AfterEach] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:11:04.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-2856" for this suite. 
• [SLOW TEST:16.889 seconds] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":35,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:10:47.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath W1030 01:10:47.233636 37 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 30 01:10:47.233: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Oct 30 01:10:47.235: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-projected-fnf5 STEP: Creating a pod to test atomic-volume-subpath Oct 30 01:10:47.258: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-fnf5" in namespace "subpath-3628" to be "Succeeded or Failed" Oct 30 01:10:47.261: INFO: Pod "pod-subpath-test-projected-fnf5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.377443ms Oct 30 01:10:49.266: INFO: Pod "pod-subpath-test-projected-fnf5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00750959s Oct 30 01:10:51.271: INFO: Pod "pod-subpath-test-projected-fnf5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012936391s Oct 30 01:10:53.279: INFO: Pod "pod-subpath-test-projected-fnf5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.020650798s Oct 30 01:10:55.285: INFO: Pod "pod-subpath-test-projected-fnf5": Phase="Running", Reason="", readiness=true. Elapsed: 8.026829153s Oct 30 01:10:57.289: INFO: Pod "pod-subpath-test-projected-fnf5": Phase="Running", Reason="", readiness=true. Elapsed: 10.030610827s Oct 30 01:10:59.292: INFO: Pod "pod-subpath-test-projected-fnf5": Phase="Running", Reason="", readiness=true. Elapsed: 12.034334306s Oct 30 01:11:01.297: INFO: Pod "pod-subpath-test-projected-fnf5": Phase="Running", Reason="", readiness=true. Elapsed: 14.03901986s Oct 30 01:11:03.305: INFO: Pod "pod-subpath-test-projected-fnf5": Phase="Running", Reason="", readiness=true. Elapsed: 16.047206669s Oct 30 01:11:05.312: INFO: Pod "pod-subpath-test-projected-fnf5": Phase="Running", Reason="", readiness=true. Elapsed: 18.053715562s Oct 30 01:11:07.316: INFO: Pod "pod-subpath-test-projected-fnf5": Phase="Running", Reason="", readiness=true. 
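(The busybox-3 case above relies on a documented kubelet rule: /etc/hosts is kubelet-managed unless the pod uses hostNetwork or the container mounts its own file at that path. A sketch of the opt-out mount; the pod name and hostPath source are illustrative:)

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: etc-hosts-sketch         # hypothetical name
    spec:
      volumes:
      - name: hosts-file
        hostPath: {path: /etc/hosts, type: File}
      containers:
      - name: busybox-3
        image: busybox
        command: ["sleep", "3600"]
        volumeMounts:
        - {name: hosts-file, mountPath: /etc/hosts}   # kubelet leaves this mount alone
    EOF

------------------------------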
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:10:47.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
W1030 01:10:47.233636      37 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 01:10:47.233: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 01:10:47.235: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-projected-fnf5
STEP: Creating a pod to test atomic-volume-subpath
Oct 30 01:10:47.258: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-fnf5" in namespace "subpath-3628" to be "Succeeded or Failed"
Oct 30 01:10:47.261: INFO: Pod "pod-subpath-test-projected-fnf5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.377443ms
Oct 30 01:10:49.266: INFO: Pod "pod-subpath-test-projected-fnf5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00750959s
Oct 30 01:10:51.271: INFO: Pod "pod-subpath-test-projected-fnf5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012936391s
Oct 30 01:10:53.279: INFO: Pod "pod-subpath-test-projected-fnf5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.020650798s
Oct 30 01:10:55.285: INFO: Pod "pod-subpath-test-projected-fnf5": Phase="Running", Reason="", readiness=true. Elapsed: 8.026829153s
Oct 30 01:10:57.289: INFO: Pod "pod-subpath-test-projected-fnf5": Phase="Running", Reason="", readiness=true. Elapsed: 10.030610827s
Oct 30 01:10:59.292: INFO: Pod "pod-subpath-test-projected-fnf5": Phase="Running", Reason="", readiness=true. Elapsed: 12.034334306s
Oct 30 01:11:01.297: INFO: Pod "pod-subpath-test-projected-fnf5": Phase="Running", Reason="", readiness=true. Elapsed: 14.03901986s
Oct 30 01:11:03.305: INFO: Pod "pod-subpath-test-projected-fnf5": Phase="Running", Reason="", readiness=true. Elapsed: 16.047206669s
Oct 30 01:11:05.312: INFO: Pod "pod-subpath-test-projected-fnf5": Phase="Running", Reason="", readiness=true. Elapsed: 18.053715562s
Oct 30 01:11:07.316: INFO: Pod "pod-subpath-test-projected-fnf5": Phase="Running", Reason="", readiness=true. Elapsed: 20.058041762s
Oct 30 01:11:09.319: INFO: Pod "pod-subpath-test-projected-fnf5": Phase="Running", Reason="", readiness=true. Elapsed: 22.061291178s
Oct 30 01:11:11.322: INFO: Pod "pod-subpath-test-projected-fnf5": Phase="Running", Reason="", readiness=true. Elapsed: 24.063829669s
Oct 30 01:11:13.326: INFO: Pod "pod-subpath-test-projected-fnf5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.067645227s
STEP: Saw pod success
Oct 30 01:11:13.326: INFO: Pod "pod-subpath-test-projected-fnf5" satisfied condition "Succeeded or Failed"
Oct 30 01:11:13.328: INFO: Trying to get logs from node node1 pod pod-subpath-test-projected-fnf5 container test-container-subpath-projected-fnf5:
STEP: delete the pod
Oct 30 01:11:13.381: INFO: Waiting for pod pod-subpath-test-projected-fnf5 to disappear
Oct 30 01:11:13.383: INFO: Pod pod-subpath-test-projected-fnf5 no longer exists
STEP: Deleting pod pod-subpath-test-projected-fnf5
Oct 30 01:11:13.384: INFO: Deleting pod "pod-subpath-test-projected-fnf5" in namespace "subpath-3628"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:11:13.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3628" for this suite.
• [SLOW TEST:26.189 seconds]
[sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
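(The volumeMount feature under test, sketched with a configMap standing in for the projected volume the test uses; the configMap, key, and pod name are illustrative and assumed to exist:)

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: subpath-sketch           # hypothetical name
    spec:
      restartPolicy: Never
      volumes:
      - name: data
        configMap: {name: my-config}   # assumed to exist, with key my-key
      containers:
      - name: test-container-subpath
        image: busybox
        command: ["cat", "/probe/my-key"]
        volumeMounts:
        - name: data
          mountPath: /probe/my-key
          subPath: my-key              # mounts one file from the volume, not the whole volume
    EOF

------------------------------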
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:10:55.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52
STEP: create the container to handle the HTTPGet hook request.
Oct 30 01:10:55.343: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Oct 30 01:10:57.346: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Oct 30 01:10:59.348: INFO: The status of Pod pod-handle-http-request is Running (Ready = true)
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the pod with lifecycle hook
Oct 30 01:10:59.364: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true)
Oct 30 01:11:01.370: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true)
Oct 30 01:11:03.369: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true)
Oct 30 01:11:05.368: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true)
Oct 30 01:11:07.369: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true)
Oct 30 01:11:09.368: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = true)
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Oct 30 01:11:09.382: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Oct 30 01:11:09.384: INFO: Pod pod-with-poststart-http-hook still exists
Oct 30 01:11:11.386: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Oct 30 01:11:11.390: INFO: Pod pod-with-poststart-http-hook still exists
Oct 30 01:11:13.386: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Oct 30 01:11:13.389: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:11:13.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5253" for this suite.
• [SLOW TEST:18.086 seconds]
[sig-node] Container Lifecycle Hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":20,"failed":0}
S
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":29,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
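(The hook shape being exercised above: the kubelet issues the GET right after the container starts, and the container is not considered started until the hook succeeds. The host and port are placeholders for the handler pod's address:)

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: poststart-http-sketch    # hypothetical name
    spec:
      containers:
      - name: main
        image: busybox
        command: ["sleep", "3600"]
        lifecycle:
          postStart:
            httpGet:                 # GET fired by the kubelet after container start
              host: 10.0.0.10        # placeholder for the handler pod's IP
              port: 8080
              path: /echo?msg=poststart
    EOF

------------------------------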
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:10:47.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
W1030 01:10:47.180883      35 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 01:10:47.181: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 01:10:47.182: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-secret-g5gc
STEP: Creating a pod to test atomic-volume-subpath
Oct 30 01:10:47.213: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-g5gc" in namespace "subpath-9316" to be "Succeeded or Failed"
Oct 30 01:10:47.218: INFO: Pod "pod-subpath-test-secret-g5gc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.393231ms
Oct 30 01:10:49.222: INFO: Pod "pod-subpath-test-secret-g5gc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008874607s
Oct 30 01:10:51.226: INFO: Pod "pod-subpath-test-secret-g5gc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012505946s
Oct 30 01:10:53.229: INFO: Pod "pod-subpath-test-secret-g5gc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015571331s
Oct 30 01:10:55.237: INFO: Pod "pod-subpath-test-secret-g5gc": Phase="Running", Reason="", readiness=true. Elapsed: 8.02352936s
Oct 30 01:10:57.241: INFO: Pod "pod-subpath-test-secret-g5gc": Phase="Running", Reason="", readiness=true. Elapsed: 10.027443224s
Oct 30 01:10:59.246: INFO: Pod "pod-subpath-test-secret-g5gc": Phase="Running", Reason="", readiness=true. Elapsed: 12.032991083s
Oct 30 01:11:01.250: INFO: Pod "pod-subpath-test-secret-g5gc": Phase="Running", Reason="", readiness=true. Elapsed: 14.036682572s
Oct 30 01:11:03.256: INFO: Pod "pod-subpath-test-secret-g5gc": Phase="Running", Reason="", readiness=true. Elapsed: 16.043024851s
Oct 30 01:11:05.260: INFO: Pod "pod-subpath-test-secret-g5gc": Phase="Running", Reason="", readiness=true. Elapsed: 18.047239418s
Oct 30 01:11:07.263: INFO: Pod "pod-subpath-test-secret-g5gc": Phase="Running", Reason="", readiness=true. Elapsed: 20.050294589s
Oct 30 01:11:09.267: INFO: Pod "pod-subpath-test-secret-g5gc": Phase="Running", Reason="", readiness=true. Elapsed: 22.053609873s
Oct 30 01:11:11.271: INFO: Pod "pod-subpath-test-secret-g5gc": Phase="Running", Reason="", readiness=true. Elapsed: 24.05742798s
Oct 30 01:11:13.274: INFO: Pod "pod-subpath-test-secret-g5gc": Phase="Running", Reason="", readiness=true. Elapsed: 26.061299879s
Oct 30 01:11:15.278: INFO: Pod "pod-subpath-test-secret-g5gc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.064380169s
STEP: Saw pod success
Oct 30 01:11:15.278: INFO: Pod "pod-subpath-test-secret-g5gc" satisfied condition "Succeeded or Failed"
Oct 30 01:11:15.280: INFO: Trying to get logs from node node1 pod pod-subpath-test-secret-g5gc container test-container-subpath-secret-g5gc:
STEP: delete the pod
Oct 30 01:11:15.293: INFO: Waiting for pod pod-subpath-test-secret-g5gc to disappear
Oct 30 01:11:15.295: INFO: Pod pod-subpath-test-secret-g5gc no longer exists
STEP: Deleting pod pod-subpath-test-secret-g5gc
Oct 30 01:11:15.295: INFO: Deleting pod "pod-subpath-test-secret-g5gc" in namespace "subpath-9316"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:11:15.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9316" for this suite.
• [SLOW TEST:28.145 seconds]
[sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":9,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:11:15.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name secret-emptykey-test-513f9fd6-6d2e-4a2e-a898-5a345f98475b
[AfterEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:11:15.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5529" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":2,"skipped":30,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:10:57.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Oct 30 01:10:58.103: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Oct 30 01:11:00.113: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153058, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153058, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153058, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153058, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 30 01:11:02.116: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153058, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153058, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153058, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153058, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 30 01:11:04.117: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153058, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153058, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153058, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153058, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 30 01:11:06.117: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153058, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153058, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153058, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153058, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct 30 01:11:09.123: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Oct 30 01:11:09.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:11:16.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9116" for this suite.
STEP: Destroying namespace "webhook-9116-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:19.009 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":3,"skipped":71,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
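(Roughly what "Registering the custom resource webhook via the AdmissionRegistration API" amounts to, written as a manifest; the CRD group, resource plural, service path, and names are placeholders, and the real registration is done programmatically by the test with its generated CA bundle:)

    kubectl apply -f - <<'EOF'
    apiVersion: admissionregistration.k8s.io/v1
    kind: ValidatingWebhookConfiguration
    metadata:
      name: deny-crd-ops-sketch      # hypothetical name
    webhooks:
    - name: deny-custom-resource.example.com
      admissionReviewVersions: ["v1"]
      sideEffects: None
      rules:
      - apiGroups: ["stable.example.com"]   # placeholder CRD group
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE", "DELETE"]
        resources: ["e2e-test-crds"]        # placeholder resource plural
      clientConfig:
        service: {namespace: webhook-9116, name: e2e-test-webhook, path: /custom-resource}
        # caBundle omitted; the e2e framework injects its generated CA here
    EOF

------------------------------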
loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153060, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:11:06.757: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153060, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153060, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153060, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153060, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:11:08.757: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153060, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153060, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153060, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153060, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 30 01:11:11.763: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Oct 30 01:11:17.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=webhook-7472 attach --namespace=webhook-7472 to-be-attached-pod -i -c=container1' Oct 30 01:11:17.968: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:11:17.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7472" for this suite. STEP: Destroying namespace "webhook-7472-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.523 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":3,"skipped":37,"failed":0} [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:11:18.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should support creating EndpointSlice API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/discovery.k8s.io STEP: getting /apis/discovery.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Oct 30 01:11:18.039: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Oct 30 01:11:18.043: INFO: starting watch STEP: patching STEP: updating Oct 30 01:11:18.052: INFO: waiting for watch events with expected annotations Oct 30 01:11:18.052: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:11:18.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-8349" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":4,"skipped":37,"failed":0} S ------------------------------ [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:10:59.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating server pod server in namespace prestop-2637 STEP: Waiting for pods to come up. 
STEP: Creating tester pod tester in namespace prestop-2637 STEP: Deleting pre-stop pod Oct 30 01:11:19.051: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:11:19.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-2637" for this suite. • [SLOW TEST:19.082 seconds] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":-1,"completed":3,"skipped":40,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:11:15.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on tmpfs Oct 30 01:11:15.436: INFO: Waiting up to 5m0s for pod "pod-5af2ae8b-03e1-4059-a1e6-7498f15acca4" in namespace "emptydir-2557" to be "Succeeded or Failed" Oct 30 01:11:15.438: INFO: Pod "pod-5af2ae8b-03e1-4059-a1e6-7498f15acca4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.347738ms Oct 30 01:11:17.442: INFO: Pod "pod-5af2ae8b-03e1-4059-a1e6-7498f15acca4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006160682s Oct 30 01:11:19.444: INFO: Pod "pod-5af2ae8b-03e1-4059-a1e6-7498f15acca4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008565438s STEP: Saw pod success Oct 30 01:11:19.444: INFO: Pod "pod-5af2ae8b-03e1-4059-a1e6-7498f15acca4" satisfied condition "Succeeded or Failed" Oct 30 01:11:19.447: INFO: Trying to get logs from node node2 pod pod-5af2ae8b-03e1-4059-a1e6-7498f15acca4 container test-container: STEP: delete the pod Oct 30 01:11:19.460: INFO: Waiting for pod pod-5af2ae8b-03e1-4059-a1e6-7498f15acca4 to disappear Oct 30 01:11:19.462: INFO: Pod pod-5af2ae8b-03e1-4059-a1e6-7498f15acca4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:11:19.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2557" for this suite. 
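
Note: the (root,0644,tmpfs) case above boils down to a pod with a memory-backed emptyDir that writes and re-reads a 0644 file. A minimal client-go sketch under that assumption (pod name, namespace, command, and the choice of the suite's busybox image are placeholders, not the test's own fixture):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
				// Write a file as root with mode 0644 into the tmpfs mount,
				// then read it back; the pod ends Succeeded if every step passes.
				Command: []string{"sh", "-c",
					"echo hello > /mnt/f && chmod 0644 /mnt/f && ls -l /mnt/f && cat /mnt/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "scratch",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
	_, err = cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	must(err)
}
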
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":34,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:11:13.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-dc1d82ef-6154-4b50-a41d-1d676a314809 STEP: Creating a pod to test consume configMaps Oct 30 01:11:13.467: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c6dfd185-4288-4033-9498-0c84ee3c18ae" in namespace "projected-6876" to be "Succeeded or Failed" Oct 30 01:11:13.472: INFO: Pod "pod-projected-configmaps-c6dfd185-4288-4033-9498-0c84ee3c18ae": Phase="Pending", Reason="", readiness=false. Elapsed: 5.468322ms Oct 30 01:11:15.476: INFO: Pod "pod-projected-configmaps-c6dfd185-4288-4033-9498-0c84ee3c18ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009163127s Oct 30 01:11:17.479: INFO: Pod "pod-projected-configmaps-c6dfd185-4288-4033-9498-0c84ee3c18ae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012690139s Oct 30 01:11:19.483: INFO: Pod "pod-projected-configmaps-c6dfd185-4288-4033-9498-0c84ee3c18ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016178122s STEP: Saw pod success Oct 30 01:11:19.483: INFO: Pod "pod-projected-configmaps-c6dfd185-4288-4033-9498-0c84ee3c18ae" satisfied condition "Succeeded or Failed" Oct 30 01:11:19.485: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-c6dfd185-4288-4033-9498-0c84ee3c18ae container projected-configmap-volume-test: STEP: delete the pod Oct 30 01:11:19.497: INFO: Waiting for pod pod-projected-configmaps-c6dfd185-4288-4033-9498-0c84ee3c18ae to disappear Oct 30 01:11:19.499: INFO: Pod pod-projected-configmaps-c6dfd185-4288-4033-9498-0c84ee3c18ae no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:11:19.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6876" for this suite. 
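
Note: "consumable in multiple volumes in the same pod" amounts to projecting one ConfigMap through two projected volumes mounted at different paths. A sketch of that shape, not the suite's fixture (ConfigMap name, key, mount paths, and image are placeholders):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

// projectedCM returns a volume that projects the named ConfigMap.
func projectedCM(volName, cmName string) corev1.Volume {
	return corev1.Volume{
		Name: volName,
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
					},
				}},
			},
		},
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)
	ctx, ns := context.TODO(), "default"

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "shared-cm"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	_, err = cs.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{})
	must(err)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-cm-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "projected-configmap-volume-test",
				Image: "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
				// Read the same key through both mounts.
				Command: []string{"sh", "-c", "cat /etc/cm-1/data-1 /etc/cm-2/data-1"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "cfg-one", MountPath: "/etc/cm-1"},
					{Name: "cfg-two", MountPath: "/etc/cm-2"},
				},
			}},
			Volumes: []corev1.Volume{
				projectedCM("cfg-one", "shared-cm"),
				projectedCM("cfg-two", "shared-cm"),
			},
		},
	}
	_, err = cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	must(err)
}
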
• [SLOW TEST:6.081 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ SS ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":35,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:11:13.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 30 01:11:13.457: INFO: Waiting up to 5m0s for pod "downwardapi-volume-888afddb-a088-44e1-b066-d6469ffe113f" in namespace "projected-9626" to be "Succeeded or Failed" Oct 30 01:11:13.461: INFO: Pod "downwardapi-volume-888afddb-a088-44e1-b066-d6469ffe113f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.847619ms Oct 30 01:11:15.464: INFO: Pod "downwardapi-volume-888afddb-a088-44e1-b066-d6469ffe113f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007145535s Oct 30 01:11:17.467: INFO: Pod "downwardapi-volume-888afddb-a088-44e1-b066-d6469ffe113f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009885355s Oct 30 01:11:19.471: INFO: Pod "downwardapi-volume-888afddb-a088-44e1-b066-d6469ffe113f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013831158s Oct 30 01:11:21.475: INFO: Pod "downwardapi-volume-888afddb-a088-44e1-b066-d6469ffe113f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.01770755s STEP: Saw pod success Oct 30 01:11:21.475: INFO: Pod "downwardapi-volume-888afddb-a088-44e1-b066-d6469ffe113f" satisfied condition "Succeeded or Failed" Oct 30 01:11:21.477: INFO: Trying to get logs from node node1 pod downwardapi-volume-888afddb-a088-44e1-b066-d6469ffe113f container client-container: STEP: delete the pod Oct 30 01:11:21.488: INFO: Waiting for pod downwardapi-volume-888afddb-a088-44e1-b066-d6469ffe113f to disappear Oct 30 01:11:21.491: INFO: Pod downwardapi-volume-888afddb-a088-44e1-b066-d6469ffe113f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:11:21.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9626" for this suite. 
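
Note: the behavior checked above is that a downwardAPI volume file backed by resourceFieldRef limits.memory falls back to the node's allocatable memory when the container declares no memory limit. A sketch of such a pod (names and image are placeholders; ContainerName is required for volume-based resourceFieldRef):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
				// No memory limit is declared, so the projected limits.memory
				// value resolves to the node's allocatable memory.
				Command:      []string{"cat", "/etc/podinfo/memory_limit"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				},
			}},
		},
	}
	_, err = cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	must(err)
}
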
• [SLOW TEST:8.074 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":42,"failed":0} SSS ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:10:47.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice W1030 01:10:47.176858 28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 30 01:10:47.177: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Oct 30 01:10:47.180: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: referencing a single matching pod STEP: referencing matching pods with named port STEP: creating empty Endpoints and EndpointSlices for no matching Pods STEP: recreating EndpointSlices after they've been deleted Oct 30 01:11:12.279: INFO: EndpointSlice for Service endpointslice-9604/example-named-port not found [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:11:22.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-9604" for this suite. 
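
Note: the lookup that briefly failed above ("EndpointSlice for Service endpointslice-9604/example-named-port not found") works because the endpoint-slice controller labels every slice it manages with the owning Service's name, so a Service's slices are one label selector away. A minimal client-go sketch; the namespace is a placeholder:

package main

import (
	"context"
	"fmt"

	discoveryv1 "k8s.io/api/discovery/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)

	// LabelServiceName is "kubernetes.io/service-name".
	slices, err := cs.DiscoveryV1().EndpointSlices("default").List(context.TODO(),
		metav1.ListOptions{LabelSelector: discoveryv1.LabelServiceName + "=example-named-port"})
	must(err)
	for _, s := range slices.Items {
		for _, ep := range s.Endpoints {
			fmt.Println(s.Name, s.AddressType, ep.Addresses)
		}
	}
}
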
• [SLOW TEST:35.143 seconds] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":1,"skipped":13,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:11:16.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:11:16.841: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:11:22.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2700" for this suite. • [SLOW TEST:6.040 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":-1,"completed":4,"skipped":100,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:11:22.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if v1 is in available api versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: validating api versions Oct 30 01:11:22.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8846 api-versions' Oct 30 01:11:23.090: INFO: stderr: "" Oct 
30 01:11:23.090: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ncustom.metrics.k8s.io/v1beta1\ndiscovery.k8s.io/v1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nintel.com/v1\nk8s.cni.cncf.io/v1\nmonitoring.coreos.com/v1\nmonitoring.coreos.com/v1alpha1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npolicy/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\ntelemetry.intel.com/v1alpha1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:11:23.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8846" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":-1,"completed":5,"skipped":149,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:11:19.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting a starting resourceVersion STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:11:24.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4247" for this suite. 
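
Note: the ordering property verified above rests on resourceVersion semantics: any watches opened at the same resourceVersion must deliver the same events in the same order. A minimal client-go watch sketch (namespace and resource kind are placeholders; printing one stream makes the order visible):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)
	ns := "default"

	// Record a starting resourceVersion, as the test does.
	list, err := cs.CoreV1().ConfigMaps(ns).List(context.TODO(), metav1.ListOptions{})
	must(err)

	w, err := cs.CoreV1().ConfigMaps(ns).Watch(context.TODO(),
		metav1.ListOptions{ResourceVersion: list.ResourceVersion})
	must(err)
	defer w.Stop()
	for ev := range w.ResultChan() {
		// Skip error/Status objects; real events carry the watched type.
		if cm, ok := ev.Object.(*corev1.ConfigMap); ok {
			fmt.Println(ev.Type, cm.Name, cm.ResourceVersion)
		}
	}
}
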
• [SLOW TEST:5.406 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":4,"skipped":46,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:11:19.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should run through the lifecycle of Pods and PodStatus [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Pod with a static label STEP: watching for Pod to be ready Oct 30 01:11:19.615: INFO: observed Pod pod-test in namespace pods-1316 in phase Pending with labels: map[test-pod-static:true] & conditions [] Oct 30 01:11:19.617: INFO: observed Pod pod-test in namespace pods-1316 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:11:19 +0000 UTC }] Oct 30 01:11:19.625: INFO: observed Pod pod-test in namespace pods-1316 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:11:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:11:19 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:11:19 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:11:19 +0000 UTC }] Oct 30 01:11:21.018: INFO: observed Pod pod-test in namespace pods-1316 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:11:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:11:19 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:11:19 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:11:19 +0000 UTC }] Oct 30 01:11:25.192: INFO: Found Pod pod-test in namespace pods-1316 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:11:19 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:11:22 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:11:22 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:11:19 +0000 UTC }] STEP: patching the Pod with a new Label and updated data Oct 30 01:11:25.208: INFO: observed event type ADDED STEP: getting the Pod and ensuring that it's patched STEP: getting the PodStatus STEP: 
replacing the Pod's status Ready condition to False STEP: check the Pod again to ensure its Ready conditions are False STEP: deleting the Pod via a Collection with a LabelSelector STEP: watching for the Pod to be deleted Oct 30 01:11:25.227: INFO: observed event type ADDED Oct 30 01:11:25.227: INFO: observed event type MODIFIED Oct 30 01:11:25.227: INFO: observed event type MODIFIED Oct 30 01:11:25.227: INFO: observed event type MODIFIED Oct 30 01:11:25.227: INFO: observed event type MODIFIED Oct 30 01:11:25.227: INFO: observed event type MODIFIED Oct 30 01:11:25.227: INFO: observed event type MODIFIED Oct 30 01:11:25.227: INFO: observed event type MODIFIED [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:11:25.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1316" for this suite. • [SLOW TEST:5.665 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should run through the lifecycle of Pods and PodStatus [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":3,"skipped":70,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:11:04.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1548 [It] should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Oct 30 01:11:04.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1411 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --labels=run=e2e-test-httpd-pod' Oct 30 01:11:04.323: INFO: stderr: "" Oct 30 01:11:04.323: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Oct 30 01:11:09.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1411 get pod e2e-test-httpd-pod -o json' Oct 30 01:11:09.551: INFO: stderr: "" Oct 30 01:11:09.551: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"annotations\": {\n \"k8s.v1.cni.cncf.io/network-status\": \"[{\\n \\\"name\\\": \\\"default-cni-network\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.244.4.111\\\"\\n ],\\n \\\"mac\\\": \\\"7a:f5:15:7d:3d:e3\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\n \"k8s.v1.cni.cncf.io/networks-status\": \"[{\\n \\\"name\\\": \\\"default-cni-network\\\",\\n \\\"interface\\\": 
\\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.244.4.111\\\"\\n ],\\n \\\"mac\\\": \\\"7a:f5:15:7d:3d:e3\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\n \"kubernetes.io/psp\": \"collectd\"\n },\n \"creationTimestamp\": \"2021-10-30T01:11:04Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-1411\",\n \"resourceVersion\": \"79698\",\n \"uid\": \"16a18bbc-6a75-43f0-823e-1b8a567e5505\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imagePullPolicy\": \"Always\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"kube-api-access-w6277\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"node2\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"kube-api-access-w6277\",\n \"projected\": {\n \"defaultMode\": 420,\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ],\n \"name\": \"kube-root-ca.crt\"\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n },\n \"path\": \"namespace\"\n }\n ]\n }\n }\n ]\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-10-30T01:11:04Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-10-30T01:11:07Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-10-30T01:11:07Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-10-30T01:11:04Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://af744f3532847e162c2a317058fcfed1b58160ea9efed83f0361954bfeaa5891\",\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imageID\": \"docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-10-30T01:11:06Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.10.190.208\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.4.111\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.4.111\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2021-10-30T01:11:04Z\"\n }\n}\n" STEP: replace 
the image in the pod Oct 30 01:11:09.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1411 replace -f -' Oct 30 01:11:09.865: INFO: stderr: "" Oct 30 01:11:09.865: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/busybox:1.29-1 [AfterEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1552 Oct 30 01:11:09.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1411 delete pods e2e-test-httpd-pod' Oct 30 01:11:25.604: INFO: stderr: "" Oct 30 01:11:25.604: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:11:25.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1411" for this suite. • [SLOW TEST:21.459 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1545 should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":-1,"completed":2,"skipped":36,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:11:21.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:11:21.526: INFO: Creating pod... Oct 30 01:11:21.539: INFO: Pod Quantity: 1 Status: Pending Oct 30 01:11:22.542: INFO: Pod Quantity: 1 Status: Pending Oct 30 01:11:23.542: INFO: Pod Quantity: 1 Status: Pending Oct 30 01:11:24.546: INFO: Pod Quantity: 1 Status: Pending Oct 30 01:11:25.542: INFO: Pod Quantity: 1 Status: Pending Oct 30 01:11:26.543: INFO: Pod Quantity: 1 Status: Pending Oct 30 01:11:27.543: INFO: Pod Quantity: 1 Status: Pending Oct 30 01:11:28.542: INFO: Pod Quantity: 1 Status: Pending Oct 30 01:11:29.542: INFO: Pod Status: Running Oct 30 01:11:29.542: INFO: Creating service... 
Oct 30 01:11:29.547: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-1505/pods/agnhost/proxy/some/path/with/DELETE Oct 30 01:11:29.550: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE Oct 30 01:11:29.550: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-1505/pods/agnhost/proxy/some/path/with/GET Oct 30 01:11:29.553: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET Oct 30 01:11:29.553: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-1505/pods/agnhost/proxy/some/path/with/HEAD Oct 30 01:11:29.555: INFO: http.Client request:HEAD | StatusCode:200 Oct 30 01:11:29.555: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-1505/pods/agnhost/proxy/some/path/with/OPTIONS Oct 30 01:11:29.559: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS Oct 30 01:11:29.559: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-1505/pods/agnhost/proxy/some/path/with/PATCH Oct 30 01:11:29.561: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH Oct 30 01:11:29.561: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-1505/pods/agnhost/proxy/some/path/with/POST Oct 30 01:11:29.563: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST Oct 30 01:11:29.563: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-1505/pods/agnhost/proxy/some/path/with/PUT Oct 30 01:11:29.565: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT Oct 30 01:11:29.565: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-1505/services/test-service/proxy/some/path/with/DELETE Oct 30 01:11:29.571: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE Oct 30 01:11:29.571: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-1505/services/test-service/proxy/some/path/with/GET Oct 30 01:11:29.579: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET Oct 30 01:11:29.579: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-1505/services/test-service/proxy/some/path/with/HEAD Oct 30 01:11:29.586: INFO: http.Client request:HEAD | StatusCode:200 Oct 30 01:11:29.586: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-1505/services/test-service/proxy/some/path/with/OPTIONS Oct 30 01:11:29.594: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS Oct 30 01:11:29.594: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-1505/services/test-service/proxy/some/path/with/PATCH Oct 30 01:11:29.597: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH Oct 30 01:11:29.597: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-1505/services/test-service/proxy/some/path/with/POST Oct 30 01:11:29.600: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST Oct 30 01:11:29.600: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-1505/services/test-service/proxy/some/path/with/PUT Oct 30 01:11:29.603: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT [AfterEach] version v1 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:11:29.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-1505" for this suite. • [SLOW TEST:8.104 seconds] [sig-network] Proxy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":-1,"completed":4,"skipped":45,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:11:23.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-map-d89c82c8-b3f8-4dc4-a23d-060e1622ccee STEP: Creating a pod to test consume configMaps Oct 30 01:11:23.135: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8882af8f-85be-435f-99f5-cf0cb49d0742" in namespace "projected-4786" to be "Succeeded or Failed" Oct 30 01:11:23.137: INFO: Pod "pod-projected-configmaps-8882af8f-85be-435f-99f5-cf0cb49d0742": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046125ms Oct 30 01:11:25.140: INFO: Pod "pod-projected-configmaps-8882af8f-85be-435f-99f5-cf0cb49d0742": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0047552s Oct 30 01:11:27.144: INFO: Pod "pod-projected-configmaps-8882af8f-85be-435f-99f5-cf0cb49d0742": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009087798s Oct 30 01:11:29.148: INFO: Pod "pod-projected-configmaps-8882af8f-85be-435f-99f5-cf0cb49d0742": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012904521s Oct 30 01:11:31.152: INFO: Pod "pod-projected-configmaps-8882af8f-85be-435f-99f5-cf0cb49d0742": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.016947097s STEP: Saw pod success Oct 30 01:11:31.152: INFO: Pod "pod-projected-configmaps-8882af8f-85be-435f-99f5-cf0cb49d0742" satisfied condition "Succeeded or Failed" Oct 30 01:11:31.156: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-8882af8f-85be-435f-99f5-cf0cb49d0742 container agnhost-container: STEP: delete the pod Oct 30 01:11:31.178: INFO: Waiting for pod pod-projected-configmaps-8882af8f-85be-435f-99f5-cf0cb49d0742 to disappear Oct 30 01:11:31.180: INFO: Pod pod-projected-configmaps-8882af8f-85be-435f-99f5-cf0cb49d0742 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:11:31.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4786" for this suite. • [SLOW TEST:8.085 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":150,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:11:25.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 30 01:11:25.602: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Oct 30 01:11:27.610: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153085, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153085, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153085, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153085, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:11:29.614: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153085, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153085, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153085, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153085, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 30 01:11:32.622: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:11:32.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9133" for this suite. STEP: Destroying namespace "webhook-9133-markers" for this suite. 
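
Note: the listing and collection-delete steps above map onto the List and DeleteCollection verbs of the admissionregistration.k8s.io/v1 API. A sketch follows; the label selector is a placeholder (the suite selects its webhook configurations by its own labels):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)

	api := cs.AdmissionregistrationV1().ValidatingWebhookConfigurations()
	sel := metav1.ListOptions{LabelSelector: "purpose=demo"} // placeholder label

	list, err := api.List(context.TODO(), sel)
	must(err)
	fmt.Println("matching webhook configurations:", len(list.Items))

	// Deleting the collection removes every configuration matching the
	// selector; afterwards the previously rejected writes are admitted again,
	// which is what the final ConfigMap creation step above confirms.
	must(api.DeleteCollection(context.TODO(), metav1.DeleteOptions{}, sel))
}
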
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.512 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":4,"skipped":77,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:11:32.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create deployment with httpd image Oct 30 01:11:32.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6041 create -f -' Oct 30 01:11:33.184: INFO: stderr: "" Oct 30 01:11:33.184: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image Oct 30 01:11:33.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6041 diff -f -' Oct 30 01:11:33.546: INFO: rc: 1 Oct 30 01:11:33.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6041 delete -f -' Oct 30 01:11:33.678: INFO: stderr: "" Oct 30 01:11:33.678: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:11:33.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6041" for this suite. 
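
Note: the "rc: 1" above is the expected outcome, not a failure. kubectl diff exits 0 when live and declared objects match, 1 when a difference is found, and greater than 1 on an actual error. A small Go sketch of invoking it and reading the exit code; the manifest path is a placeholder:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "--kubeconfig=/root/.kube/config",
		"diff", "-f", "deployment.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("live state matches the declared manifest")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 1:
		fmt.Println("difference found between live and declared objects")
	default:
		fmt.Println("kubectl diff failed:", err)
	}
}
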
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":5,"skipped":98,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:11:25.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Oct 30 01:11:25.688: INFO: Waiting up to 5m0s for pod "downward-api-1f9fcc46-8047-4dda-a895-905c5a18f73d" in namespace "downward-api-1705" to be "Succeeded or Failed" Oct 30 01:11:25.690: INFO: Pod "downward-api-1f9fcc46-8047-4dda-a895-905c5a18f73d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.191936ms Oct 30 01:11:27.695: INFO: Pod "downward-api-1f9fcc46-8047-4dda-a895-905c5a18f73d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006619941s Oct 30 01:11:29.698: INFO: Pod "downward-api-1f9fcc46-8047-4dda-a895-905c5a18f73d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010235981s Oct 30 01:11:31.704: INFO: Pod "downward-api-1f9fcc46-8047-4dda-a895-905c5a18f73d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015714292s Oct 30 01:11:33.707: INFO: Pod "downward-api-1f9fcc46-8047-4dda-a895-905c5a18f73d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.01923437s STEP: Saw pod success Oct 30 01:11:33.707: INFO: Pod "downward-api-1f9fcc46-8047-4dda-a895-905c5a18f73d" satisfied condition "Succeeded or Failed" Oct 30 01:11:33.710: INFO: Trying to get logs from node node1 pod downward-api-1f9fcc46-8047-4dda-a895-905c5a18f73d container dapi-container: STEP: delete the pod Oct 30 01:11:33.726: INFO: Waiting for pod downward-api-1f9fcc46-8047-4dda-a895-905c5a18f73d to disappear Oct 30 01:11:33.728: INFO: Pod downward-api-1f9fcc46-8047-4dda-a895-905c5a18f73d no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:11:33.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1705" for this suite. 
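
Note: this test is the environment-variable counterpart of the downwardAPI volume case earlier in this run: with no limits declared on the container, limits.cpu and limits.memory resolve to the node's allocatable values. A sketch under that assumption (names and image are placeholders):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)

	// For env vars, resourceFieldRef defaults to the enclosing container,
	// so no ContainerName is needed here.
	limitEnv := func(name, resource string) corev1.EnvVar {
		return corev1.EnvVar{
			Name: name,
			ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: resource},
			},
		}
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-env-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
				Command: []string{"sh", "-c", "echo $CPU_LIMIT $MEMORY_LIMIT"},
				Env: []corev1.EnvVar{
					limitEnv("CPU_LIMIT", "limits.cpu"),
					limitEnv("MEMORY_LIMIT", "limits.memory"),
				},
			}},
		},
	}
	_, err = cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	must(err)
}
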
• [SLOW TEST:8.080 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":60,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:11:33.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pods Set QOS Class /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:11:33.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3395" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":4,"skipped":82,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:11:19.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:11:36.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4047" for this suite. • [SLOW TEST:17.062 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":4,"skipped":75,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:11:31.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Oct 30 01:11:31.228: INFO: Pod name pod-release: Found 0 pods out of 1 Oct 30 01:11:36.231: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:11:37.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9930" for this suite. • [SLOW TEST:6.051 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":7,"skipped":156,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:11:37.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:11:37.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7702" for this suite. 
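The ServiceAccount lifecycle exercised above (create, watch, patch, list by label selector, delete) can be reproduced by hand with kubectl. A sketch with illustrative names; demo-sa and the purpose=demo label are not values from this run:

# Create the ServiceAccount.
kubectl create serviceaccount demo-sa
# Patch a label onto it, mirroring the suite's patch step.
kubectl patch serviceaccount demo-sa -p '{"metadata":{"labels":{"purpose":"demo"}}}'
# Find it again among all ServiceAccounts via a label selector, as the test does.
kubectl get serviceaccounts -l purpose=demo
# Delete it to finish the lifecycle.
kubectl delete serviceaccount demo-sa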
• ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":8,"skipped":220,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:11:29.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-04e9ae5d-be99-4f9c-9371-b1cb231be205 STEP: Creating a pod to test consume configMaps Oct 30 01:11:29.685: INFO: Waiting up to 5m0s for pod "pod-configmaps-92691632-1390-45a6-a0df-1fc0ddbebea0" in namespace "configmap-2566" to be "Succeeded or Failed" Oct 30 01:11:29.687: INFO: Pod "pod-configmaps-92691632-1390-45a6-a0df-1fc0ddbebea0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080824ms Oct 30 01:11:31.691: INFO: Pod "pod-configmaps-92691632-1390-45a6-a0df-1fc0ddbebea0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005386276s Oct 30 01:11:33.694: INFO: Pod "pod-configmaps-92691632-1390-45a6-a0df-1fc0ddbebea0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009157677s Oct 30 01:11:35.699: INFO: Pod "pod-configmaps-92691632-1390-45a6-a0df-1fc0ddbebea0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014105206s Oct 30 01:11:37.702: INFO: Pod "pod-configmaps-92691632-1390-45a6-a0df-1fc0ddbebea0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.016991799s STEP: Saw pod success Oct 30 01:11:37.702: INFO: Pod "pod-configmaps-92691632-1390-45a6-a0df-1fc0ddbebea0" satisfied condition "Succeeded or Failed" Oct 30 01:11:37.705: INFO: Trying to get logs from node node2 pod pod-configmaps-92691632-1390-45a6-a0df-1fc0ddbebea0 container agnhost-container: STEP: delete the pod Oct 30 01:11:37.717: INFO: Waiting for pod pod-configmaps-92691632-1390-45a6-a0df-1fc0ddbebea0 to disappear Oct 30 01:11:37.719: INFO: Pod pod-configmaps-92691632-1390-45a6-a0df-1fc0ddbebea0 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:11:37.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2566" for this suite. 
• [SLOW TEST:8.073 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":68,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:11:24.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should observe PodDisruptionBudget status updated [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for the pdb to be processed STEP: Waiting for all pods to be running Oct 30 01:11:26.552: INFO: running pods: 0 < 3 Oct 30 01:11:28.555: INFO: running pods: 0 < 3 Oct 30 01:11:30.555: INFO: running pods: 0 < 3 Oct 30 01:11:32.554: INFO: running pods: 0 < 3 Oct 30 01:11:34.555: INFO: running pods: 1 < 3 Oct 30 01:11:36.556: INFO: running pods: 1 < 3 [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:11:38.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-9987" for this suite. 
• [SLOW TEST:14.076 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should observe PodDisruptionBudget status updated [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":-1,"completed":5,"skipped":49,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:10:47.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services W1030 01:10:47.226590 30 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 30 01:10:47.226: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Oct 30 01:10:47.228: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-8294 Oct 30 01:10:47.246: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:10:49.249: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:10:51.250: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:10:53.250: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:10:55.251: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) Oct 30 01:10:55.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8294 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Oct 30 01:10:55.957: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" Oct 30 01:10:55.957: INFO: stdout: "iptables" Oct 30 01:10:55.957: INFO: proxyMode: iptables Oct 30 01:10:55.967: INFO: Waiting for pod kube-proxy-mode-detector to disappear Oct 30 01:10:55.969: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-8294 STEP: creating replication controller affinity-clusterip-timeout in namespace services-8294 I1030 01:10:55.979890 30 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-8294, replica count: 3 I1030 01:10:59.031385 30 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1030 01:11:02.031630 30 runners.go:190] affinity-clusterip-timeout Pods: 3 out 
of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1030 01:11:05.032329 30 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 30 01:11:05.036: INFO: Creating new exec pod Oct 30 01:11:10.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8294 exec execpod-affinitys98kh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80' Oct 30 01:11:10.394: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\n" Oct 30 01:11:10.394: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Oct 30 01:11:10.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8294 exec execpod-affinitys98kh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.44.11 80' Oct 30 01:11:10.739: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.44.11 80\nConnection to 10.233.44.11 80 port [tcp/http] succeeded!\n" Oct 30 01:11:10.739: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Oct 30 01:11:10.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8294 exec execpod-affinitys98kh -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.44.11:80/ ; done' Oct 30 01:11:11.036: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.11:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.11:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.11:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.11:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.11:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.11:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.11:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.11:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.11:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.11:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.11:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.11:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.11:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.11:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.11:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.44.11:80/\n" Oct 30 01:11:11.036: INFO: stdout: "\naffinity-clusterip-timeout-vvhkb\naffinity-clusterip-timeout-vvhkb\naffinity-clusterip-timeout-vvhkb\naffinity-clusterip-timeout-vvhkb\naffinity-clusterip-timeout-vvhkb\naffinity-clusterip-timeout-vvhkb\naffinity-clusterip-timeout-vvhkb\naffinity-clusterip-timeout-vvhkb\naffinity-clusterip-timeout-vvhkb\naffinity-clusterip-timeout-vvhkb\naffinity-clusterip-timeout-vvhkb\naffinity-clusterip-timeout-vvhkb\naffinity-clusterip-timeout-vvhkb\naffinity-clusterip-timeout-vvhkb\naffinity-clusterip-timeout-vvhkb\naffinity-clusterip-timeout-vvhkb" Oct 30 01:11:11.036: INFO: Received response from host: affinity-clusterip-timeout-vvhkb Oct 30 01:11:11.036: INFO: Received response from host: affinity-clusterip-timeout-vvhkb Oct 30 01:11:11.036: INFO: Received response 
from host: affinity-clusterip-timeout-vvhkb Oct 30 01:11:11.036: INFO: Received response from host: affinity-clusterip-timeout-vvhkb Oct 30 01:11:11.036: INFO: Received response from host: affinity-clusterip-timeout-vvhkb Oct 30 01:11:11.036: INFO: Received response from host: affinity-clusterip-timeout-vvhkb Oct 30 01:11:11.036: INFO: Received response from host: affinity-clusterip-timeout-vvhkb Oct 30 01:11:11.036: INFO: Received response from host: affinity-clusterip-timeout-vvhkb Oct 30 01:11:11.036: INFO: Received response from host: affinity-clusterip-timeout-vvhkb Oct 30 01:11:11.036: INFO: Received response from host: affinity-clusterip-timeout-vvhkb Oct 30 01:11:11.036: INFO: Received response from host: affinity-clusterip-timeout-vvhkb Oct 30 01:11:11.036: INFO: Received response from host: affinity-clusterip-timeout-vvhkb Oct 30 01:11:11.036: INFO: Received response from host: affinity-clusterip-timeout-vvhkb Oct 30 01:11:11.036: INFO: Received response from host: affinity-clusterip-timeout-vvhkb Oct 30 01:11:11.036: INFO: Received response from host: affinity-clusterip-timeout-vvhkb Oct 30 01:11:11.037: INFO: Received response from host: affinity-clusterip-timeout-vvhkb Oct 30 01:11:11.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8294 exec execpod-affinitys98kh -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.233.44.11:80/' Oct 30 01:11:12.052: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.233.44.11:80/\n" Oct 30 01:11:12.052: INFO: stdout: "affinity-clusterip-timeout-vvhkb" Oct 30 01:11:32.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8294 exec execpod-affinitys98kh -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.233.44.11:80/' Oct 30 01:11:32.518: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.233.44.11:80/\n" Oct 30 01:11:32.518: INFO: stdout: "affinity-clusterip-timeout-ljcjf" Oct 30 01:11:32.518: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-8294, will wait for the garbage collector to delete the pods Oct 30 01:11:32.583: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 3.722962ms Oct 30 01:11:32.684: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 101.09384ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:11:40.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8294" for this suite. 
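In the Services test above, sixteen consecutive requests all landed on affinity-clusterip-timeout-vvhkb, and only after the test slept past the affinity timeout did a request reach a different backend (affinity-clusterip-timeout-ljcjf). That behavior comes from ClientIP session affinity with a timeout. A sketch of such a Service; the selector, ports, and the 10-second timeout are illustrative, since the suite's actual value is not shown in the log:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: affinity-clusterip-timeout
spec:
  selector:
    app: affinity-clusterip-timeout
  ports:
  - port: 80
    targetPort: 9376
  sessionAffinity: ClientIP          # pin each client IP to one backend
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10             # affinity expires after this many idle seconds
EOF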
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:53.209 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":13,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:11:40.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should delete a collection of pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of pods Oct 30 01:11:40.463: INFO: created test-pod-1 Oct 30 01:11:40.471: INFO: created test-pod-2 Oct 30 01:11:40.480: INFO: created test-pod-3 STEP: waiting for all 3 pods to be located STEP: waiting for all pods to be deleted [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:11:40.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1482" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":2,"skipped":27,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:11:33.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 30 01:11:33.896: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9e9156fe-83e7-4eda-a338-044dc685c8e7" in namespace "projected-693" to be "Succeeded or Failed" Oct 30 01:11:33.899: INFO: Pod "downwardapi-volume-9e9156fe-83e7-4eda-a338-044dc685c8e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.951624ms Oct 30 01:11:35.902: INFO: Pod "downwardapi-volume-9e9156fe-83e7-4eda-a338-044dc685c8e7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.00608896s Oct 30 01:11:37.905: INFO: Pod "downwardapi-volume-9e9156fe-83e7-4eda-a338-044dc685c8e7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008622414s Oct 30 01:11:39.909: INFO: Pod "downwardapi-volume-9e9156fe-83e7-4eda-a338-044dc685c8e7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012556472s Oct 30 01:11:41.912: INFO: Pod "downwardapi-volume-9e9156fe-83e7-4eda-a338-044dc685c8e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.016428648s STEP: Saw pod success Oct 30 01:11:41.912: INFO: Pod "downwardapi-volume-9e9156fe-83e7-4eda-a338-044dc685c8e7" satisfied condition "Succeeded or Failed" Oct 30 01:11:41.915: INFO: Trying to get logs from node node2 pod downwardapi-volume-9e9156fe-83e7-4eda-a338-044dc685c8e7 container client-container: STEP: delete the pod Oct 30 01:11:41.926: INFO: Waiting for pod downwardapi-volume-9e9156fe-83e7-4eda-a338-044dc685c8e7 to disappear Oct 30 01:11:41.928: INFO: Pod downwardapi-volume-9e9156fe-83e7-4eda-a338-044dc685c8e7 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:11:41.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-693" for this suite. • [SLOW TEST:8.070 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":105,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:11:37.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:11:37.789: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:11:43.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6788" for this suite. 
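The CustomResourceDefinition test above works against the status sub-resource, which only exists when the CRD enables it per version. A minimal CRD sketch with the status subresource turned on; the group and names here are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: noxus.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: noxus
    singular: noxu
    kind: Noxu
  versions:
  - name: v1
    served: true
    storage: true
    subresources:
      status: {}          # exposes .../noxus/<name>/status for GET/PUT/PATCH
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF

With that in place, reads and writes of an object's status go through /apis/example.com/v1/namespaces/<ns>/noxus/<name>/status rather than the main resource endpoint.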
• [SLOW TEST:5.576 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":-1,"completed":6,"skipped":81,"failed":0} SS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:11:18.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:11:18.111: INFO: The status of Pod test-webserver-ddc68d61-cad3-4ec6-b1e7-3cb6373968e8 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:11:20.114: INFO: The status of Pod test-webserver-ddc68d61-cad3-4ec6-b1e7-3cb6373968e8 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:11:22.114: INFO: The status of Pod test-webserver-ddc68d61-cad3-4ec6-b1e7-3cb6373968e8 is Running (Ready = false) Oct 30 01:11:24.116: INFO: The status of Pod test-webserver-ddc68d61-cad3-4ec6-b1e7-3cb6373968e8 is Running (Ready = false) Oct 30 01:11:26.115: INFO: The status of Pod test-webserver-ddc68d61-cad3-4ec6-b1e7-3cb6373968e8 is Running (Ready = false) Oct 30 01:11:28.117: INFO: The status of Pod test-webserver-ddc68d61-cad3-4ec6-b1e7-3cb6373968e8 is Running (Ready = false) Oct 30 01:11:30.117: INFO: The status of Pod test-webserver-ddc68d61-cad3-4ec6-b1e7-3cb6373968e8 is Running (Ready = false) Oct 30 01:11:32.114: INFO: The status of Pod test-webserver-ddc68d61-cad3-4ec6-b1e7-3cb6373968e8 is Running (Ready = false) Oct 30 01:11:34.115: INFO: The status of Pod test-webserver-ddc68d61-cad3-4ec6-b1e7-3cb6373968e8 is Running (Ready = false) Oct 30 01:11:36.115: INFO: The status of Pod test-webserver-ddc68d61-cad3-4ec6-b1e7-3cb6373968e8 is Running (Ready = false) Oct 30 01:11:38.114: INFO: The status of Pod test-webserver-ddc68d61-cad3-4ec6-b1e7-3cb6373968e8 is Running (Ready = false) Oct 30 01:11:40.116: INFO: The status of Pod test-webserver-ddc68d61-cad3-4ec6-b1e7-3cb6373968e8 is Running (Ready = false) Oct 30 01:11:42.114: INFO: The status of Pod test-webserver-ddc68d61-cad3-4ec6-b1e7-3cb6373968e8 is Running (Ready = false) Oct 30 01:11:44.117: INFO: The status of Pod test-webserver-ddc68d61-cad3-4ec6-b1e7-3cb6373968e8 is Running (Ready = true) Oct 30 01:11:44.118: INFO: Container 
started at 2021-10-30 01:11:21 +0000 UTC, pod became ready at 2021-10-30 01:11:38 +0000 UTC [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:11:44.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4034" for this suite. • [SLOW TEST:26.046 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":38,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:11:36.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Oct 30 01:11:36.668: INFO: Waiting up to 5m0s for pod "downward-api-efa5b4f7-bcba-41be-865e-8a06f0eb3bff" in namespace "downward-api-215" to be "Succeeded or Failed" Oct 30 01:11:36.670: INFO: Pod "downward-api-efa5b4f7-bcba-41be-865e-8a06f0eb3bff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.334714ms Oct 30 01:11:38.673: INFO: Pod "downward-api-efa5b4f7-bcba-41be-865e-8a06f0eb3bff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005017849s Oct 30 01:11:40.678: INFO: Pod "downward-api-efa5b4f7-bcba-41be-865e-8a06f0eb3bff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009697181s Oct 30 01:11:42.680: INFO: Pod "downward-api-efa5b4f7-bcba-41be-865e-8a06f0eb3bff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012287621s Oct 30 01:11:44.684: INFO: Pod "downward-api-efa5b4f7-bcba-41be-865e-8a06f0eb3bff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.016067522s STEP: Saw pod success Oct 30 01:11:44.684: INFO: Pod "downward-api-efa5b4f7-bcba-41be-865e-8a06f0eb3bff" satisfied condition "Succeeded or Failed" Oct 30 01:11:44.686: INFO: Trying to get logs from node node1 pod downward-api-efa5b4f7-bcba-41be-865e-8a06f0eb3bff container dapi-container: STEP: delete the pod Oct 30 01:11:45.212: INFO: Waiting for pod downward-api-efa5b4f7-bcba-41be-865e-8a06f0eb3bff to disappear Oct 30 01:11:45.214: INFO: Pod downward-api-efa5b4f7-bcba-41be-865e-8a06f0eb3bff no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:11:45.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-215" for this suite. 
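The Downward API test above injects the pod's own UID into the container environment and checks the container output. The same wiring by hand, with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dapi-uid-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid   # downward API source for the pod's UID
EOF
kubectl logs dapi-uid-demo   # prints POD_UID=<uid> once the container has run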
• [SLOW TEST:8.586 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":87,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:11:33.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes Oct 30 01:11:33.761: INFO: The status of Pod pod-update-activedeadlineseconds-52b9db37-5636-4ca4-a0dd-95220f906bba is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:11:35.763: INFO: The status of Pod pod-update-activedeadlineseconds-52b9db37-5636-4ca4-a0dd-95220f906bba is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:11:37.764: INFO: The status of Pod pod-update-activedeadlineseconds-52b9db37-5636-4ca4-a0dd-95220f906bba is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:11:39.765: INFO: The status of Pod pod-update-activedeadlineseconds-52b9db37-5636-4ca4-a0dd-95220f906bba is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:11:41.765: INFO: The status of Pod pod-update-activedeadlineseconds-52b9db37-5636-4ca4-a0dd-95220f906bba is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:11:43.766: INFO: The status of Pod pod-update-activedeadlineseconds-52b9db37-5636-4ca4-a0dd-95220f906bba is Running (Ready = true) STEP: verifying the pod is in kubernetes STEP: updating the pod Oct 30 01:11:44.285: INFO: Successfully updated pod "pod-update-activedeadlineseconds-52b9db37-5636-4ca4-a0dd-95220f906bba" Oct 30 01:11:44.285: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-52b9db37-5636-4ca4-a0dd-95220f906bba" in namespace "pods-4556" to be "terminated due to deadline exceeded" Oct 30 01:11:44.287: INFO: Pod "pod-update-activedeadlineseconds-52b9db37-5636-4ca4-a0dd-95220f906bba": Phase="Running", Reason="", readiness=true. Elapsed: 2.076473ms Oct 30 01:11:46.291: INFO: Pod "pod-update-activedeadlineseconds-52b9db37-5636-4ca4-a0dd-95220f906bba": Phase="Running", Reason="", readiness=true. Elapsed: 2.005703805s Oct 30 01:11:48.295: INFO: Pod "pod-update-activedeadlineseconds-52b9db37-5636-4ca4-a0dd-95220f906bba": Phase="Failed", Reason="DeadlineExceeded", readiness=false. 
Elapsed: 4.010358124s Oct 30 01:11:48.295: INFO: Pod "pod-update-activedeadlineseconds-52b9db37-5636-4ca4-a0dd-95220f906bba" satisfied condition "terminated due to deadline exceeded" [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:11:48.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4556" for this suite. • [SLOW TEST:14.576 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:11:37.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Creating a NodePort Service STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota STEP: Ensuring resource quota status captures service creation STEP: Deleting Services STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:11:48.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6435" for this suite. • [SLOW TEST:11.096 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":-1,"completed":9,"skipped":229,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:11:40.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 30 01:11:40.900: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 30 01:11:42.907: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153100, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153100, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153100, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153100, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:11:44.911: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153100, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153100, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153100, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153100, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:11:46.910: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153100, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153100, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153100, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153100, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 30 01:11:49.917: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:11:49.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7080" for this suite. STEP: Destroying namespace "webhook-7080-markers" for this suite. 
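The discovery walk this webhook test performs (group list, then group, then group/version document) can be repeated directly against the API server:

kubectl get --raw /apis
kubectl get --raw /apis/admissionregistration.k8s.io
kubectl get --raw /apis/admissionregistration.k8s.io/v1
# mutatingwebhookconfigurations and validatingwebhookconfigurations should
# appear in the last document's resource list; a shortcut for the same check:
kubectl api-resources --api-group=admissionregistration.k8s.io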
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.441 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":3,"skipped":30,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:11:41.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:11:50.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-3546" for this suite. 
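The Sysctls test above sets kernel.shm_rmid_forced through the pod security context and verifies it from inside the container. A hand-rolled version; the image and names are illustrative, and kernel.shm_rmid_forced is one of the kubelet's default-allowed safe sysctls:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-demo
spec:
  restartPolicy: Never
  securityContext:
    sysctls:
    - name: kernel.shm_rmid_forced
      value: "1"
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "sysctl kernel.shm_rmid_forced"]
EOF
kubectl logs sysctl-demo   # expect: kernel.shm_rmid_forced = 1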
• [SLOW TEST:8.060 seconds] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support sysctls [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":6,"skipped":119,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:11:43.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-map-d8bb6d31-6c2a-44f6-b34d-af2b92fdc1f9 STEP: Creating a pod to test consume secrets Oct 30 01:11:43.384: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4de1491d-fc70-4608-a8df-26c5342652cd" in namespace "projected-1398" to be "Succeeded or Failed" Oct 30 01:11:43.386: INFO: Pod "pod-projected-secrets-4de1491d-fc70-4608-a8df-26c5342652cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.644821ms Oct 30 01:11:45.398: INFO: Pod "pod-projected-secrets-4de1491d-fc70-4608-a8df-26c5342652cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014439187s Oct 30 01:11:47.401: INFO: Pod "pod-projected-secrets-4de1491d-fc70-4608-a8df-26c5342652cd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017117909s Oct 30 01:11:49.404: INFO: Pod "pod-projected-secrets-4de1491d-fc70-4608-a8df-26c5342652cd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.02035418s Oct 30 01:11:51.408: INFO: Pod "pod-projected-secrets-4de1491d-fc70-4608-a8df-26c5342652cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.024008794s STEP: Saw pod success Oct 30 01:11:51.408: INFO: Pod "pod-projected-secrets-4de1491d-fc70-4608-a8df-26c5342652cd" satisfied condition "Succeeded or Failed" Oct 30 01:11:51.410: INFO: Trying to get logs from node node2 pod pod-projected-secrets-4de1491d-fc70-4608-a8df-26c5342652cd container projected-secret-volume-test: STEP: delete the pod Oct 30 01:11:51.431: INFO: Waiting for pod pod-projected-secrets-4de1491d-fc70-4608-a8df-26c5342652cd to disappear Oct 30 01:11:51.434: INFO: Pod pod-projected-secrets-4de1491d-fc70-4608-a8df-26c5342652cd no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:11:51.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1398" for this suite. 
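The Projected secret test above both remaps the secret key to a new file name and sets a per-item file mode. A sketch with illustrative names (demo-secret, the remapped path, mode 0400):

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected/new-path-data-1; cat /etc/projected/new-path-data-1"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/projected
  volumes:
  - name: secret-vol
    projected:
      sources:
      - secret:
          name: demo-secret
          items:
          - key: data-1            # secret key...
            path: new-path-data-1  # ...remapped to a new file name
            mode: 0400             # per-item file mode checked by ls -l
EOF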
• [SLOW TEST:8.107 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":83,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:11:44.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:11:44.168: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-6819cd15-6e09-44f6-ba96-e443dbb0ee74" in namespace "security-context-test-891" to be "Succeeded or Failed" Oct 30 01:11:44.171: INFO: Pod "alpine-nnp-false-6819cd15-6e09-44f6-ba96-e443dbb0ee74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.333743ms Oct 30 01:11:46.173: INFO: Pod "alpine-nnp-false-6819cd15-6e09-44f6-ba96-e443dbb0ee74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005085325s Oct 30 01:11:48.176: INFO: Pod "alpine-nnp-false-6819cd15-6e09-44f6-ba96-e443dbb0ee74": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008159092s Oct 30 01:11:50.180: INFO: Pod "alpine-nnp-false-6819cd15-6e09-44f6-ba96-e443dbb0ee74": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011374025s Oct 30 01:11:52.183: INFO: Pod "alpine-nnp-false-6819cd15-6e09-44f6-ba96-e443dbb0ee74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.014578933s Oct 30 01:11:52.183: INFO: Pod "alpine-nnp-false-6819cd15-6e09-44f6-ba96-e443dbb0ee74" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:11:52.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-891" for this suite. 
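The Security Context test above runs a container with allowPrivilegeEscalation: false and asserts it cannot gain privileges. The flag surfaces in the kernel as the no_new_privs bit, which a simplified check can read directly; this sketch is not the suite's own verification method, which uses a purpose-built test image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nnp-demo
spec:
  restartPolicy: Never
  containers:
  - name: alpine-nnp-false
    image: alpine
    command: ["sh", "-c", "grep NoNewPrivs /proc/self/status"]
    securityContext:
      allowPrivilegeEscalation: false   # sets no_new_privs on the container process
EOF
kubectl logs nnp-demo   # expect: NoNewPrivs: 1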
• [SLOW TEST:8.058 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":44,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:11:51.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Oct 30 01:11:51.556: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-6200 31c576ec-3057-4f52-998e-16291f0f213a 81493 0 2021-10-30 01:11:51 +0000 UTC map[] map[kubernetes.io/psp:collectd] [] [] [{e2e.test Update v1 2021-10-30 01:11:51 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rtw6m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Comm
and:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rtw6m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 30 01:11:51.559: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:11:53.562: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:11:55.563: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:11:57.563: INFO: The status of Pod test-dns-nameservers is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... Oct 30 01:11:57.564: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-6200 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 01:11:57.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Verifying customized DNS server is configured on pod... 
Oct 30 01:11:57.665: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-6200 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 01:11:57.665: INFO: >>> kubeConfig: /root/.kube/config Oct 30 01:11:57.762: INFO: Deleting pod test-dns-nameservers... [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:11:57.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6200" for this suite. • [SLOW TEST:6.250 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":8,"skipped":132,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:11:57.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace [It] should support RuntimeClasses API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/node.k8s.io STEP: getting /apis/node.k8s.io/v1 STEP: creating STEP: watching Oct 30 01:11:57.840: INFO: starting watch STEP: getting STEP: listing STEP: patching STEP: updating Oct 30 01:11:57.862: INFO: waiting for watch events with expected annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:11:57.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-2574" for this suite. 
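------------------------------
Note: the RuntimeClasses API-operations test torn down above drives the node.k8s.io/v1 API through create, get, list, watch, patch, update, delete, and deletecollection. A minimal sketch of the same round trip with kubectl; the class and handler names here are illustrative, not taken from the test:

cat <<'EOF' | kubectl apply -f -
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: demo-runtime            # illustrative name
handler: runc                   # must match a handler configured in the node's CRI runtime
EOF
kubectl get runtimeclass demo-runtime -o yaml                    # get/list
kubectl patch runtimeclass demo-runtime --type=merge \
  -p '{"metadata":{"annotations":{"patched":"true"}}}'           # patch
kubectl delete runtimeclass demo-runtime                         # delete
------------------------------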
• ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":-1,"completed":9,"skipped":146,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:11:48.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Oct 30 01:11:48.594: INFO: The status of Pod labelsupdateabf217e8-de7e-4028-89ec-e456e7f6c4ce is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:11:50.597: INFO: The status of Pod labelsupdateabf217e8-de7e-4028-89ec-e456e7f6c4ce is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:11:52.597: INFO: The status of Pod labelsupdateabf217e8-de7e-4028-89ec-e456e7f6c4ce is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:11:54.599: INFO: The status of Pod labelsupdateabf217e8-de7e-4028-89ec-e456e7f6c4ce is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:11:56.597: INFO: The status of Pod labelsupdateabf217e8-de7e-4028-89ec-e456e7f6c4ce is Running (Ready = true) Oct 30 01:11:57.114: INFO: Successfully updated pod "labelsupdateabf217e8-de7e-4028-89ec-e456e7f6c4ce" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:11:59.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-995" for this suite. 
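------------------------------
Note: the projected-downwardAPI test above waits for a pod whose labels are exposed through a projected volume, updates the labels, and expects the kubelet to rewrite the mounted file. A minimal sketch of such a pod, assuming a busybox image; all names are illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo
  labels:
    key: value-1
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
EOF
# Once the pod is Running, change a label; the kubelet refreshes the file:
kubectl label pod labelsupdate-demo key=value-2 --overwrite
------------------------------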
• [SLOW TEST:10.577 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":246,"failed":0} SSSSSS ------------------------------ {"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":121,"failed":0} [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:11:48.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:11:48.326: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Oct 30 01:11:56.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9018 --namespace=crd-publish-openapi-9018 create -f -' Oct 30 01:11:56.854: INFO: stderr: "" Oct 30 01:11:56.854: INFO: stdout: "e2e-test-crd-publish-openapi-4773-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Oct 30 01:11:56.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9018 --namespace=crd-publish-openapi-9018 delete e2e-test-crd-publish-openapi-4773-crds test-cr' Oct 30 01:11:57.016: INFO: stderr: "" Oct 30 01:11:57.016: INFO: stdout: "e2e-test-crd-publish-openapi-4773-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Oct 30 01:11:57.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9018 --namespace=crd-publish-openapi-9018 apply -f -' Oct 30 01:11:57.320: INFO: stderr: "" Oct 30 01:11:57.320: INFO: stdout: "e2e-test-crd-publish-openapi-4773-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Oct 30 01:11:57.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9018 --namespace=crd-publish-openapi-9018 delete e2e-test-crd-publish-openapi-4773-crds test-cr' Oct 30 01:11:57.470: INFO: stderr: "" Oct 30 01:11:57.470: INFO: stdout: "e2e-test-crd-publish-openapi-4773-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Oct 30 01:11:57.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9018 explain e2e-test-crd-publish-openapi-4773-crds' Oct 30 01:11:57.806: INFO: stderr: "" Oct 30 01:11:57.806: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4773-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI 
[Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:12:01.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9018" for this suite. • [SLOW TEST:13.059 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":7,"skipped":121,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:12:01.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 30 01:12:01.426: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c9d8bfb1-e318-4838-b1c1-6acef149c8ee" in namespace "projected-4720" to be "Succeeded or Failed" Oct 30 01:12:01.428: INFO: Pod "downwardapi-volume-c9d8bfb1-e318-4838-b1c1-6acef149c8ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040923ms Oct 30 01:12:03.432: INFO: Pod "downwardapi-volume-c9d8bfb1-e318-4838-b1c1-6acef149c8ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005487925s Oct 30 01:12:05.436: INFO: Pod "downwardapi-volume-c9d8bfb1-e318-4838-b1c1-6acef149c8ee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010277382s Oct 30 01:12:07.440: INFO: Pod "downwardapi-volume-c9d8bfb1-e318-4838-b1c1-6acef149c8ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013347835s STEP: Saw pod success Oct 30 01:12:07.440: INFO: Pod "downwardapi-volume-c9d8bfb1-e318-4838-b1c1-6acef149c8ee" satisfied condition "Succeeded or Failed" Oct 30 01:12:07.442: INFO: Trying to get logs from node node1 pod downwardapi-volume-c9d8bfb1-e318-4838-b1c1-6acef149c8ee container client-container: STEP: delete the pod Oct 30 01:12:07.454: INFO: Waiting for pod downwardapi-volume-c9d8bfb1-e318-4838-b1c1-6acef149c8ee to disappear Oct 30 01:12:07.456: INFO: Pod downwardapi-volume-c9d8bfb1-e318-4838-b1c1-6acef149c8ee no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:12:07.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4720" for this suite. 
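------------------------------
Note: the "should provide container's memory request" test above runs a pod to completion and checks that the memory request landed in a downward-API file. The key piece is resourceFieldRef with a divisor: with requests.memory=32Mi and divisor 1Mi the file contains "32". A sketch with illustrative names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
              divisor: 1Mi    # scale bytes to Mi, so the file reads "32"
EOF
# kubectl logs downwardapi-volume-demo  ->  32
------------------------------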
• [SLOW TEST:6.069 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":138,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:11:57.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Oct 30 01:11:57.948: INFO: The status of Pod annotationupdate3fa3e31b-c928-498a-9d2e-dfb22373b957 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:11:59.953: INFO: The status of Pod annotationupdate3fa3e31b-c928-498a-9d2e-dfb22373b957 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:12:01.952: INFO: The status of Pod annotationupdate3fa3e31b-c928-498a-9d2e-dfb22373b957 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:12:03.953: INFO: The status of Pod annotationupdate3fa3e31b-c928-498a-9d2e-dfb22373b957 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:12:05.952: INFO: The status of Pod annotationupdate3fa3e31b-c928-498a-9d2e-dfb22373b957 is Running (Ready = true) Oct 30 01:12:06.469: INFO: Successfully updated pod "annotationupdate3fa3e31b-c928-498a-9d2e-dfb22373b957" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:12:10.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-139" for this suite. 
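------------------------------
Note: "should update annotations on modification" above is the same pattern as the labels test, via a plain downwardAPI volume; the full metadata.annotations map is only exposed through volumes, not environment variables. A sketch of the volume and the update (illustrative names):

  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations

# After the pod is Running, the kubelet rewrites /etc/podinfo/annotations:
kubectl annotate pod annotationupdate-demo builder=alice --overwrite
------------------------------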
• [SLOW TEST:12.587 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":156,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:12:07.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 30 01:12:07.505: INFO: Waiting up to 5m0s for pod "downwardapi-volume-484e7596-c8fe-4210-b327-1a59e848ec2d" in namespace "downward-api-109" to be "Succeeded or Failed" Oct 30 01:12:07.507: INFO: Pod "downwardapi-volume-484e7596-c8fe-4210-b327-1a59e848ec2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.565887ms Oct 30 01:12:09.510: INFO: Pod "downwardapi-volume-484e7596-c8fe-4210-b327-1a59e848ec2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005154914s Oct 30 01:12:11.514: INFO: Pod "downwardapi-volume-484e7596-c8fe-4210-b327-1a59e848ec2d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009684624s Oct 30 01:12:13.518: INFO: Pod "downwardapi-volume-484e7596-c8fe-4210-b327-1a59e848ec2d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013266718s STEP: Saw pod success Oct 30 01:12:13.518: INFO: Pod "downwardapi-volume-484e7596-c8fe-4210-b327-1a59e848ec2d" satisfied condition "Succeeded or Failed" Oct 30 01:12:13.520: INFO: Trying to get logs from node node2 pod downwardapi-volume-484e7596-c8fe-4210-b327-1a59e848ec2d container client-container: STEP: delete the pod Oct 30 01:12:13.532: INFO: Waiting for pod downwardapi-volume-484e7596-c8fe-4210-b327-1a59e848ec2d to disappear Oct 30 01:12:13.534: INFO: Pod downwardapi-volume-484e7596-c8fe-4210-b327-1a59e848ec2d no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:12:13.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-109" for this suite. 
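------------------------------
Note: "should set mode on item file" above verifies per-item file permissions in a downwardAPI volume. In the manifest that is the items[].mode field (octal), with defaultMode as the volume-wide fallback. A sketch, assuming busybox:

  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -lL /etc/podinfo/podname"]   # -L follows the volume's atomic-update symlink
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0644
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400            # this one file becomes r--------
------------------------------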
• [SLOW TEST:6.071 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":141,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:11:50.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-684.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-684.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-684.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-684.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 30 01:11:58.131: INFO: DNS probes using dns-test-d75de63e-b568-489b-a26c-0cac9ce90682 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-684.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-684.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-684.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-684.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 30 01:12:08.175: INFO: DNS probes using dns-test-d2f07b6e-6e97-4c1f-a1d5-2870c6f7361b succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-684.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-684.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-684.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-684.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 30 01:12:14.222: INFO: DNS probes using dns-test-2dad6606-a013-4f85-bc1c-068a3c88c6a5 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:12:14.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-684" for this suite. • [SLOW TEST:24.208 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:10:47.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap W1030 01:10:47.248104 39 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 30 01:10:47.248: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Oct 30 01:10:47.250: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-upd-9925fa84-ae08-4888-a307-8d8dd2423548 STEP: Creating the pod Oct 30 01:10:47.272: INFO: The status of Pod pod-configmaps-2a7bed0e-14cd-477d-b81c-7e626a1b7bb9 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:10:49.275: INFO: The status of Pod pod-configmaps-2a7bed0e-14cd-477d-b81c-7e626a1b7bb9 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:10:51.276: INFO: The status of Pod pod-configmaps-2a7bed0e-14cd-477d-b81c-7e626a1b7bb9 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:10:53.276: INFO: The status of Pod pod-configmaps-2a7bed0e-14cd-477d-b81c-7e626a1b7bb9 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:10:55.276: INFO: The status of Pod pod-configmaps-2a7bed0e-14cd-477d-b81c-7e626a1b7bb9 is Running (Ready = true) STEP: Updating configmap configmap-test-upd-9925fa84-ae08-4888-a307-8d8dd2423548 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:12:15.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-236" for this suite. 
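------------------------------
Note: the 88-second runtime of the ConfigMap update test above is mostly the wait for propagation: mounted ConfigMaps are refreshed on the kubelet's sync loop (plus its watch/TTL cache), so an update becomes visible eventually, typically within tens of seconds, not immediately. A sketch of observing the update (illustrative names):

kubectl create configmap demo-cm --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cm-watch-demo
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/cm/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: demo-cm
EOF
kubectl patch configmap demo-cm -p '{"data":{"data-1":"value-2"}}'
# kubectl logs -f cm-watch-demo eventually switches from value-1 to value-2
------------------------------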
• [SLOW TEST:88.082 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":28,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:12:10.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Oct 30 01:12:10.547: INFO: The status of Pod annotationupdate5944ef4e-4f4d-4992-8a84-b90355be6754 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:12:12.550: INFO: The status of Pod annotationupdate5944ef4e-4f4d-4992-8a84-b90355be6754 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:12:14.552: INFO: The status of Pod annotationupdate5944ef4e-4f4d-4992-8a84-b90355be6754 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:12:16.552: INFO: The status of Pod annotationupdate5944ef4e-4f4d-4992-8a84-b90355be6754 is Running (Ready = true) Oct 30 01:12:17.070: INFO: Successfully updated pod "annotationupdate5944ef4e-4f4d-4992-8a84-b90355be6754" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:12:21.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9904" for this suite. 
• [SLOW TEST:10.587 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":164,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:12:15.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-548cc708-da3a-40f6-906f-44b2703d10e4 STEP: Creating a pod to test consume configMaps Oct 30 01:12:15.397: INFO: Waiting up to 5m0s for pod "pod-configmaps-95c2e200-39e8-43ff-bcae-dbfab71a5b79" in namespace "configmap-2107" to be "Succeeded or Failed" Oct 30 01:12:15.401: INFO: Pod "pod-configmaps-95c2e200-39e8-43ff-bcae-dbfab71a5b79": Phase="Pending", Reason="", readiness=false. Elapsed: 4.151539ms Oct 30 01:12:17.404: INFO: Pod "pod-configmaps-95c2e200-39e8-43ff-bcae-dbfab71a5b79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007073634s Oct 30 01:12:19.408: INFO: Pod "pod-configmaps-95c2e200-39e8-43ff-bcae-dbfab71a5b79": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010868178s Oct 30 01:12:21.413: INFO: Pod "pod-configmaps-95c2e200-39e8-43ff-bcae-dbfab71a5b79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015192802s STEP: Saw pod success Oct 30 01:12:21.413: INFO: Pod "pod-configmaps-95c2e200-39e8-43ff-bcae-dbfab71a5b79" satisfied condition "Succeeded or Failed" Oct 30 01:12:21.415: INFO: Trying to get logs from node node2 pod pod-configmaps-95c2e200-39e8-43ff-bcae-dbfab71a5b79 container agnhost-container: STEP: delete the pod Oct 30 01:12:21.549: INFO: Waiting for pod pod-configmaps-95c2e200-39e8-43ff-bcae-dbfab71a5b79 to disappear Oct 30 01:12:21.551: INFO: Pod pod-configmaps-95c2e200-39e8-43ff-bcae-dbfab71a5b79 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:12:21.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2107" for this suite. 
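------------------------------
Note: "consumable from pods in volume as non-root" above runs the consuming container under a non-root UID and checks it can still read the mounted key. The relevant knobs are the pod securityContext and the volume's defaultMode. A sketch (illustrative names, busybox assumed):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cm-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000           # non-root UID
    runAsNonRoot: true
  containers:
  - name: agnhost-container
    image: busybox
    command: ["sh", "-c", "id -u && cat /etc/cm/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: demo-cm
      defaultMode: 0444       # world-readable so UID 1000 can read it
EOF
------------------------------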
• [SLOW TEST:6.195 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":57,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:11:49.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-8640 STEP: creating service affinity-clusterip-transition in namespace services-8640 STEP: creating replication controller affinity-clusterip-transition in namespace services-8640 I1030 01:11:50.008516 30 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-8640, replica count: 3 I1030 01:11:53.059738 30 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1030 01:11:56.060016 30 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1030 01:11:59.062122 30 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 30 01:11:59.066: INFO: Creating new exec pod Oct 30 01:12:08.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8640 exec execpod-affinitykvxm8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Oct 30 01:12:08.366: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n" Oct 30 01:12:08.367: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Oct 30 01:12:08.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8640 exec execpod-affinitykvxm8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.43.86 80' Oct 30 01:12:08.787: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.43.86 80\nConnection to 10.233.43.86 80 port [tcp/http] succeeded!\n" Oct 30 01:12:08.787: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Oct 30 01:12:08.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8640 exec 
execpod-affinitykvxm8 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.43.86:80/ ; done' Oct 30 01:12:09.313: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.86:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.86:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.86:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.86:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.86:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.86:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.86:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.86:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.86:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.86:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.86:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.86:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.86:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.86:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.86:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.86:80/\n" Oct 30 01:12:09.314: INFO: stdout: "\naffinity-clusterip-transition-mbl78\naffinity-clusterip-transition-pv965\naffinity-clusterip-transition-pv965\naffinity-clusterip-transition-mbl78\naffinity-clusterip-transition-mbl78\naffinity-clusterip-transition-qj8vt\naffinity-clusterip-transition-mbl78\naffinity-clusterip-transition-pv965\naffinity-clusterip-transition-pv965\naffinity-clusterip-transition-pv965\naffinity-clusterip-transition-qj8vt\naffinity-clusterip-transition-mbl78\naffinity-clusterip-transition-pv965\naffinity-clusterip-transition-qj8vt\naffinity-clusterip-transition-qj8vt\naffinity-clusterip-transition-pv965" Oct 30 01:12:09.314: INFO: Received response from host: affinity-clusterip-transition-mbl78 Oct 30 01:12:09.314: INFO: Received response from host: affinity-clusterip-transition-pv965 Oct 30 01:12:09.314: INFO: Received response from host: affinity-clusterip-transition-pv965 Oct 30 01:12:09.314: INFO: Received response from host: affinity-clusterip-transition-mbl78 Oct 30 01:12:09.314: INFO: Received response from host: affinity-clusterip-transition-mbl78 Oct 30 01:12:09.314: INFO: Received response from host: affinity-clusterip-transition-qj8vt Oct 30 01:12:09.314: INFO: Received response from host: affinity-clusterip-transition-mbl78 Oct 30 01:12:09.314: INFO: Received response from host: affinity-clusterip-transition-pv965 Oct 30 01:12:09.314: INFO: Received response from host: affinity-clusterip-transition-pv965 Oct 30 01:12:09.314: INFO: Received response from host: affinity-clusterip-transition-pv965 Oct 30 01:12:09.314: INFO: Received response from host: affinity-clusterip-transition-qj8vt Oct 30 01:12:09.314: INFO: Received response from host: affinity-clusterip-transition-mbl78 Oct 30 01:12:09.314: INFO: Received response from host: affinity-clusterip-transition-pv965 Oct 30 01:12:09.314: INFO: Received response from host: affinity-clusterip-transition-qj8vt Oct 30 01:12:09.314: INFO: Received response from host: affinity-clusterip-transition-qj8vt Oct 30 01:12:09.314: INFO: Received response from host: affinity-clusterip-transition-pv965 Oct 30 01:12:09.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8640 exec execpod-affinitykvxm8 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s 
--connect-timeout 2 http://10.233.43.86:80/ ; done' Oct 30 01:12:09.654: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.86:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.86:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.86:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.86:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.86:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.86:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.86:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.86:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.86:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.86:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.86:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.86:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.86:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.86:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.86:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.43.86:80/\n" Oct 30 01:12:09.654: INFO: stdout: "\naffinity-clusterip-transition-mbl78\naffinity-clusterip-transition-mbl78\naffinity-clusterip-transition-mbl78\naffinity-clusterip-transition-mbl78\naffinity-clusterip-transition-mbl78\naffinity-clusterip-transition-mbl78\naffinity-clusterip-transition-mbl78\naffinity-clusterip-transition-mbl78\naffinity-clusterip-transition-mbl78\naffinity-clusterip-transition-mbl78\naffinity-clusterip-transition-mbl78\naffinity-clusterip-transition-mbl78\naffinity-clusterip-transition-mbl78\naffinity-clusterip-transition-mbl78\naffinity-clusterip-transition-mbl78\naffinity-clusterip-transition-mbl78" Oct 30 01:12:09.654: INFO: Received response from host: affinity-clusterip-transition-mbl78 Oct 30 01:12:09.654: INFO: Received response from host: affinity-clusterip-transition-mbl78 Oct 30 01:12:09.654: INFO: Received response from host: affinity-clusterip-transition-mbl78 Oct 30 01:12:09.654: INFO: Received response from host: affinity-clusterip-transition-mbl78 Oct 30 01:12:09.654: INFO: Received response from host: affinity-clusterip-transition-mbl78 Oct 30 01:12:09.654: INFO: Received response from host: affinity-clusterip-transition-mbl78 Oct 30 01:12:09.654: INFO: Received response from host: affinity-clusterip-transition-mbl78 Oct 30 01:12:09.654: INFO: Received response from host: affinity-clusterip-transition-mbl78 Oct 30 01:12:09.654: INFO: Received response from host: affinity-clusterip-transition-mbl78 Oct 30 01:12:09.654: INFO: Received response from host: affinity-clusterip-transition-mbl78 Oct 30 01:12:09.654: INFO: Received response from host: affinity-clusterip-transition-mbl78 Oct 30 01:12:09.654: INFO: Received response from host: affinity-clusterip-transition-mbl78 Oct 30 01:12:09.654: INFO: Received response from host: affinity-clusterip-transition-mbl78 Oct 30 01:12:09.654: INFO: Received response from host: affinity-clusterip-transition-mbl78 Oct 30 01:12:09.654: INFO: Received response from host: affinity-clusterip-transition-mbl78 Oct 30 01:12:09.654: INFO: Received response from host: affinity-clusterip-transition-mbl78 Oct 30 01:12:09.654: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-8640, will wait for the garbage collector to delete the pods Oct 30 01:12:09.717: INFO: Deleting ReplicationController affinity-clusterip-transition 
took: 3.336101ms Oct 30 01:12:09.818: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 100.998529ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:12:23.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8640" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:33.058 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":4,"skipped":40,"failed":0} [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:12:23.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should support proxy with --port 0 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: starting the proxy server Oct 30 01:12:23.058: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3519 proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:12:23.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3519" for this suite. 
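------------------------------
Note: the session-affinity switch is visible in the two curl loops logged above: with affinity off, the sixteen requests rotate across the three affinity-clusterip-transition backends; after the switch, the same loop returns only affinity-clusterip-transition-mbl78. The switch itself is a single field on the Service, togglable in place:

kubectl patch svc affinity-clusterip-transition --type=merge \
  -p '{"spec":{"sessionAffinity":"ClientIP","sessionAffinityConfig":{"clientIP":{"timeoutSeconds":10800}}}}'
# and back (the config must be cleared when affinity is None):
kubectl patch svc affinity-clusterip-transition --type=merge \
  -p '{"spec":{"sessionAffinity":"None","sessionAffinityConfig":null}}'
------------------------------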
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":-1,"completed":5,"skipped":40,"failed":0} SSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":7,"skipped":124,"failed":0} [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:12:14.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:12:14.276: INFO: The status of Pod server-envvars-704ac2cb-60f4-4b35-bd94-0c63a1725a49 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:12:16.278: INFO: The status of Pod server-envvars-704ac2cb-60f4-4b35-bd94-0c63a1725a49 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:12:18.279: INFO: The status of Pod server-envvars-704ac2cb-60f4-4b35-bd94-0c63a1725a49 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:12:20.280: INFO: The status of Pod server-envvars-704ac2cb-60f4-4b35-bd94-0c63a1725a49 is Running (Ready = true) Oct 30 01:12:20.299: INFO: Waiting up to 5m0s for pod "client-envvars-bec0cf34-950e-45d0-a294-429a125a5ad9" in namespace "pods-3158" to be "Succeeded or Failed" Oct 30 01:12:20.301: INFO: Pod "client-envvars-bec0cf34-950e-45d0-a294-429a125a5ad9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.427082ms Oct 30 01:12:22.305: INFO: Pod "client-envvars-bec0cf34-950e-45d0-a294-429a125a5ad9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006244894s Oct 30 01:12:24.309: INFO: Pod "client-envvars-bec0cf34-950e-45d0-a294-429a125a5ad9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010347018s Oct 30 01:12:26.314: INFO: Pod "client-envvars-bec0cf34-950e-45d0-a294-429a125a5ad9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015368876s STEP: Saw pod success Oct 30 01:12:26.314: INFO: Pod "client-envvars-bec0cf34-950e-45d0-a294-429a125a5ad9" satisfied condition "Succeeded or Failed" Oct 30 01:12:26.318: INFO: Trying to get logs from node node1 pod client-envvars-bec0cf34-950e-45d0-a294-429a125a5ad9 container env3cont: STEP: delete the pod Oct 30 01:12:26.335: INFO: Waiting for pod client-envvars-bec0cf34-950e-45d0-a294-429a125a5ad9 to disappear Oct 30 01:12:26.337: INFO: Pod client-envvars-bec0cf34-950e-45d0-a294-429a125a5ad9 no longer exists [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:12:26.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3158" for this suite. 
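------------------------------
Note: "should contain environment variables for services" above starts a server pod behind a service, then starts a client pod and asserts the kubelet injected {SVCNAME}_SERVICE_HOST / {SVCNAME}_SERVICE_PORT (plus docker-link-style variables when enableServiceLinks is true). Only services that already exist when the pod starts are injected. A sketch with illustrative names (dashes in the service name become underscores, uppercased):

kubectl create service clusterip demo-svc --tcp=8080:8080
kubectl run env-client --image=busybox --restart=Never -- sh -c 'env | grep ^DEMO_SVC_'
kubectl logs env-client
# DEMO_SVC_SERVICE_HOST=<cluster IP>
# DEMO_SVC_SERVICE_PORT=8080
------------------------------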
• [SLOW TEST:12.101 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":124,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:12:23.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 30 01:12:23.232: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a791a152-3dbd-413a-8f24-89cd8cebf06b" in namespace "downward-api-7940" to be "Succeeded or Failed" Oct 30 01:12:23.234: INFO: Pod "downwardapi-volume-a791a152-3dbd-413a-8f24-89cd8cebf06b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.201821ms Oct 30 01:12:25.238: INFO: Pod "downwardapi-volume-a791a152-3dbd-413a-8f24-89cd8cebf06b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006101299s Oct 30 01:12:27.242: INFO: Pod "downwardapi-volume-a791a152-3dbd-413a-8f24-89cd8cebf06b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009983345s STEP: Saw pod success Oct 30 01:12:27.242: INFO: Pod "downwardapi-volume-a791a152-3dbd-413a-8f24-89cd8cebf06b" satisfied condition "Succeeded or Failed" Oct 30 01:12:27.244: INFO: Trying to get logs from node node1 pod downwardapi-volume-a791a152-3dbd-413a-8f24-89cd8cebf06b container client-container: STEP: delete the pod Oct 30 01:12:27.258: INFO: Waiting for pod downwardapi-volume-a791a152-3dbd-413a-8f24-89cd8cebf06b to disappear Oct 30 01:12:27.260: INFO: Pod downwardapi-volume-a791a152-3dbd-413a-8f24-89cd8cebf06b no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:12:27.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7940" for this suite. 
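------------------------------
Note: the downward-api-7940 test above repeats the memory-request check through a plain downwardAPI volume. The same value can also be injected as an environment variable via resourceFieldRef, where divisor again sets the unit; a sketch of the container spec:

  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "echo $MEMORY_REQUEST_MB"]
    resources:
      requests:
        memory: 64Mi
    env:
    - name: MEMORY_REQUEST_MB
      valueFrom:
        resourceFieldRef:     # containerName defaults to this container
          resource: requests.memory
          divisor: 1Mi        # 64Mi / 1Mi -> "64"
------------------------------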
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":52,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:12:21.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should update/patch PodDisruptionBudget status [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for the pdb to be processed STEP: Updating PodDisruptionBudget status STEP: Waiting for all pods to be running Oct 30 01:12:23.629: INFO: running pods: 0 < 1 Oct 30 01:12:25.632: INFO: running pods: 0 < 1 STEP: locating a running pod STEP: Waiting for the pdb to be processed STEP: Patching PodDisruptionBudget status STEP: Waiting for the pdb to be processed [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:12:27.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-2386" for this suite. • [SLOW TEST:6.088 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update/patch PodDisruptionBudget status [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":3,"skipped":72,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:12:27.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap configmap-349/configmap-test-7fd7c290-91db-46b1-9da1-877f59280290 STEP: Creating a pod to test consume configMaps Oct 30 01:12:27.367: INFO: Waiting up to 5m0s for pod "pod-configmaps-0cf90adb-fe99-4270-a1b1-20a6100e8cb6" in namespace "configmap-349" to be "Succeeded or Failed" Oct 30 01:12:27.369: INFO: Pod "pod-configmaps-0cf90adb-fe99-4270-a1b1-20a6100e8cb6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.388801ms Oct 30 01:12:29.373: INFO: Pod "pod-configmaps-0cf90adb-fe99-4270-a1b1-20a6100e8cb6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.00540903s Oct 30 01:12:31.376: INFO: Pod "pod-configmaps-0cf90adb-fe99-4270-a1b1-20a6100e8cb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008526151s STEP: Saw pod success Oct 30 01:12:31.376: INFO: Pod "pod-configmaps-0cf90adb-fe99-4270-a1b1-20a6100e8cb6" satisfied condition "Succeeded or Failed" Oct 30 01:12:31.378: INFO: Trying to get logs from node node2 pod pod-configmaps-0cf90adb-fe99-4270-a1b1-20a6100e8cb6 container env-test: STEP: delete the pod Oct 30 01:12:31.388: INFO: Waiting for pod pod-configmaps-0cf90adb-fe99-4270-a1b1-20a6100e8cb6 to disappear Oct 30 01:12:31.390: INFO: Pod pod-configmaps-0cf90adb-fe99-4270-a1b1-20a6100e8cb6 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:12:31.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-349" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":88,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:12:31.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap configmap-7666/configmap-test-9d7fdd53-4ed9-4e78-939c-f5426b4fdf95 STEP: Creating a pod to test consume configMaps Oct 30 01:12:31.456: INFO: Waiting up to 5m0s for pod "pod-configmaps-c9b246ea-c622-4682-8968-7385f3bed3ca" in namespace "configmap-7666" to be "Succeeded or Failed" Oct 30 01:12:31.458: INFO: Pod "pod-configmaps-c9b246ea-c622-4682-8968-7385f3bed3ca": Phase="Pending", Reason="", readiness=false. Elapsed: 1.869602ms Oct 30 01:12:33.462: INFO: Pod "pod-configmaps-c9b246ea-c622-4682-8968-7385f3bed3ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005872327s Oct 30 01:12:35.465: INFO: Pod "pod-configmaps-c9b246ea-c622-4682-8968-7385f3bed3ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0097606s STEP: Saw pod success Oct 30 01:12:35.465: INFO: Pod "pod-configmaps-c9b246ea-c622-4682-8968-7385f3bed3ca" satisfied condition "Succeeded or Failed" Oct 30 01:12:35.470: INFO: Trying to get logs from node node1 pod pod-configmaps-c9b246ea-c622-4682-8968-7385f3bed3ca container env-test: STEP: delete the pod Oct 30 01:12:35.486: INFO: Waiting for pod pod-configmaps-c9b246ea-c622-4682-8968-7385f3bed3ca to disappear Oct 30 01:12:35.488: INFO: Pod pod-configmaps-c9b246ea-c622-4682-8968-7385f3bed3ca no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:12:35.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7666" for this suite. 
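------------------------------
Note: the two ConfigMap env tests above ("consumable via environment variable" and "consumable via the environment") cover the two injection forms: a single key with valueFrom.configMapKeyRef, and a whole ConfigMap with envFrom.configMapRef. A sketch of both on one container (the ConfigMap name and keys are illustrative):

  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: SPECIAL_LEVEL_KEY
      valueFrom:
        configMapKeyRef:
          name: special-config
          key: special.how
    envFrom:
    - prefix: CONFIG_         # optional; keys that are not valid variable names are skipped
      configMapRef:
        name: special-config
------------------------------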
• ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":99,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:12:26.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:12:37.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3923" for this suite. • [SLOW TEST:11.060 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":-1,"completed":9,"skipped":142,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:12:27.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9631 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9631;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9631 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9631;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9631.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9631.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9631.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9631.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9631.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9631.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9631.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9631.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9631.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9631.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9631.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9631.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9631.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 118.17.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.17.118_udp@PTR;check="$$(dig +tcp +noall +answer +search 118.17.233.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.233.17.118_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9631 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9631;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9631 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9631;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9631.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9631.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9631.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9631.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9631.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9631.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9631.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9631.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9631.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9631.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9631.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9631.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9631.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 118.17.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.17.118_udp@PTR;check="$$(dig +tcp +noall +answer +search 118.17.233.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.233.17.118_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 30 01:12:33.782: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9631/dns-test-11266bf9-91e9-47cc-bc3e-bd948a1ca666: the server could not find the requested resource (get pods dns-test-11266bf9-91e9-47cc-bc3e-bd948a1ca666) Oct 30 01:12:33.785: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9631/dns-test-11266bf9-91e9-47cc-bc3e-bd948a1ca666: the server could not find the requested resource (get pods dns-test-11266bf9-91e9-47cc-bc3e-bd948a1ca666) Oct 30 01:12:33.787: INFO: Unable to read wheezy_udp@dns-test-service.dns-9631 from pod dns-9631/dns-test-11266bf9-91e9-47cc-bc3e-bd948a1ca666: the server could not find the requested resource (get pods dns-test-11266bf9-91e9-47cc-bc3e-bd948a1ca666) Oct 30 01:12:33.790: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9631 from pod dns-9631/dns-test-11266bf9-91e9-47cc-bc3e-bd948a1ca666: the server could not find the requested resource (get pods dns-test-11266bf9-91e9-47cc-bc3e-bd948a1ca666) Oct 30 01:12:33.794: INFO: Unable to read wheezy_udp@dns-test-service.dns-9631.svc from pod dns-9631/dns-test-11266bf9-91e9-47cc-bc3e-bd948a1ca666: the server could not find the requested resource (get pods dns-test-11266bf9-91e9-47cc-bc3e-bd948a1ca666) Oct 30 01:12:33.796: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9631.svc from pod dns-9631/dns-test-11266bf9-91e9-47cc-bc3e-bd948a1ca666: the server could not find the requested resource (get pods dns-test-11266bf9-91e9-47cc-bc3e-bd948a1ca666) Oct 30 01:12:33.799: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9631.svc from pod dns-9631/dns-test-11266bf9-91e9-47cc-bc3e-bd948a1ca666: the server could not find the requested resource (get pods dns-test-11266bf9-91e9-47cc-bc3e-bd948a1ca666) Oct 30 01:12:33.802: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9631.svc from pod dns-9631/dns-test-11266bf9-91e9-47cc-bc3e-bd948a1ca666: the server could not find the requested resource (get pods dns-test-11266bf9-91e9-47cc-bc3e-bd948a1ca666) Oct 30 01:12:33.821: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9631/dns-test-11266bf9-91e9-47cc-bc3e-bd948a1ca666: the server could not find the requested resource (get pods dns-test-11266bf9-91e9-47cc-bc3e-bd948a1ca666) Oct 30 01:12:33.823: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9631/dns-test-11266bf9-91e9-47cc-bc3e-bd948a1ca666: the server could not find the requested resource (get pods dns-test-11266bf9-91e9-47cc-bc3e-bd948a1ca666) Oct 30 01:12:33.826: INFO: Unable to read jessie_udp@dns-test-service.dns-9631 from pod dns-9631/dns-test-11266bf9-91e9-47cc-bc3e-bd948a1ca666: the server could not find the requested resource (get pods dns-test-11266bf9-91e9-47cc-bc3e-bd948a1ca666) Oct 30 01:12:33.829: INFO: Unable to read jessie_tcp@dns-test-service.dns-9631 from pod dns-9631/dns-test-11266bf9-91e9-47cc-bc3e-bd948a1ca666: the server could not find the requested resource (get pods dns-test-11266bf9-91e9-47cc-bc3e-bd948a1ca666) Oct 30 01:12:33.831: INFO: Unable to read jessie_udp@dns-test-service.dns-9631.svc from pod dns-9631/dns-test-11266bf9-91e9-47cc-bc3e-bd948a1ca666: the server could not find the requested resource (get pods dns-test-11266bf9-91e9-47cc-bc3e-bd948a1ca666) Oct 30 01:12:33.833: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-9631.svc from pod dns-9631/dns-test-11266bf9-91e9-47cc-bc3e-bd948a1ca666: the server could not find the requested resource (get pods dns-test-11266bf9-91e9-47cc-bc3e-bd948a1ca666) Oct 30 01:12:33.836: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9631.svc from pod dns-9631/dns-test-11266bf9-91e9-47cc-bc3e-bd948a1ca666: the server could not find the requested resource (get pods dns-test-11266bf9-91e9-47cc-bc3e-bd948a1ca666) Oct 30 01:12:33.838: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9631.svc from pod dns-9631/dns-test-11266bf9-91e9-47cc-bc3e-bd948a1ca666: the server could not find the requested resource (get pods dns-test-11266bf9-91e9-47cc-bc3e-bd948a1ca666) Oct 30 01:12:33.858: INFO: Lookups using dns-9631/dns-test-11266bf9-91e9-47cc-bc3e-bd948a1ca666 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9631 wheezy_tcp@dns-test-service.dns-9631 wheezy_udp@dns-test-service.dns-9631.svc wheezy_tcp@dns-test-service.dns-9631.svc wheezy_udp@_http._tcp.dns-test-service.dns-9631.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9631.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9631 jessie_tcp@dns-test-service.dns-9631 jessie_udp@dns-test-service.dns-9631.svc jessie_tcp@dns-test-service.dns-9631.svc jessie_udp@_http._tcp.dns-test-service.dns-9631.svc jessie_tcp@_http._tcp.dns-test-service.dns-9631.svc] Oct 30 01:12:38.935: INFO: DNS probes using dns-9631/dns-test-11266bf9-91e9-47cc-bc3e-bd948a1ca666 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:12:38.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9631" for this suite. 
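Note on the DNS spec above: the partial names resolve only because the probe pod's resolv.conf search path appends dns-9631.svc.cluster.local, svc.cluster.local, and cluster.local; the early "could not find the requested resource" reads appear to be the framework polling the prober's results files before they exist, and it retries until the probes succeed. A minimal sketch of the kind of headless service the test creates; the service name and namespace are from the log, the selector and port are illustrative:

    apiVersion: v1
    kind: Service
    metadata:
      name: dns-test-service
      namespace: dns-9631
    spec:
      clusterIP: None          # headless: DNS returns the backing pods' A records
      selector:
        dns-test: "true"       # illustrative selector
      ports:
      - name: http
        protocol: TCP
        port: 80
    # From a pod in dns-9631, each partial name then resolves via the search path:
    #   dig +search dns-test-service A
    #   dig +search dns-test-service.dns-9631 A
    #   dig +search _http._tcp.dns-test-service.dns-9631.svc SRV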
• [SLOW TEST:11.241 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":4,"skipped":103,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:12:39.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:12:39.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-1215" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":5,"skipped":124,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:12:35.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on tmpfs Oct 30 01:12:35.566: INFO: Waiting up to 5m0s for pod "pod-dcebfa28-b33c-4326-8cc9-a4b3e24f4806" in namespace "emptydir-8693" to be "Succeeded or Failed" Oct 30 01:12:35.568: INFO: Pod "pod-dcebfa28-b33c-4326-8cc9-a4b3e24f4806": Phase="Pending", Reason="", readiness=false. Elapsed: 2.314643ms Oct 30 01:12:37.571: INFO: Pod "pod-dcebfa28-b33c-4326-8cc9-a4b3e24f4806": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005319574s Oct 30 01:12:39.575: INFO: Pod "pod-dcebfa28-b33c-4326-8cc9-a4b3e24f4806": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009066606s STEP: Saw pod success Oct 30 01:12:39.575: INFO: Pod "pod-dcebfa28-b33c-4326-8cc9-a4b3e24f4806" satisfied condition "Succeeded or Failed" Oct 30 01:12:39.578: INFO: Trying to get logs from node node1 pod pod-dcebfa28-b33c-4326-8cc9-a4b3e24f4806 container test-container: STEP: delete the pod Oct 30 01:12:39.592: INFO: Waiting for pod pod-dcebfa28-b33c-4326-8cc9-a4b3e24f4806 to disappear Oct 30 01:12:39.594: INFO: Pod pod-dcebfa28-b33c-4326-8cc9-a4b3e24f4806 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:12:39.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8693" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":119,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:12:39.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of events STEP: get a list of Events with a label in the current namespace STEP: delete a list of events Oct 30 01:12:39.662: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:12:39.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-2782" for this suite. 
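Note on the Events API spec above: it creates a labelled set of events and removes them with a single DeleteCollection request. A sketch of an equivalent manual round-trip; the event name, label, and target object are made up:

    apiVersion: v1
    kind: Event
    metadata:
      name: test-event-1
      labels:
        testevent-set: "true"
    involvedObject:            # the object this event is about (illustrative)
      kind: Pod
      name: some-pod
      namespace: default
    reason: Testing
    message: manually created test event
    type: Normal
    # kubectl apply -f event.yaml
    # kubectl get events -l testevent-set=true
    # kubectl delete events -l testevent-set=true   # removes the same labelled set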
• ------------------------------ {"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":10,"skipped":134,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:12:37.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 30 01:12:37.484: INFO: Waiting up to 5m0s for pod "downwardapi-volume-24ce54c6-ada4-426f-8d88-b2aeb7857cf0" in namespace "downward-api-3156" to be "Succeeded or Failed" Oct 30 01:12:37.487: INFO: Pod "downwardapi-volume-24ce54c6-ada4-426f-8d88-b2aeb7857cf0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.665823ms Oct 30 01:12:39.490: INFO: Pod "downwardapi-volume-24ce54c6-ada4-426f-8d88-b2aeb7857cf0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005883608s Oct 30 01:12:41.493: INFO: Pod "downwardapi-volume-24ce54c6-ada4-426f-8d88-b2aeb7857cf0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008899157s STEP: Saw pod success Oct 30 01:12:41.493: INFO: Pod "downwardapi-volume-24ce54c6-ada4-426f-8d88-b2aeb7857cf0" satisfied condition "Succeeded or Failed" Oct 30 01:12:41.496: INFO: Trying to get logs from node node1 pod downwardapi-volume-24ce54c6-ada4-426f-8d88-b2aeb7857cf0 container client-container: STEP: delete the pod Oct 30 01:12:41.509: INFO: Waiting for pod downwardapi-volume-24ce54c6-ada4-426f-8d88-b2aeb7857cf0 to disappear Oct 30 01:12:41.512: INFO: Pod downwardapi-volume-24ce54c6-ada4-426f-8d88-b2aeb7857cf0 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:12:41.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3156" for this suite. 
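Note on the downward-API volume spec above: it projects the container's own CPU limit into a file the container then reads back. A minimal sketch; the pod name, image choice, and the 500m limit are illustrative:

    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-cpu-limit-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
        command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
        resources:
          limits:
            cpu: 500m
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m      # report in millicores, so the file contains "500"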
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":147,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:11:52.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-433, will wait for the garbage collector to delete the pods Oct 30 01:11:58.293: INFO: Deleting Job.batch foo took: 3.471602ms Oct 30 01:11:58.393: INFO: Terminating Job.batch foo pods took: 100.696598ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:12:42.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-433" for this suite. • [SLOW TEST:50.696 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":7,"skipped":49,"failed":0} SS ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:12:42.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-116bb373-94b6-40fc-94cf-d18241869e4e STEP: Creating a pod to test consume secrets Oct 30 01:12:42.950: INFO: Waiting up to 5m0s for pod "pod-secrets-7c97eb22-1cc2-455d-a70f-511b2a63e6bd" in namespace "secrets-9181" to be "Succeeded or Failed" Oct 30 01:12:42.952: INFO: Pod "pod-secrets-7c97eb22-1cc2-455d-a70f-511b2a63e6bd": Phase="Pending", Reason="", readiness=false. Elapsed: 1.930701ms Oct 30 01:12:44.956: INFO: Pod "pod-secrets-7c97eb22-1cc2-455d-a70f-511b2a63e6bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006549511s Oct 30 01:12:46.960: INFO: Pod "pod-secrets-7c97eb22-1cc2-455d-a70f-511b2a63e6bd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010114054s STEP: Saw pod success Oct 30 01:12:46.960: INFO: Pod "pod-secrets-7c97eb22-1cc2-455d-a70f-511b2a63e6bd" satisfied condition "Succeeded or Failed" Oct 30 01:12:46.962: INFO: Trying to get logs from node node1 pod pod-secrets-7c97eb22-1cc2-455d-a70f-511b2a63e6bd container secret-env-test: STEP: delete the pod Oct 30 01:12:46.976: INFO: Waiting for pod pod-secrets-7c97eb22-1cc2-455d-a70f-511b2a63e6bd to disappear Oct 30 01:12:46.978: INFO: Pod pod-secrets-7c97eb22-1cc2-455d-a70f-511b2a63e6bd no longer exists [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:12:46.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9181" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":51,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:12:41.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Oct 30 01:12:41.563: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2056 029b357d-a058-4528-8b89-4ebf1011377b 82932 0 2021-10-30 01:12:41 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-30 01:12:41 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Oct 30 01:12:41.564: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2056 029b357d-a058-4528-8b89-4ebf1011377b 82933 0 2021-10-30 01:12:41 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-30 01:12:41 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 30 01:12:41.564: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2056 029b357d-a058-4528-8b89-4ebf1011377b 82934 0 2021-10-30 01:12:41 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-30 01:12:41 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap 
back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Oct 30 01:12:51.585: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2056 029b357d-a058-4528-8b89-4ebf1011377b 83122 0 2021-10-30 01:12:41 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-30 01:12:41 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 30 01:12:51.585: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2056 029b357d-a058-4528-8b89-4ebf1011377b 83123 0 2021-10-30 01:12:41 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-30 01:12:41 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 30 01:12:51.585: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2056 029b357d-a058-4528-8b89-4ebf1011377b 83124 0 2021-10-30 01:12:41 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-30 01:12:41 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:12:51.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2056" for this suite. 
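Note on the watch spec above: a watch filtered by label selector reports DELETED when the object is relabelled out of the selector and ADDED when the label is restored, even though the object itself was only modified. The configmap involved, reconstructed from the log, plus an illustrative way to observe the same event stream:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: e2e-watch-test-label-changed
      labels:
        watch-this-configmap: label-changed-and-restored
    data:
      mutation: "1"
    # kubectl get configmaps --watch \
    #   -l watch-this-configmap=label-changed-and-restored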
• [SLOW TEST:10.065 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":11,"skipped":151,"failed":0} SS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:12:39.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Oct 30 01:12:39.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5941 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --labels=run=e2e-test-httpd-pod' Oct 30 01:12:39.884: INFO: stderr: "" Oct 30 01:12:39.884: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run Oct 30 01:12:39.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5941 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "k8s.gcr.io/e2e-test-images/busybox:1.29-1"}]}} --dry-run=server' Oct 30 01:12:40.326: INFO: stderr: "" Oct 30 01:12:40.326: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Oct 30 01:12:40.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5941 delete pods e2e-test-httpd-pod' Oct 30 01:12:52.953: INFO: stderr: "" Oct 30 01:12:52.953: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:12:52.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5941" for this suite. 
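Note on the server-side dry-run spec above: --dry-run=server runs the full admission chain and validation on the server but persists nothing, which is why the pod still carries the httpd image afterwards. The same check by hand; only the jsonpath query is new here:

    # kubectl patch pod e2e-test-httpd-pod --dry-run=server \
    #   -p '{"spec":{"containers":[{"name":"e2e-test-httpd-pod","image":"k8s.gcr.io/e2e-test-images/busybox:1.29-1"}]}}'
    # kubectl get pod e2e-test-httpd-pod \
    #   -o jsonpath='{.spec.containers[0].image}'   # still httpd:2.4.38-1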
• [SLOW TEST:13.252 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl server-side dry-run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:903 should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":11,"skipped":135,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:12:39.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. Oct 30 01:12:39.094: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:12:41.097: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:12:43.097: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:12:45.097: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Oct 30 01:12:45.111: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:12:47.114: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:12:49.116: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = true) STEP: check poststart hook STEP: delete the pod with lifecycle hook Oct 30 01:12:49.129: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Oct 30 01:12:49.132: INFO: Pod pod-with-poststart-exec-hook still exists Oct 30 01:12:51.133: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Oct 30 01:12:51.136: INFO: Pod pod-with-poststart-exec-hook still exists Oct 30 01:12:53.133: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Oct 30 01:12:53.136: INFO: Pod pod-with-poststart-exec-hook still exists Oct 30 01:12:55.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Oct 30 01:12:55.137: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:12:55.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9613" for this suite. 
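Note on the lifecycle-hook spec above: a sketch of the shape of its second pod. The hook body here is illustrative; the suite has the hook call back to the pod-handle-http-request helper so delivery can be verified:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-with-poststart-exec-hook
    spec:
      containers:
      - name: main                       # container name illustrative
        image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
        command: ["sh", "-c", "sleep 600"]
        lifecycle:
          postStart:
            exec:                        # runs inside the container right after it starts;
              command:                   # the pod is not Ready until the hook returns
              - sh
              - -c
              - wget -qO- http://<helper-pod-ip>:8080/echo?msg=poststart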
• [SLOW TEST:16.093 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":131,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:12:55.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 30 01:12:55.191: INFO: Waiting up to 5m0s for pod "downwardapi-volume-60248123-f78e-46e0-a799-9e166a2f77b8" in namespace "downward-api-2972" to be "Succeeded or Failed" Oct 30 01:12:55.194: INFO: Pod "downwardapi-volume-60248123-f78e-46e0-a799-9e166a2f77b8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.026728ms Oct 30 01:12:57.196: INFO: Pod "downwardapi-volume-60248123-f78e-46e0-a799-9e166a2f77b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005854402s Oct 30 01:12:59.200: INFO: Pod "downwardapi-volume-60248123-f78e-46e0-a799-9e166a2f77b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009773039s STEP: Saw pod success Oct 30 01:12:59.200: INFO: Pod "downwardapi-volume-60248123-f78e-46e0-a799-9e166a2f77b8" satisfied condition "Succeeded or Failed" Oct 30 01:12:59.203: INFO: Trying to get logs from node node1 pod downwardapi-volume-60248123-f78e-46e0-a799-9e166a2f77b8 container client-container: STEP: delete the pod Oct 30 01:12:59.216: INFO: Waiting for pod downwardapi-volume-60248123-f78e-46e0-a799-9e166a2f77b8 to disappear Oct 30 01:12:59.218: INFO: Pod downwardapi-volume-60248123-f78e-46e0-a799-9e166a2f77b8 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:12:59.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2972" for this suite. 
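Note on the DefaultMode spec above: downwardAPI volumes accept a defaultMode that is applied to every projected file. A sketch, with a made-up pod name and a 0400 mode for illustration:

    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-defaultmode-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
        command: ["sh", "-c", "ls -l /etc/podinfo"]   # should list -r-------- podname
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          defaultMode: 0400        # octal; applies to all items below
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name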
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":137,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:11:22.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W1030 01:12:02.392523 28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 01:13:04.409: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. Oct 30 01:13:04.409: INFO: Deleting pod "simpletest.rc-5kqvx" in namespace "gc-5349" Oct 30 01:13:04.414: INFO: Deleting pod "simpletest.rc-89knc" in namespace "gc-5349" Oct 30 01:13:04.421: INFO: Deleting pod "simpletest.rc-h78bk" in namespace "gc-5349" Oct 30 01:13:04.431: INFO: Deleting pod "simpletest.rc-jb6zw" in namespace "gc-5349" Oct 30 01:13:04.439: INFO: Deleting pod "simpletest.rc-kf76d" in namespace "gc-5349" Oct 30 01:13:04.447: INFO: Deleting pod "simpletest.rc-l65r9" in namespace "gc-5349" Oct 30 01:13:04.454: INFO: Deleting pod "simpletest.rc-qdrdh" in namespace "gc-5349" Oct 30 01:13:04.459: INFO: Deleting pod "simpletest.rc-t7k68" in namespace "gc-5349" Oct 30 01:13:04.465: INFO: Deleting pod "simpletest.rc-w5v6g" in namespace "gc-5349" Oct 30 01:13:04.470: INFO: Deleting pod "simpletest.rc-wk5ps" in namespace "gc-5349" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:13:04.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5349" for this suite. 
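Note on the garbage-collector spec above: it deletes the rc with orphan propagation and then verifies the simpletest.rc-* pods survive the 30-second window. An equivalent by hand, with an illustrative rc spec; on kubectl 1.20+ the flag is --cascade=orphan:

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: simpletest
    spec:
      replicas: 10
      selector:
        name: simpletest
      template:
        metadata:
          labels:
            name: simpletest
        spec:
          containers:
          - name: nginx
            image: k8s.gcr.io/e2e-test-images/nginx:1.14-1   # image illustrative
    # kubectl delete rc simpletest --cascade=orphan
    # kubectl get pods -l name=simpletest    # pods remain, now ownerless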
• [SLOW TEST:102.146 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":2,"skipped":35,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:11:59.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: Gathering metrics W1030 01:12:05.206998 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 01:13:07.223: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:13:07.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7436" for this suite. 
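Note on the second garbage-collector spec: it uses foreground propagation instead, so the rc stays visible, pinned by the foregroundDeletion finalizer, until every dependent pod is gone. The delete options it sends, expressed as a request body; the curl form and placeholders are illustrative:

    # curl -X DELETE -H 'Content-Type: application/json' \
    #   -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
    #   https://<apiserver>/api/v1/namespaces/<ns>/replicationcontrollers/simpletest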
• [SLOW TEST:68.086 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":11,"skipped":252,"failed":0} S ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:13:07.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should guarantee kube-root-ca.crt exist in any namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:13:07.261: INFO: Got root ca configmap in namespace "svcaccounts-7516" Oct 30 01:13:07.264: INFO: Deleted root ca configmap in namespace "svcaccounts-7516" STEP: waiting for a new root ca configmap to be created Oct 30 01:13:07.767: INFO: Recreated root ca configmap in namespace "svcaccounts-7516" Oct 30 01:13:07.771: INFO: Updated root ca configmap in namespace "svcaccounts-7516" STEP: waiting for the root ca configmap to be reconciled Oct 30 01:13:08.274: INFO: Reconciled root ca configmap in namespace "svcaccounts-7516" [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:13:08.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7516" for this suite. • ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":12,"skipped":253,"failed":0} SSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:12:47.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: set up a multi version CRD Oct 30 01:12:47.080: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:13:09.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1654" for this suite.
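Note on the CustomResourcePublishOpenAPI spec above: a sketch of the multi-version CRD shape it manipulates; the group, kind, and version names here are hypothetical. Flipping served to false on one version is the "mark a version not served" step, after which that version's definitions disappear from the published OpenAPI while the other version is unchanged:

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: widgets.example.com
    spec:
      group: example.com
      scope: Namespaced
      names:
        kind: Widget
        plural: widgets
        singular: widget
      versions:
      - name: v1
        served: true
        storage: true
        schema:
          openAPIV3Schema:
            type: object
      - name: v2
        served: false        # was true; unserved versions drop out of /openapi/v2
        storage: false
        schema:
          openAPIV3Schema:
            type: object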
• [SLOW TEST:22.778 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":9,"skipped":94,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:13:08.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on node default medium Oct 30 01:13:08.331: INFO: Waiting up to 5m0s for pod "pod-6595c77a-6d20-49be-9ca8-cd6c0336a79b" in namespace "emptydir-5793" to be "Succeeded or Failed" Oct 30 01:13:08.334: INFO: Pod "pod-6595c77a-6d20-49be-9ca8-cd6c0336a79b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.648476ms Oct 30 01:13:10.337: INFO: Pod "pod-6595c77a-6d20-49be-9ca8-cd6c0336a79b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006018076s Oct 30 01:13:12.340: INFO: Pod "pod-6595c77a-6d20-49be-9ca8-cd6c0336a79b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009462116s STEP: Saw pod success Oct 30 01:13:12.340: INFO: Pod "pod-6595c77a-6d20-49be-9ca8-cd6c0336a79b" satisfied condition "Succeeded or Failed" Oct 30 01:13:12.343: INFO: Trying to get logs from node node1 pod pod-6595c77a-6d20-49be-9ca8-cd6c0336a79b container test-container: STEP: delete the pod Oct 30 01:13:12.824: INFO: Waiting for pod pod-6595c77a-6d20-49be-9ca8-cd6c0336a79b to disappear Oct 30 01:13:12.826: INFO: Pod pod-6595c77a-6d20-49be-9ca8-cd6c0336a79b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:13:12.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5793" for this suite. 
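Note on the emptydir spec above: it is one of a matrix of (user, mode, medium) variants; this run used root, mode 0644, and the default medium, i.e. node-local storage rather than tmpfs. An illustrative pod of that shape:

    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-0644-default-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
        command: ["sh", "-c", "echo hi > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir: {}           # default medium; use medium: Memory for the tmpfs variants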
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":256,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:13:09.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Oct 30 01:13:09.897: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:13:15.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5974" for this suite. • [SLOW TEST:5.763 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":10,"skipped":117,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:13:04.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Oct 30 01:13:04.533: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:13:15.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9153" for this suite. 
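Note on the init-container spec earlier above: it relies on one rule: with restartPolicy Never, a failing init container is not retried, the pod goes Failed, and the app containers never start. A minimal illustration; names and image are made up:

    apiVersion: v1
    kind: Pod
    metadata:
      name: init-fail-demo
    spec:
      restartPolicy: Never
      initContainers:
      - name: init-fails
        image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
        command: ["/bin/false"]      # exits non-zero; with Never, no retry
      containers:
      - name: app                    # never started
        image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
        command: ["sleep", "3600"]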
• [SLOW TEST:11.140 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":52,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:13:15.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 30 01:13:16.244: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 30 01:13:18.258: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153196, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153196, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153196, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153196, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 30 01:13:21.271: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:13:21.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4336" for this suite. STEP: Destroying namespace "webhook-4336-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.653 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":4,"skipped":75,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:12:59.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-ee2bf73f-3244-4272-adaf-4af03bee7fee in namespace container-probe-7166 Oct 30 01:13:03.296: INFO: Started pod liveness-ee2bf73f-3244-4272-adaf-4af03bee7fee in namespace container-probe-7166 STEP: checking the pod's current state and verifying that restartCount is present Oct 30 01:13:03.298: INFO: Initial restart count of pod liveness-ee2bf73f-3244-4272-adaf-4af03bee7fee is 0 Oct 30 01:13:21.337: INFO: Restart count of pod container-probe-7166/liveness-ee2bf73f-3244-4272-adaf-4af03bee7fee is now 1 (18.038236464s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:13:21.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7166" for this suite. 
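Note on the probe spec above: it expects exactly one restart within the window; the container serves /healthz and then starts failing it, so the kubelet kills and restarts the container. A sketch; the agnhost tag and the exact timings are illustrative:

    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-httpget-demo
    spec:
      containers:
      - name: liveness
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
        args: ["liveness"]               # serves /healthz OK briefly, then fails it
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 15
          failureThreshold: 1            # first failed probe triggers a restart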
• [SLOW TEST:22.100 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":154,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:13:21.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:13:21.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-812" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":9,"skipped":191,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:12:53.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:293 [It] should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a replication controller Oct 30 01:12:53.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4919 create -f -' Oct 30 01:12:53.393: INFO: stderr: "" Oct 30 01:12:53.393: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Oct 30 01:12:53.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4919 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Oct 30 01:12:53.563: INFO: stderr: "" Oct 30 01:12:53.563: INFO: stdout: "update-demo-nautilus-bxdwm update-demo-nautilus-xnh75 " Oct 30 01:12:53.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4919 get pods update-demo-nautilus-bxdwm -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Oct 30 01:12:53.708: INFO: stderr: "" Oct 30 01:12:53.708: INFO: stdout: "" Oct 30 01:12:53.708: INFO: update-demo-nautilus-bxdwm is created but not running Oct 30 01:12:58.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4919 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Oct 30 01:12:58.886: INFO: stderr: "" Oct 30 01:12:58.887: INFO: stdout: "update-demo-nautilus-bxdwm update-demo-nautilus-xnh75 " Oct 30 01:12:58.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4919 get pods update-demo-nautilus-bxdwm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Oct 30 01:12:59.046: INFO: stderr: "" Oct 30 01:12:59.046: INFO: stdout: "true" Oct 30 01:12:59.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4919 get pods update-demo-nautilus-bxdwm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Oct 30 01:12:59.205: INFO: stderr: "" Oct 30 01:12:59.205: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Oct 30 01:12:59.205: INFO: validating pod update-demo-nautilus-bxdwm Oct 30 01:12:59.208: INFO: got data: { "image": "nautilus.jpg" } Oct 30 01:12:59.208: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 30 01:12:59.208: INFO: update-demo-nautilus-bxdwm is verified up and running Oct 30 01:12:59.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4919 get pods update-demo-nautilus-xnh75 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Oct 30 01:12:59.360: INFO: stderr: "" Oct 30 01:12:59.360: INFO: stdout: "true" Oct 30 01:12:59.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4919 get pods update-demo-nautilus-xnh75 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Oct 30 01:12:59.497: INFO: stderr: "" Oct 30 01:12:59.497: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Oct 30 01:12:59.497: INFO: validating pod update-demo-nautilus-xnh75 Oct 30 01:12:59.501: INFO: got data: { "image": "nautilus.jpg" } Oct 30 01:12:59.501: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 30 01:12:59.501: INFO: update-demo-nautilus-xnh75 is verified up and running STEP: scaling down the replication controller Oct 30 01:12:59.510: INFO: scanned /root for discovery docs: Oct 30 01:12:59.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4919 scale rc update-demo-nautilus --replicas=1 --timeout=5m' Oct 30 01:12:59.721: INFO: stderr: "" Oct 30 01:12:59.721: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Oct 30 01:12:59.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4919 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Oct 30 01:12:59.874: INFO: stderr: "" Oct 30 01:12:59.874: INFO: stdout: "update-demo-nautilus-bxdwm update-demo-nautilus-xnh75 " STEP: Replicas for name=update-demo: expected=1 actual=2 Oct 30 01:13:04.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4919 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Oct 30 01:13:05.051: INFO: stderr: "" Oct 30 01:13:05.051: INFO: stdout: "update-demo-nautilus-bxdwm update-demo-nautilus-xnh75 " STEP: Replicas for name=update-demo: expected=1 actual=2 Oct 30 01:13:10.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4919 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Oct 30 01:13:10.231: INFO: stderr: "" Oct 30 01:13:10.231: INFO: stdout: "update-demo-nautilus-bxdwm update-demo-nautilus-xnh75 " STEP: Replicas for name=update-demo: expected=1 actual=2 Oct 30 01:13:15.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4919 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Oct 30 01:13:15.412: INFO: stderr: "" Oct 30 01:13:15.412: INFO: stdout: "update-demo-nautilus-xnh75 " Oct 30 01:13:15.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4919 get pods update-demo-nautilus-xnh75 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Oct 30 01:13:15.577: INFO: stderr: "" Oct 30 01:13:15.577: INFO: stdout: "true" Oct 30 01:13:15.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4919 get pods update-demo-nautilus-xnh75 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Oct 30 01:13:15.736: INFO: stderr: "" Oct 30 01:13:15.736: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Oct 30 01:13:15.736: INFO: validating pod update-demo-nautilus-xnh75 Oct 30 01:13:15.741: INFO: got data: { "image": "nautilus.jpg" } Oct 30 01:13:15.741: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 30 01:13:15.741: INFO: update-demo-nautilus-xnh75 is verified up and running STEP: scaling up the replication controller Oct 30 01:13:15.749: INFO: scanned /root for discovery docs: Oct 30 01:13:15.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4919 scale rc update-demo-nautilus --replicas=2 --timeout=5m' Oct 30 01:13:15.955: INFO: stderr: "" Oct 30 01:13:15.955: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Oct 30 01:13:15.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4919 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Oct 30 01:13:16.133: INFO: stderr: "" Oct 30 01:13:16.133: INFO: stdout: "update-demo-nautilus-kttlf update-demo-nautilus-xnh75 " Oct 30 01:13:16.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4919 get pods update-demo-nautilus-kttlf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Oct 30 01:13:16.292: INFO: stderr: "" Oct 30 01:13:16.292: INFO: stdout: "" Oct 30 01:13:16.292: INFO: update-demo-nautilus-kttlf is created but not running Oct 30 01:13:21.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4919 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Oct 30 01:13:21.461: INFO: stderr: "" Oct 30 01:13:21.461: INFO: stdout: "update-demo-nautilus-kttlf update-demo-nautilus-xnh75 " Oct 30 01:13:21.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4919 get pods update-demo-nautilus-kttlf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Oct 30 01:13:21.632: INFO: stderr: "" Oct 30 01:13:21.632: INFO: stdout: "true" Oct 30 01:13:21.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4919 get pods update-demo-nautilus-kttlf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Oct 30 01:13:21.799: INFO: stderr: "" Oct 30 01:13:21.799: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Oct 30 01:13:21.799: INFO: validating pod update-demo-nautilus-kttlf Oct 30 01:13:21.803: INFO: got data: { "image": "nautilus.jpg" } Oct 30 01:13:21.804: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 30 01:13:21.804: INFO: update-demo-nautilus-kttlf is verified up and running Oct 30 01:13:21.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4919 get pods update-demo-nautilus-xnh75 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Oct 30 01:13:21.973: INFO: stderr: "" Oct 30 01:13:21.973: INFO: stdout: "true" Oct 30 01:13:21.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4919 get pods update-demo-nautilus-xnh75 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Oct 30 01:13:22.120: INFO: stderr: "" Oct 30 01:13:22.120: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Oct 30 01:13:22.120: INFO: validating pod update-demo-nautilus-xnh75 Oct 30 01:13:22.125: INFO: got data: { "image": "nautilus.jpg" } Oct 30 01:13:22.125: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Oct 30 01:13:22.125: INFO: update-demo-nautilus-xnh75 is verified up and running STEP: using delete to clean up resources Oct 30 01:13:22.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4919 delete --grace-period=0 --force -f -' Oct 30 01:13:22.246: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Oct 30 01:13:22.246: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Oct 30 01:13:22.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4919 get rc,svc -l name=update-demo --no-headers' Oct 30 01:13:22.445: INFO: stderr: "No resources found in kubectl-4919 namespace.\n" Oct 30 01:13:22.445: INFO: stdout: "" Oct 30 01:13:22.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4919 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Oct 30 01:13:22.624: INFO: stderr: "" Oct 30 01:13:22.624: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:13:22.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4919" for this suite. • [SLOW TEST:29.571 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:291 should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":-1,"completed":12,"skipped":195,"failed":0} SS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:13:21.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:13:21.411: INFO: Creating deployment "test-recreate-deployment" Oct 30 01:13:21.414: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Oct 30 01:13:21.420: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Oct 30 01:13:23.427: INFO: Waiting deployment "test-recreate-deployment" to complete Oct 30 01:13:23.429: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153201, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153201, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153201, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153201, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6cb8b65c46\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:13:25.433: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153201, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153201, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153201, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153201, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6cb8b65c46\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:13:27.433: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Oct 30 01:13:27.440: INFO: Updating deployment test-recreate-deployment Oct 30 01:13:27.440: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Oct 30 01:13:27.483: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-1013 4a3c5d61-de0e-488f-8a6c-cdebccb8337d 84056 2 2021-10-30 01:13:21 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-10-30 01:13:27 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-10-30 01:13:27 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000721d68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-10-30 01:13:27 +0000 UTC,LastTransitionTime:2021-10-30 01:13:27 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-85d47dcb4" is progressing.,LastUpdateTime:2021-10-30 01:13:27 +0000 UTC,LastTransitionTime:2021-10-30 01:13:21 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Oct 30 01:13:27.486: INFO: New ReplicaSet "test-recreate-deployment-85d47dcb4" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-85d47dcb4 deployment-1013 04cdd285-9480-4440-9014-c215c1625e42 84054 1 2021-10-30 01:13:27 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 4a3c5d61-de0e-488f-8a6c-cdebccb8337d 0xc000322a10 0xc000322a11}] [] [{kube-controller-manager Update apps/v1 2021-10-30 01:13:27 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4a3c5d61-de0e-488f-8a6c-cdebccb8337d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 85d47dcb4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000322af8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 30 01:13:27.486: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Oct 30 01:13:27.486: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6cb8b65c46 deployment-1013 18d8afb4-6f0d-489b-9b3e-44974fab82d9 84044 2 2021-10-30 01:13:21 +0000 UTC map[name:sample-pod-3 pod-template-hash:6cb8b65c46] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 4a3c5d61-de0e-488f-8a6c-cdebccb8337d 0xc0003227d7 0xc0003227d8}] [] [{kube-controller-manager Update apps/v1 2021-10-30 01:13:27 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4a3c5d61-de0e-488f-8a6c-cdebccb8337d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6cb8b65c46,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:6cb8b65c46] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000322928 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 30 01:13:27.489: INFO: Pod "test-recreate-deployment-85d47dcb4-wlb66" is not available: &Pod{ObjectMeta:{test-recreate-deployment-85d47dcb4-wlb66 test-recreate-deployment-85d47dcb4- deployment-1013 3de2b1ce-f48c-4cb9-af72-49db9a640834 84057 0 2021-10-30 01:13:27 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-recreate-deployment-85d47dcb4 04cdd285-9480-4440-9014-c215c1625e42 0xc00032369f 0xc0003236b0}] [] [{kube-controller-manager Update v1 2021-10-30 01:13:27 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"04cdd285-9480-4440-9014-c215c1625e42\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-10-30 01:13:27 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-skppq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-skppq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSecond
s:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:13:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:13:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:13:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:13:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2021-10-30 01:13:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:13:27.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1013" for this suite. 
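For context, a minimal sketch of a Recreate-strategy deployment like the one dumped above (hypothetical names; images borrowed from the e2e registry). With strategy type Recreate the old ReplicaSet is scaled to zero before the new one is scaled up, which is exactly the "new pods will not run with old pods" property this test watches for:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: recreate-demo
spec:
  replicas: 1
  strategy:
    type: Recreate          # terminate old pods fully before creating new ones
  selector:
    matchLabels:
      app: recreate-demo
  template:
    metadata:
      labels:
        app: recreate-demo
    spec:
      containers:
      - name: httpd
        image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
EOF
# Trigger a new rollout, as the test does, and watch old pods exit before new ones appear:
kubectl set image deployment/recreate-demo httpd=k8s.gcr.io/e2e-test-images/httpd:2.4.39-1
kubectl rollout status deployment/recreate-demo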
• [SLOW TEST:6.106 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":5,"skipped":100,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:13:21.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 30 01:13:21.524: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fe8a1301-6743-4eaa-9edb-9780c62bc9c8" in namespace "projected-5465" to be "Succeeded or Failed" Oct 30 01:13:21.526: INFO: Pod "downwardapi-volume-fe8a1301-6743-4eaa-9edb-9780c62bc9c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028992ms Oct 30 01:13:23.531: INFO: Pod "downwardapi-volume-fe8a1301-6743-4eaa-9edb-9780c62bc9c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006859631s Oct 30 01:13:25.534: INFO: Pod "downwardapi-volume-fe8a1301-6743-4eaa-9edb-9780c62bc9c8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010719547s Oct 30 01:13:27.538: INFO: Pod "downwardapi-volume-fe8a1301-6743-4eaa-9edb-9780c62bc9c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013819825s STEP: Saw pod success Oct 30 01:13:27.538: INFO: Pod "downwardapi-volume-fe8a1301-6743-4eaa-9edb-9780c62bc9c8" satisfied condition "Succeeded or Failed" Oct 30 01:13:27.541: INFO: Trying to get logs from node node2 pod downwardapi-volume-fe8a1301-6743-4eaa-9edb-9780c62bc9c8 container client-container: STEP: delete the pod Oct 30 01:13:27.555: INFO: Waiting for pod downwardapi-volume-fe8a1301-6743-4eaa-9edb-9780c62bc9c8 to disappear Oct 30 01:13:27.557: INFO: Pod downwardapi-volume-fe8a1301-6743-4eaa-9edb-9780c62bc9c8 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:13:27.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5465" for this suite. 
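For context, a minimal sketch of a pod that surfaces its own CPU request through a projected downwardAPI volume, the mechanism this spec verifies (hypothetical names; busybox stands in for the test's client container):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m      # file contains "250" for a 250m request
EOF
kubectl logs downward-cpu-demo    # once Succeeded, prints the request in millicores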
• [SLOW TEST:6.073 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":203,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:13:12.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:13:29.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4887" for this suite. • [SLOW TEST:16.106 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":-1,"completed":14,"skipped":324,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:13:29.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if Kubernetes control plane services is included in cluster-info [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: validating cluster-info Oct 30 01:13:29.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-667 cluster-info' Oct 30 01:13:29.236: INFO: stderr: "" Oct 30 01:13:29.236: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://10.10.190.202:6443\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:13:29.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-667" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":-1,"completed":15,"skipped":328,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:13:27.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on node default medium Oct 30 01:13:27.572: INFO: Waiting up to 5m0s for pod "pod-a46c21ea-d7cd-46a7-8850-f1e17c6d1e42" in namespace "emptydir-2478" to be "Succeeded or Failed" Oct 30 01:13:27.575: INFO: Pod "pod-a46c21ea-d7cd-46a7-8850-f1e17c6d1e42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.400387ms Oct 30 01:13:29.578: INFO: Pod "pod-a46c21ea-d7cd-46a7-8850-f1e17c6d1e42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005886787s Oct 30 01:13:31.582: INFO: Pod "pod-a46c21ea-d7cd-46a7-8850-f1e17c6d1e42": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.00958231s STEP: Saw pod success Oct 30 01:13:31.582: INFO: Pod "pod-a46c21ea-d7cd-46a7-8850-f1e17c6d1e42" satisfied condition "Succeeded or Failed" Oct 30 01:13:31.584: INFO: Trying to get logs from node node2 pod pod-a46c21ea-d7cd-46a7-8850-f1e17c6d1e42 container test-container: STEP: delete the pod Oct 30 01:13:31.602: INFO: Waiting for pod pod-a46c21ea-d7cd-46a7-8850-f1e17c6d1e42 to disappear Oct 30 01:13:31.605: INFO: Pod pod-a46c21ea-d7cd-46a7-8850-f1e17c6d1e42 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:13:31.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2478" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":121,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:13:27.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should get a host IP [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating pod Oct 30 01:13:27.624: INFO: The status of Pod pod-hostip-63df3424-35b1-4e55-beb5-ec63b08b9e07 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:13:29.629: INFO: The status of Pod pod-hostip-63df3424-35b1-4e55-beb5-ec63b08b9e07 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:13:31.627: INFO: The status of Pod pod-hostip-63df3424-35b1-4e55-beb5-ec63b08b9e07 is Running (Ready = true) Oct 30 01:13:31.631: INFO: Pod pod-hostip-63df3424-35b1-4e55-beb5-ec63b08b9e07 has hostIP: 10.10.190.207 [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:13:31.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8459" for this suite. 
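For context, the hostIP assertion above can be reproduced against any running pod (hypothetical name; the field is populated once the pod is scheduled and running):

kubectl run hostip-demo --image=k8s.gcr.io/e2e-test-images/agnhost:2.32 \
  --restart=Never -- pause
kubectl wait --for=condition=Ready pod/hostip-demo --timeout=60s
kubectl get pod hostip-demo -o jsonpath='{.status.hostIP}'   # the node's IP, e.g. 10.10.190.207 above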
•SSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:13:22.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-8564 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-8564 STEP: creating replication controller externalsvc in namespace services-8564 I1030 01:13:22.675147 30 runners.go:190] Created replication controller with name: externalsvc, namespace: services-8564, replica count: 2 I1030 01:13:25.726464 30 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1030 01:13:28.728028 30 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Oct 30 01:13:28.742: INFO: Creating new exec pod Oct 30 01:13:32.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8564 exec execpodslmkg -- /bin/sh -x -c nslookup clusterip-service.services-8564.svc.cluster.local' Oct 30 01:13:33.363: INFO: stderr: "+ nslookup clusterip-service.services-8564.svc.cluster.local\n" Oct 30 01:13:33.363: INFO: stdout: "Server:\t\t10.233.0.3\nAddress:\t10.233.0.3#53\n\nclusterip-service.services-8564.svc.cluster.local\tcanonical name = externalsvc.services-8564.svc.cluster.local.\nName:\texternalsvc.services-8564.svc.cluster.local\nAddress: 10.233.59.249\n\n" STEP: deleting ReplicationController externalsvc in namespace services-8564, will wait for the garbage collector to delete the pods Oct 30 01:13:33.422: INFO: Deleting ReplicationController externalsvc took: 4.175597ms Oct 30 01:13:33.522: INFO: Terminating ReplicationController externalsvc pods took: 100.180124ms Oct 30 01:13:37.631: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:13:37.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8564" for this suite. 
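For context, the end state this test drives the service into looks like the sketch below (hypothetical namespace; the test itself mutates the existing ClusterIP service in place rather than recreating it). An ExternalName service publishes a DNS CNAME instead of a cluster IP, which is why the nslookup above returns a canonical name:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: clusterip-service
spec:
  type: ExternalName
  externalName: externalsvc.default.svc.cluster.local
EOF
# Resolve from inside the cluster; expect a CNAME to the externalName target:
kubectl run dns-check --rm -it --restart=Never \
  --image=k8s.gcr.io/e2e-test-images/agnhost:2.32 --command -- \
  nslookup clusterip-service.default.svc.cluster.local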
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:15.006 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":13,"skipped":197,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:11:38.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-2103 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a new StatefulSet Oct 30 01:11:38.630: INFO: Found 0 stateful pods, waiting for 3 Oct 30 01:11:48.635: INFO: Found 2 stateful pods, waiting for 3 Oct 30 01:11:58.636: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Oct 30 01:11:58.636: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Oct 30 01:11:58.636: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1 Oct 30 01:11:58.660: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Oct 30 01:12:08.689: INFO: Updating stateful set ss2 Oct 30 01:12:08.694: INFO: Waiting for Pod statefulset-2103/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 STEP: Restoring Pods to the correct revision when they are deleted Oct 30 01:12:18.715: INFO: Found 1 stateful pods, waiting for 3 Oct 30 01:12:28.719: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Oct 30 01:12:28.719: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Oct 30 01:12:28.719: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Oct 30 01:12:28.742: INFO: Updating stateful set ss2 Oct 30 01:12:28.747: INFO: Waiting for Pod statefulset-2103/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Oct 30 01:12:38.756: INFO: Waiting for Pod statefulset-2103/ss2-1 to have revision ss2-5bbbc9fc94 update revision 
ss2-677d6db895 Oct 30 01:12:48.770: INFO: Updating stateful set ss2 Oct 30 01:12:48.775: INFO: Waiting for StatefulSet statefulset-2103/ss2 to complete update Oct 30 01:12:48.775: INFO: Waiting for Pod statefulset-2103/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Oct 30 01:12:58.783: INFO: Waiting for StatefulSet statefulset-2103/ss2 to complete update Oct 30 01:12:58.783: INFO: Waiting for Pod statefulset-2103/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Oct 30 01:13:08.781: INFO: Deleting all statefulset in ns statefulset-2103 Oct 30 01:13:08.783: INFO: Scaling statefulset ss2 to 0 Oct 30 01:13:38.798: INFO: Waiting for statefulset status.replicas updated to 0 Oct 30 01:13:38.801: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:13:38.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2103" for this suite. • [SLOW TEST:120.219 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":6,"skipped":63,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:13:37.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-fc4d4564-b30a-4861-8cc6-e577db4576d1 STEP: Creating a pod to test consume configMaps Oct 30 01:13:37.706: INFO: Waiting up to 5m0s for pod "pod-configmaps-88d64377-5adc-4912-a5d9-4ac9bb0cf9e1" in namespace "configmap-821" to be "Succeeded or Failed" Oct 30 01:13:37.710: INFO: Pod "pod-configmaps-88d64377-5adc-4912-a5d9-4ac9bb0cf9e1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.760206ms Oct 30 01:13:39.714: INFO: Pod "pod-configmaps-88d64377-5adc-4912-a5d9-4ac9bb0cf9e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007814123s Oct 30 01:13:41.716: INFO: Pod "pod-configmaps-88d64377-5adc-4912-a5d9-4ac9bb0cf9e1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01031084s STEP: Saw pod success Oct 30 01:13:41.716: INFO: Pod "pod-configmaps-88d64377-5adc-4912-a5d9-4ac9bb0cf9e1" satisfied condition "Succeeded or Failed" Oct 30 01:13:41.719: INFO: Trying to get logs from node node2 pod pod-configmaps-88d64377-5adc-4912-a5d9-4ac9bb0cf9e1 container agnhost-container: STEP: delete the pod Oct 30 01:13:41.730: INFO: Waiting for pod pod-configmaps-88d64377-5adc-4912-a5d9-4ac9bb0cf9e1 to disappear Oct 30 01:13:41.732: INFO: Pod pod-configmaps-88d64377-5adc-4912-a5d9-4ac9bb0cf9e1 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:13:41.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-821" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":210,"failed":0} SSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:13:31.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:13:45.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-2594" for this suite. 
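For context, a minimal sketch of a "fails once, restarted locally" job like the one this spec runs (hypothetical names). restartPolicy: OnFailure makes the kubelet restart the failed container inside the same pod, and because an emptyDir volume survives container restarts, a marker file lets the retried container succeed:

kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: fail-once-local
spec:
  completions: 2
  parallelism: 2
  template:
    spec:
      restartPolicy: OnFailure   # restart the container in place, not in a new pod
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "if [ -f /data/done ]; then exit 0; else touch /data/done; exit 1; fi"]
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        emptyDir: {}             # persists across container restarts within the pod
EOF
kubectl wait --for=condition=complete job/fail-once-local --timeout=2m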
• [SLOW TEST:14.036 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":7,"skipped":136,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:13:41.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-map-648c7693-8391-4b5a-8781-837c79f6a41f STEP: Creating a pod to test consume secrets Oct 30 01:13:41.782: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-65b306e2-a2bc-43b0-84c2-d029cfa79fcf" in namespace "projected-8391" to be "Succeeded or Failed" Oct 30 01:13:41.788: INFO: Pod "pod-projected-secrets-65b306e2-a2bc-43b0-84c2-d029cfa79fcf": Phase="Pending", Reason="", readiness=false. Elapsed: 5.482708ms Oct 30 01:13:43.791: INFO: Pod "pod-projected-secrets-65b306e2-a2bc-43b0-84c2-d029cfa79fcf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008553793s Oct 30 01:13:45.794: INFO: Pod "pod-projected-secrets-65b306e2-a2bc-43b0-84c2-d029cfa79fcf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01192371s STEP: Saw pod success Oct 30 01:13:45.794: INFO: Pod "pod-projected-secrets-65b306e2-a2bc-43b0-84c2-d029cfa79fcf" satisfied condition "Succeeded or Failed" Oct 30 01:13:45.797: INFO: Trying to get logs from node node2 pod pod-projected-secrets-65b306e2-a2bc-43b0-84c2-d029cfa79fcf container projected-secret-volume-test: STEP: delete the pod Oct 30 01:13:45.810: INFO: Waiting for pod pod-projected-secrets-65b306e2-a2bc-43b0-84c2-d029cfa79fcf to disappear Oct 30 01:13:45.812: INFO: Pod pod-projected-secrets-65b306e2-a2bc-43b0-84c2-d029cfa79fcf no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:13:45.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8391" for this suite. 
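For context, a minimal sketch of the secret-to-file mapping this spec consumes (hypothetical names; busybox stands in for the test container). The items list is the "mapping": the secret key is renamed on disk:

kubectl create secret generic projected-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    projected:
      sources:
      - secret:
          name: projected-demo
          items:
          - key: data-1
            path: new-path-data-1   # key data-1 appears under this filename
EOF
kubectl logs projected-secret-demo   # prints value-1 once the pod has succeeded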
• ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":213,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":214,"failed":0} [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:13:31.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-851 STEP: creating a selector STEP: Creating the service pods in kubernetes Oct 30 01:13:31.659: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Oct 30 01:13:31.692: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:13:33.696: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:13:35.695: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 01:13:37.695: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 01:13:39.696: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 01:13:41.696: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 01:13:43.697: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 01:13:45.695: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 01:13:47.697: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 01:13:49.696: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 01:13:51.697: INFO: The status of Pod netserver-0 is Running (Ready = true) Oct 30 01:13:51.702: INFO: The status of Pod netserver-1 is Running (Ready = false) Oct 30 01:13:53.707: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Oct 30 01:13:57.745: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Oct 30 01:13:57.745: INFO: Going to poll 10.244.3.65 on port 8080 at least 0 times, with a maximum of 34 tries before failing Oct 30 01:13:57.747: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.3.65:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-851 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 01:13:57.747: INFO: >>> kubeConfig: /root/.kube/config Oct 30 01:13:57.935: INFO: Found all 1 expected endpoints: [netserver-0] Oct 30 01:13:57.935: INFO: Going to poll 10.244.4.167 on port 8080 at least 0 times, with a maximum of 34 tries before failing Oct 30 01:13:57.938: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.4.167:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-851 PodName:host-test-container-pod 
ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 01:13:57.938: INFO: >>> kubeConfig: /root/.kube/config Oct 30 01:13:58.134: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:13:58.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-851" for this suite. • [SLOW TEST:26.502 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":214,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:13:58.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in volume subpath Oct 30 01:13:58.198: INFO: Waiting up to 5m0s for pod "var-expansion-a16c008b-c494-41a0-ac81-826858c47097" in namespace "var-expansion-5562" to be "Succeeded or Failed" Oct 30 01:13:58.201: INFO: Pod "var-expansion-a16c008b-c494-41a0-ac81-826858c47097": Phase="Pending", Reason="", readiness=false. Elapsed: 2.556865ms Oct 30 01:14:00.205: INFO: Pod "var-expansion-a16c008b-c494-41a0-ac81-826858c47097": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006753791s Oct 30 01:14:02.208: INFO: Pod "var-expansion-a16c008b-c494-41a0-ac81-826858c47097": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009972345s STEP: Saw pod success Oct 30 01:14:02.208: INFO: Pod "var-expansion-a16c008b-c494-41a0-ac81-826858c47097" satisfied condition "Succeeded or Failed" Oct 30 01:14:02.211: INFO: Trying to get logs from node node2 pod var-expansion-a16c008b-c494-41a0-ac81-826858c47097 container dapi-container: STEP: delete the pod Oct 30 01:14:02.318: INFO: Waiting for pod var-expansion-a16c008b-c494-41a0-ac81-826858c47097 to disappear Oct 30 01:14:02.321: INFO: Pod var-expansion-a16c008b-c494-41a0-ac81-826858c47097 no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:14:02.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5562" for this suite. 
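The variable-expansion spec that just passed ("substituting values in a volume subpath") relies on the subPathExpr field of a volume mount, which the kubelet expands from the container's environment at mount time. A minimal sketch of an equivalent pod, assuming client-go; only the container name dapi-container comes from the log, the rest (namespace, image, command, mount path) is illustrative.

```go
// Hypothetical sketch: mount a volume under a subPathExpr that expands an
// env var fed by the downward API, so each pod writes under its own name.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "echo ok > /vol/out.txt"},
				Env: []corev1.EnvVar{{
					Name: "POD_NAME",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
					},
				}},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "workdir",
					MountPath: "/vol",
					// Expanded by the kubelet at mount time: the file lands
					// under <emptyDir>/var-expansion-demo/ on the node.
					SubPathExpr: "$(POD_NAME)",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name:         "workdir",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
	_, err = cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	must(err)
}
```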
• ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":-1,"completed":13,"skipped":225,"failed":0} SSSSS ------------------------------ [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:13:45.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-h6m6w in namespace proxy-1288 I1030 01:13:45.981000 30 runners.go:190] Created replication controller with name: proxy-service-h6m6w, namespace: proxy-1288, replica count: 1 I1030 01:13:47.032276 30 runners.go:190] proxy-service-h6m6w Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1030 01:13:48.032608 30 runners.go:190] proxy-service-h6m6w Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1030 01:13:49.033356 30 runners.go:190] proxy-service-h6m6w Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1030 01:13:50.033919 30 runners.go:190] proxy-service-h6m6w Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1030 01:13:51.034297 30 runners.go:190] proxy-service-h6m6w Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1030 01:13:52.034892 30 runners.go:190] proxy-service-h6m6w Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1030 01:13:53.035158 30 runners.go:190] proxy-service-h6m6w Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1030 01:13:54.036455 30 runners.go:190] proxy-service-h6m6w Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1030 01:13:55.036774 30 runners.go:190] proxy-service-h6m6w Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1030 01:13:56.037444 30 runners.go:190] proxy-service-h6m6w Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1030 01:13:57.038189 30 runners.go:190] proxy-service-h6m6w Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 30 01:13:57.041: INFO: setup took 11.069397198s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Oct 30 01:13:57.050: INFO: (0) /api/v1/namespaces/proxy-1288/services/proxy-service-h6m6w:portname1/proxy/: foo (200; 8.841148ms) Oct 30 01:13:57.050: INFO: (0) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw:162/proxy/: bar (200; 8.967721ms) Oct 30 01:13:57.050: INFO: (0) /api/v1/namespaces/proxy-1288/pods/http:proxy-service-h6m6w-kp8hw:162/proxy/: bar (200; 8.981647ms) Oct 30 
01:13:57.050: INFO: (0) /api/v1/namespaces/proxy-1288/pods/http:proxy-service-h6m6w-kp8hw:160/proxy/: foo (200; 8.961757ms) Oct 30 01:13:57.050: INFO: (0) /api/v1/namespaces/proxy-1288/services/proxy-service-h6m6w:portname2/proxy/: bar (200; 9.224428ms) Oct 30 01:13:57.050: INFO: (0) /api/v1/namespaces/proxy-1288/pods/http:proxy-service-h6m6w-kp8hw:1080/proxy/: ... (200; 9.122679ms) Oct 30 01:13:57.050: INFO: (0) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw:160/proxy/: foo (200; 8.968623ms) Oct 30 01:13:57.050: INFO: (0) /api/v1/namespaces/proxy-1288/services/http:proxy-service-h6m6w:portname2/proxy/: bar (200; 9.085183ms) Oct 30 01:13:57.050: INFO: (0) /api/v1/namespaces/proxy-1288/services/http:proxy-service-h6m6w:portname1/proxy/: foo (200; 9.157227ms) Oct 30 01:13:57.050: INFO: (0) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw/proxy/: test (200; 9.157257ms) Oct 30 01:13:57.050: INFO: (0) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw:1080/proxy/: test<... (200; 9.382382ms) Oct 30 01:13:57.054: INFO: (0) /api/v1/namespaces/proxy-1288/pods/https:proxy-service-h6m6w-kp8hw:462/proxy/: tls qux (200; 13.135952ms) Oct 30 01:13:57.054: INFO: (0) /api/v1/namespaces/proxy-1288/services/https:proxy-service-h6m6w:tlsportname2/proxy/: tls qux (200; 13.112354ms) Oct 30 01:13:57.054: INFO: (0) /api/v1/namespaces/proxy-1288/pods/https:proxy-service-h6m6w-kp8hw:443/proxy/: test (200; 2.292501ms) Oct 30 01:13:57.057: INFO: (1) /api/v1/namespaces/proxy-1288/pods/https:proxy-service-h6m6w-kp8hw:460/proxy/: tls baz (200; 2.400562ms) Oct 30 01:13:57.057: INFO: (1) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw:1080/proxy/: test<... (200; 2.676387ms) Oct 30 01:13:57.057: INFO: (1) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw:160/proxy/: foo (200; 2.720866ms) Oct 30 01:13:57.057: INFO: (1) /api/v1/namespaces/proxy-1288/pods/http:proxy-service-h6m6w-kp8hw:1080/proxy/: ... (200; 3.004097ms) Oct 30 01:13:57.058: INFO: (1) /api/v1/namespaces/proxy-1288/pods/http:proxy-service-h6m6w-kp8hw:162/proxy/: bar (200; 3.066753ms) Oct 30 01:13:57.058: INFO: (1) /api/v1/namespaces/proxy-1288/pods/https:proxy-service-h6m6w-kp8hw:462/proxy/: tls qux (200; 3.132346ms) Oct 30 01:13:57.058: INFO: (1) /api/v1/namespaces/proxy-1288/services/https:proxy-service-h6m6w:tlsportname2/proxy/: tls qux (200; 3.166262ms) Oct 30 01:13:57.058: INFO: (1) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw:162/proxy/: bar (200; 3.376752ms) Oct 30 01:13:57.058: INFO: (1) /api/v1/namespaces/proxy-1288/pods/https:proxy-service-h6m6w-kp8hw:443/proxy/: test<... (200; 3.445014ms) Oct 30 01:13:57.062: INFO: (2) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw:160/proxy/: foo (200; 3.318318ms) Oct 30 01:13:57.062: INFO: (2) /api/v1/namespaces/proxy-1288/pods/http:proxy-service-h6m6w-kp8hw:160/proxy/: foo (200; 3.282553ms) Oct 30 01:13:57.062: INFO: (2) /api/v1/namespaces/proxy-1288/pods/http:proxy-service-h6m6w-kp8hw:1080/proxy/: ... 
(200; 3.290472ms) Oct 30 01:13:57.062: INFO: (2) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw:162/proxy/: bar (200; 3.422705ms) Oct 30 01:13:57.062: INFO: (2) /api/v1/namespaces/proxy-1288/pods/https:proxy-service-h6m6w-kp8hw:460/proxy/: tls baz (200; 3.286044ms) Oct 30 01:13:57.062: INFO: (2) /api/v1/namespaces/proxy-1288/services/proxy-service-h6m6w:portname1/proxy/: foo (200; 3.338625ms) Oct 30 01:13:57.063: INFO: (2) /api/v1/namespaces/proxy-1288/pods/https:proxy-service-h6m6w-kp8hw:443/proxy/: test (200; 3.757983ms) Oct 30 01:13:57.063: INFO: (2) /api/v1/namespaces/proxy-1288/services/https:proxy-service-h6m6w:tlsportname2/proxy/: tls qux (200; 3.973711ms) Oct 30 01:13:57.064: INFO: (2) /api/v1/namespaces/proxy-1288/services/https:proxy-service-h6m6w:tlsportname1/proxy/: tls baz (200; 4.528316ms) Oct 30 01:13:57.064: INFO: (2) /api/v1/namespaces/proxy-1288/services/proxy-service-h6m6w:portname2/proxy/: bar (200; 4.452782ms) Oct 30 01:13:57.064: INFO: (2) /api/v1/namespaces/proxy-1288/services/http:proxy-service-h6m6w:portname2/proxy/: bar (200; 4.495722ms) Oct 30 01:13:57.066: INFO: (3) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw:160/proxy/: foo (200; 2.476011ms) Oct 30 01:13:57.066: INFO: (3) /api/v1/namespaces/proxy-1288/pods/http:proxy-service-h6m6w-kp8hw:1080/proxy/: ... (200; 2.497554ms) Oct 30 01:13:57.066: INFO: (3) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw:162/proxy/: bar (200; 2.711884ms) Oct 30 01:13:57.067: INFO: (3) /api/v1/namespaces/proxy-1288/pods/http:proxy-service-h6m6w-kp8hw:162/proxy/: bar (200; 2.763591ms) Oct 30 01:13:57.067: INFO: (3) /api/v1/namespaces/proxy-1288/pods/http:proxy-service-h6m6w-kp8hw:160/proxy/: foo (200; 2.955738ms) Oct 30 01:13:57.067: INFO: (3) /api/v1/namespaces/proxy-1288/pods/https:proxy-service-h6m6w-kp8hw:460/proxy/: tls baz (200; 3.089209ms) Oct 30 01:13:57.067: INFO: (3) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw:1080/proxy/: test<... (200; 3.171165ms) Oct 30 01:13:57.067: INFO: (3) /api/v1/namespaces/proxy-1288/pods/https:proxy-service-h6m6w-kp8hw:443/proxy/: test (200; 3.601004ms) Oct 30 01:13:57.067: INFO: (3) /api/v1/namespaces/proxy-1288/services/https:proxy-service-h6m6w:tlsportname1/proxy/: tls baz (200; 3.587989ms) Oct 30 01:13:57.068: INFO: (3) /api/v1/namespaces/proxy-1288/services/proxy-service-h6m6w:portname2/proxy/: bar (200; 3.755435ms) Oct 30 01:13:57.068: INFO: (3) /api/v1/namespaces/proxy-1288/services/https:proxy-service-h6m6w:tlsportname2/proxy/: tls qux (200; 4.006042ms) Oct 30 01:13:57.068: INFO: (3) /api/v1/namespaces/proxy-1288/services/http:proxy-service-h6m6w:portname1/proxy/: foo (200; 4.019115ms) Oct 30 01:13:57.068: INFO: (3) /api/v1/namespaces/proxy-1288/services/http:proxy-service-h6m6w:portname2/proxy/: bar (200; 4.073842ms) Oct 30 01:13:57.068: INFO: (3) /api/v1/namespaces/proxy-1288/services/proxy-service-h6m6w:portname1/proxy/: foo (200; 4.210396ms) Oct 30 01:13:57.071: INFO: (4) /api/v1/namespaces/proxy-1288/pods/http:proxy-service-h6m6w-kp8hw:1080/proxy/: ... (200; 2.57532ms) Oct 30 01:13:57.071: INFO: (4) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw:1080/proxy/: test<... 
(200; 2.493195ms) Oct 30 01:13:57.071: INFO: (4) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw/proxy/: test (200; 2.597105ms) Oct 30 01:13:57.071: INFO: (4) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw:160/proxy/: foo (200; 2.468205ms) Oct 30 01:13:57.071: INFO: (4) /api/v1/namespaces/proxy-1288/pods/http:proxy-service-h6m6w-kp8hw:160/proxy/: foo (200; 2.770776ms) Oct 30 01:13:57.071: INFO: (4) /api/v1/namespaces/proxy-1288/pods/https:proxy-service-h6m6w-kp8hw:462/proxy/: tls qux (200; 3.008528ms) Oct 30 01:13:57.071: INFO: (4) /api/v1/namespaces/proxy-1288/pods/https:proxy-service-h6m6w-kp8hw:443/proxy/: test (200; 2.862301ms) Oct 30 01:13:57.076: INFO: (5) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw:1080/proxy/: test<... (200; 3.024458ms) Oct 30 01:13:57.076: INFO: (5) /api/v1/namespaces/proxy-1288/services/https:proxy-service-h6m6w:tlsportname2/proxy/: tls qux (200; 3.263346ms) Oct 30 01:13:57.076: INFO: (5) /api/v1/namespaces/proxy-1288/pods/http:proxy-service-h6m6w-kp8hw:1080/proxy/: ... (200; 3.186159ms) Oct 30 01:13:57.076: INFO: (5) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw:160/proxy/: foo (200; 3.142747ms) Oct 30 01:13:57.076: INFO: (5) /api/v1/namespaces/proxy-1288/services/proxy-service-h6m6w:portname2/proxy/: bar (200; 3.655092ms) Oct 30 01:13:57.076: INFO: (5) /api/v1/namespaces/proxy-1288/services/proxy-service-h6m6w:portname1/proxy/: foo (200; 3.826961ms) Oct 30 01:13:57.076: INFO: (5) /api/v1/namespaces/proxy-1288/services/http:proxy-service-h6m6w:portname2/proxy/: bar (200; 3.675783ms) Oct 30 01:13:57.077: INFO: (5) /api/v1/namespaces/proxy-1288/services/http:proxy-service-h6m6w:portname1/proxy/: foo (200; 4.145646ms) Oct 30 01:13:57.077: INFO: (5) /api/v1/namespaces/proxy-1288/services/https:proxy-service-h6m6w:tlsportname1/proxy/: tls baz (200; 4.322774ms) Oct 30 01:13:57.079: INFO: (6) /api/v1/namespaces/proxy-1288/pods/https:proxy-service-h6m6w-kp8hw:462/proxy/: tls qux (200; 2.238226ms) Oct 30 01:13:57.080: INFO: (6) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw:1080/proxy/: test<... (200; 2.765364ms) Oct 30 01:13:57.080: INFO: (6) /api/v1/namespaces/proxy-1288/services/https:proxy-service-h6m6w:tlsportname1/proxy/: tls baz (200; 2.791932ms) Oct 30 01:13:57.080: INFO: (6) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw:160/proxy/: foo (200; 2.805174ms) Oct 30 01:13:57.080: INFO: (6) /api/v1/namespaces/proxy-1288/pods/http:proxy-service-h6m6w-kp8hw:160/proxy/: foo (200; 2.924386ms) Oct 30 01:13:57.080: INFO: (6) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw/proxy/: test (200; 3.338725ms) Oct 30 01:13:57.080: INFO: (6) /api/v1/namespaces/proxy-1288/services/proxy-service-h6m6w:portname2/proxy/: bar (200; 3.391209ms) Oct 30 01:13:57.081: INFO: (6) /api/v1/namespaces/proxy-1288/services/https:proxy-service-h6m6w:tlsportname2/proxy/: tls qux (200; 3.733523ms) Oct 30 01:13:57.081: INFO: (6) /api/v1/namespaces/proxy-1288/pods/https:proxy-service-h6m6w-kp8hw:443/proxy/: ... 
(200; 3.897763ms) Oct 30 01:13:57.081: INFO: (6) /api/v1/namespaces/proxy-1288/pods/http:proxy-service-h6m6w-kp8hw:162/proxy/: bar (200; 3.933852ms) Oct 30 01:13:57.081: INFO: (6) /api/v1/namespaces/proxy-1288/services/proxy-service-h6m6w:portname1/proxy/: foo (200; 4.289267ms) Oct 30 01:13:57.081: INFO: (6) /api/v1/namespaces/proxy-1288/services/http:proxy-service-h6m6w:portname1/proxy/: foo (200; 4.481004ms) Oct 30 01:13:57.081: INFO: (6) /api/v1/namespaces/proxy-1288/services/http:proxy-service-h6m6w:portname2/proxy/: bar (200; 4.343756ms) Oct 30 01:13:57.084: INFO: (7) /api/v1/namespaces/proxy-1288/pods/https:proxy-service-h6m6w-kp8hw:443/proxy/: ... (200; 2.827106ms) Oct 30 01:13:57.084: INFO: (7) /api/v1/namespaces/proxy-1288/services/https:proxy-service-h6m6w:tlsportname1/proxy/: tls baz (200; 2.842952ms) Oct 30 01:13:57.084: INFO: (7) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw/proxy/: test (200; 2.67552ms) Oct 30 01:13:57.084: INFO: (7) /api/v1/namespaces/proxy-1288/pods/https:proxy-service-h6m6w-kp8hw:460/proxy/: tls baz (200; 2.834682ms) Oct 30 01:13:57.084: INFO: (7) /api/v1/namespaces/proxy-1288/pods/http:proxy-service-h6m6w-kp8hw:162/proxy/: bar (200; 2.824943ms) Oct 30 01:13:57.084: INFO: (7) /api/v1/namespaces/proxy-1288/pods/http:proxy-service-h6m6w-kp8hw:160/proxy/: foo (200; 2.759192ms) Oct 30 01:13:57.084: INFO: (7) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw:1080/proxy/: test<... (200; 3.00125ms) Oct 30 01:13:57.084: INFO: (7) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw:160/proxy/: foo (200; 2.85355ms) Oct 30 01:13:57.084: INFO: (7) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw:162/proxy/: bar (200; 2.90731ms) Oct 30 01:13:57.085: INFO: (7) /api/v1/namespaces/proxy-1288/services/proxy-service-h6m6w:portname2/proxy/: bar (200; 3.278022ms) Oct 30 01:13:57.087: INFO: (7) /api/v1/namespaces/proxy-1288/services/http:proxy-service-h6m6w:portname2/proxy/: bar (200; 5.414341ms) Oct 30 01:13:57.087: INFO: (7) /api/v1/namespaces/proxy-1288/services/proxy-service-h6m6w:portname1/proxy/: foo (200; 5.56183ms) Oct 30 01:13:57.087: INFO: (7) /api/v1/namespaces/proxy-1288/services/http:proxy-service-h6m6w:portname1/proxy/: foo (200; 5.391382ms) Oct 30 01:13:57.087: INFO: (7) /api/v1/namespaces/proxy-1288/pods/https:proxy-service-h6m6w-kp8hw:462/proxy/: tls qux (200; 5.569403ms) Oct 30 01:13:57.098: INFO: (7) /api/v1/namespaces/proxy-1288/services/https:proxy-service-h6m6w:tlsportname2/proxy/: tls qux (200; 16.290166ms) Oct 30 01:13:57.101: INFO: (8) /api/v1/namespaces/proxy-1288/pods/https:proxy-service-h6m6w-kp8hw:443/proxy/: ... (200; 2.556358ms) Oct 30 01:13:57.101: INFO: (8) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw:1080/proxy/: test<... 
(200; 2.426196ms) Oct 30 01:13:57.101: INFO: (8) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw/proxy/: test (200; 2.937236ms) Oct 30 01:13:57.101: INFO: (8) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw:162/proxy/: bar (200; 3.01474ms) Oct 30 01:13:57.101: INFO: (8) /api/v1/namespaces/proxy-1288/services/proxy-service-h6m6w:portname2/proxy/: bar (200; 3.248346ms) Oct 30 01:13:57.102: INFO: (8) /api/v1/namespaces/proxy-1288/pods/http:proxy-service-h6m6w-kp8hw:160/proxy/: foo (200; 3.341177ms) Oct 30 01:13:57.102: INFO: (8) /api/v1/namespaces/proxy-1288/pods/https:proxy-service-h6m6w-kp8hw:460/proxy/: tls baz (200; 3.430681ms) Oct 30 01:13:57.102: INFO: (8) /api/v1/namespaces/proxy-1288/services/https:proxy-service-h6m6w:tlsportname2/proxy/: tls qux (200; 3.41568ms) Oct 30 01:13:57.102: INFO: (8) /api/v1/namespaces/proxy-1288/services/http:proxy-service-h6m6w:portname2/proxy/: bar (200; 3.615473ms) Oct 30 01:13:57.102: INFO: (8) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw:160/proxy/: foo (200; 3.426505ms) Oct 30 01:13:57.102: INFO: (8) /api/v1/namespaces/proxy-1288/services/proxy-service-h6m6w:portname1/proxy/: foo (200; 3.814364ms) Oct 30 01:13:57.102: INFO: (8) /api/v1/namespaces/proxy-1288/services/http:proxy-service-h6m6w:portname1/proxy/: foo (200; 4.049425ms) Oct 30 01:13:57.102: INFO: (8) /api/v1/namespaces/proxy-1288/services/https:proxy-service-h6m6w:tlsportname1/proxy/: tls baz (200; 4.127225ms) Oct 30 01:13:57.105: INFO: (9) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw:160/proxy/: foo (200; 2.012243ms) Oct 30 01:13:57.105: INFO: (9) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw/proxy/: test (200; 2.190845ms) Oct 30 01:13:57.105: INFO: (9) /api/v1/namespaces/proxy-1288/pods/https:proxy-service-h6m6w-kp8hw:460/proxy/: tls baz (200; 2.311034ms) Oct 30 01:13:57.105: INFO: (9) /api/v1/namespaces/proxy-1288/pods/http:proxy-service-h6m6w-kp8hw:1080/proxy/: ... (200; 2.295456ms) Oct 30 01:13:57.105: INFO: (9) /api/v1/namespaces/proxy-1288/pods/https:proxy-service-h6m6w-kp8hw:462/proxy/: tls qux (200; 2.816686ms) Oct 30 01:13:57.105: INFO: (9) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw:162/proxy/: bar (200; 2.625951ms) Oct 30 01:13:57.105: INFO: (9) /api/v1/namespaces/proxy-1288/services/proxy-service-h6m6w:portname2/proxy/: bar (200; 2.746796ms) Oct 30 01:13:57.106: INFO: (9) /api/v1/namespaces/proxy-1288/pods/http:proxy-service-h6m6w-kp8hw:162/proxy/: bar (200; 2.870975ms) Oct 30 01:13:57.106: INFO: (9) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw:1080/proxy/: test<... 
(200; 3.160908ms) Oct 30 01:13:57.106: INFO: (9) /api/v1/namespaces/proxy-1288/pods/http:proxy-service-h6m6w-kp8hw:160/proxy/: foo (200; 3.058264ms) Oct 30 01:13:57.106: INFO: (9) /api/v1/namespaces/proxy-1288/services/http:proxy-service-h6m6w:portname2/proxy/: bar (200; 3.398481ms) Oct 30 01:13:57.106: INFO: (9) /api/v1/namespaces/proxy-1288/pods/https:proxy-service-h6m6w-kp8hw:443/proxy/: test (200; 1.700735ms) Oct 30 01:13:57.109: INFO: (10) /api/v1/namespaces/proxy-1288/pods/http:proxy-service-h6m6w-kp8hw:160/proxy/: foo (200; 1.98848ms) Oct 30 01:13:57.109: INFO: (10) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw:162/proxy/: bar (200; 2.342286ms) Oct 30 01:13:57.110: INFO: (10) /api/v1/namespaces/proxy-1288/pods/https:proxy-service-h6m6w-kp8hw:462/proxy/: tls qux (200; 2.600505ms) Oct 30 01:13:57.110: INFO: (10) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw:160/proxy/: foo (200; 2.760642ms) Oct 30 01:13:57.110: INFO: (10) /api/v1/namespaces/proxy-1288/pods/http:proxy-service-h6m6w-kp8hw:162/proxy/: bar (200; 2.746559ms) Oct 30 01:13:57.110: INFO: (10) /api/v1/namespaces/proxy-1288/pods/http:proxy-service-h6m6w-kp8hw:1080/proxy/: ... (200; 2.60747ms) Oct 30 01:13:57.110: INFO: (10) /api/v1/namespaces/proxy-1288/pods/https:proxy-service-h6m6w-kp8hw:443/proxy/: test<... (200; 2.912753ms) Oct 30 01:13:57.110: INFO: (10) /api/v1/namespaces/proxy-1288/pods/https:proxy-service-h6m6w-kp8hw:460/proxy/: tls baz (200; 3.290348ms) Oct 30 01:13:57.110: INFO: (10) /api/v1/namespaces/proxy-1288/services/https:proxy-service-h6m6w:tlsportname2/proxy/: tls qux (200; 3.198046ms) Oct 30 01:13:57.110: INFO: (10) /api/v1/namespaces/proxy-1288/services/proxy-service-h6m6w:portname2/proxy/: bar (200; 3.361567ms) Oct 30 01:13:57.110: INFO: (10) /api/v1/namespaces/proxy-1288/services/http:proxy-service-h6m6w:portname2/proxy/: bar (200; 3.452937ms) Oct 30 01:13:57.111: INFO: (10) /api/v1/namespaces/proxy-1288/services/http:proxy-service-h6m6w:portname1/proxy/: foo (200; 3.744809ms) Oct 30 01:13:57.111: INFO: (10) /api/v1/namespaces/proxy-1288/services/proxy-service-h6m6w:portname1/proxy/: foo (200; 3.930788ms) Oct 30 01:13:57.111: INFO: (10) /api/v1/namespaces/proxy-1288/services/https:proxy-service-h6m6w:tlsportname1/proxy/: tls baz (200; 3.989022ms) Oct 30 01:13:57.113: INFO: (11) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw:162/proxy/: bar (200; 2.08941ms) Oct 30 01:13:57.114: INFO: (11) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw:160/proxy/: foo (200; 2.248437ms) Oct 30 01:13:57.114: INFO: (11) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw/proxy/: test (200; 2.397386ms) Oct 30 01:13:57.114: INFO: (11) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw:1080/proxy/: test<... (200; 2.301267ms) Oct 30 01:13:57.114: INFO: (11) /api/v1/namespaces/proxy-1288/pods/http:proxy-service-h6m6w-kp8hw:162/proxy/: bar (200; 2.555521ms) Oct 30 01:13:57.115: INFO: (11) /api/v1/namespaces/proxy-1288/pods/http:proxy-service-h6m6w-kp8hw:1080/proxy/: ... 
(200; 4.06534ms) Oct 30 01:13:57.116: INFO: (11) /api/v1/namespaces/proxy-1288/pods/https:proxy-service-h6m6w-kp8hw:443/proxy/: test (200; 2.030514ms) Oct 30 01:13:57.119: INFO: (12) /api/v1/namespaces/proxy-1288/pods/https:proxy-service-h6m6w-kp8hw:462/proxy/: tls qux (200; 2.072427ms) Oct 30 01:13:57.120: INFO: (12) /api/v1/namespaces/proxy-1288/pods/https:proxy-service-h6m6w-kp8hw:460/proxy/: tls baz (200; 2.245754ms) Oct 30 01:13:57.120: INFO: (12) /api/v1/namespaces/proxy-1288/pods/http:proxy-service-h6m6w-kp8hw:160/proxy/: foo (200; 2.145828ms) Oct 30 01:13:57.120: INFO: (12) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw:162/proxy/: bar (200; 2.699087ms) Oct 30 01:13:57.120: INFO: (12) /api/v1/namespaces/proxy-1288/pods/http:proxy-service-h6m6w-kp8hw:1080/proxy/: ... (200; 2.626188ms) Oct 30 01:13:57.120: INFO: (12) /api/v1/namespaces/proxy-1288/pods/http:proxy-service-h6m6w-kp8hw:162/proxy/: bar (200; 2.685893ms) Oct 30 01:13:57.120: INFO: (12) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw:160/proxy/: foo (200; 3.062645ms) Oct 30 01:13:57.121: INFO: (12) /api/v1/namespaces/proxy-1288/pods/https:proxy-service-h6m6w-kp8hw:443/proxy/: test<... (200; 3.662008ms) Oct 30 01:13:57.121: INFO: (12) /api/v1/namespaces/proxy-1288/services/http:proxy-service-h6m6w:portname2/proxy/: bar (200; 3.813271ms) Oct 30 01:13:57.121: INFO: (12) /api/v1/namespaces/proxy-1288/services/proxy-service-h6m6w:portname1/proxy/: foo (200; 3.758276ms) Oct 30 01:13:57.121: INFO: (12) /api/v1/namespaces/proxy-1288/services/http:proxy-service-h6m6w:portname1/proxy/: foo (200; 3.919466ms) Oct 30 01:13:57.121: INFO: (12) /api/v1/namespaces/proxy-1288/services/https:proxy-service-h6m6w:tlsportname1/proxy/: tls baz (200; 3.877395ms) Oct 30 01:13:57.121: INFO: (12) /api/v1/namespaces/proxy-1288/services/proxy-service-h6m6w:portname2/proxy/: bar (200; 4.121183ms) Oct 30 01:13:57.122: INFO: (12) /api/v1/namespaces/proxy-1288/services/https:proxy-service-h6m6w:tlsportname2/proxy/: tls qux (200; 4.347139ms) Oct 30 01:13:57.124: INFO: (13) /api/v1/namespaces/proxy-1288/pods/http:proxy-service-h6m6w-kp8hw:160/proxy/: foo (200; 1.878935ms) Oct 30 01:13:57.124: INFO: (13) /api/v1/namespaces/proxy-1288/pods/http:proxy-service-h6m6w-kp8hw:162/proxy/: bar (200; 2.156116ms) Oct 30 01:13:57.124: INFO: (13) /api/v1/namespaces/proxy-1288/pods/https:proxy-service-h6m6w-kp8hw:443/proxy/: ... (200; 2.818873ms) Oct 30 01:13:57.125: INFO: (13) /api/v1/namespaces/proxy-1288/services/proxy-service-h6m6w:portname2/proxy/: bar (200; 2.948871ms) Oct 30 01:13:57.125: INFO: (13) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw/proxy/: test (200; 2.869697ms) Oct 30 01:13:57.125: INFO: (13) /api/v1/namespaces/proxy-1288/services/proxy-service-h6m6w:portname1/proxy/: foo (200; 3.371788ms) Oct 30 01:13:57.125: INFO: (13) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw:1080/proxy/: test<... 
(200; 3.156213ms) Oct 30 01:13:57.125: INFO: (13) /api/v1/namespaces/proxy-1288/services/https:proxy-service-h6m6w:tlsportname1/proxy/: tls baz (200; 3.130111ms) Oct 30 01:13:57.125: INFO: (13) /api/v1/namespaces/proxy-1288/services/https:proxy-service-h6m6w:tlsportname2/proxy/: tls qux (200; 3.562848ms) Oct 30 01:13:57.125: INFO: (13) /api/v1/namespaces/proxy-1288/services/http:proxy-service-h6m6w:portname2/proxy/: bar (200; 3.534383ms) Oct 30 01:13:57.126: INFO: (13) /api/v1/namespaces/proxy-1288/services/http:proxy-service-h6m6w:portname1/proxy/: foo (200; 3.78076ms) Oct 30 01:13:57.128: INFO: (14) /api/v1/namespaces/proxy-1288/pods/https:proxy-service-h6m6w-kp8hw:460/proxy/: tls baz (200; 2.138844ms) Oct 30 01:13:57.128: INFO: (14) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw:1080/proxy/: test<... (200; 2.049979ms) Oct 30 01:13:57.128: INFO: (14) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw/proxy/: test (200; 2.244134ms) Oct 30 01:13:57.129: INFO: (14) /api/v1/namespaces/proxy-1288/pods/http:proxy-service-h6m6w-kp8hw:162/proxy/: bar (200; 2.705315ms) Oct 30 01:13:57.129: INFO: (14) /api/v1/namespaces/proxy-1288/pods/http:proxy-service-h6m6w-kp8hw:1080/proxy/: ... (200; 2.711669ms) Oct 30 01:13:57.129: INFO: (14) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw:160/proxy/: foo (200; 2.725703ms) Oct 30 01:13:57.129: INFO: (14) /api/v1/namespaces/proxy-1288/pods/https:proxy-service-h6m6w-kp8hw:443/proxy/: test<... (200; 2.084946ms) Oct 30 01:13:57.133: INFO: (15) /api/v1/namespaces/proxy-1288/pods/http:proxy-service-h6m6w-kp8hw:160/proxy/: foo (200; 3.148586ms) Oct 30 01:13:57.133: INFO: (15) /api/v1/namespaces/proxy-1288/pods/https:proxy-service-h6m6w-kp8hw:462/proxy/: tls qux (200; 3.214552ms) Oct 30 01:13:57.133: INFO: (15) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw:162/proxy/: bar (200; 3.281004ms) Oct 30 01:13:57.133: INFO: (15) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw/proxy/: test (200; 3.388625ms) Oct 30 01:13:57.133: INFO: (15) /api/v1/namespaces/proxy-1288/pods/http:proxy-service-h6m6w-kp8hw:1080/proxy/: ... (200; 3.24729ms) Oct 30 01:13:57.133: INFO: (15) /api/v1/namespaces/proxy-1288/services/http:proxy-service-h6m6w:portname1/proxy/: foo (200; 3.301146ms) Oct 30 01:13:57.133: INFO: (15) /api/v1/namespaces/proxy-1288/services/https:proxy-service-h6m6w:tlsportname2/proxy/: tls qux (200; 3.276866ms) Oct 30 01:13:57.134: INFO: (15) /api/v1/namespaces/proxy-1288/services/proxy-service-h6m6w:portname2/proxy/: bar (200; 3.607652ms) Oct 30 01:13:57.134: INFO: (15) /api/v1/namespaces/proxy-1288/services/https:proxy-service-h6m6w:tlsportname1/proxy/: tls baz (200; 3.752983ms) Oct 30 01:13:57.134: INFO: (15) /api/v1/namespaces/proxy-1288/pods/https:proxy-service-h6m6w-kp8hw:443/proxy/: ... (200; 2.662329ms) Oct 30 01:13:57.137: INFO: (16) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw:1080/proxy/: test<... (200; 2.68161ms) Oct 30 01:13:57.137: INFO: (16) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw/proxy/: test (200; 2.82624ms) Oct 30 01:13:57.137: INFO: (16) /api/v1/namespaces/proxy-1288/pods/https:proxy-service-h6m6w-kp8hw:460/proxy/: tls baz (200; 2.920017ms) Oct 30 01:13:57.137: INFO: (16) /api/v1/namespaces/proxy-1288/pods/https:proxy-service-h6m6w-kp8hw:443/proxy/: test<... 
(200; 2.099643ms) Oct 30 01:13:57.141: INFO: (17) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw/proxy/: test (200; 2.311832ms) Oct 30 01:13:57.142: INFO: (17) /api/v1/namespaces/proxy-1288/pods/http:proxy-service-h6m6w-kp8hw:160/proxy/: foo (200; 2.439919ms) Oct 30 01:13:57.142: INFO: (17) /api/v1/namespaces/proxy-1288/pods/https:proxy-service-h6m6w-kp8hw:462/proxy/: tls qux (200; 2.49045ms) Oct 30 01:13:57.142: INFO: (17) /api/v1/namespaces/proxy-1288/pods/https:proxy-service-h6m6w-kp8hw:460/proxy/: tls baz (200; 2.77824ms) Oct 30 01:13:57.142: INFO: (17) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw:162/proxy/: bar (200; 3.052639ms) Oct 30 01:13:57.142: INFO: (17) /api/v1/namespaces/proxy-1288/pods/https:proxy-service-h6m6w-kp8hw:443/proxy/: ... (200; 3.088431ms) Oct 30 01:13:57.143: INFO: (17) /api/v1/namespaces/proxy-1288/services/http:proxy-service-h6m6w:portname1/proxy/: foo (200; 3.611886ms) Oct 30 01:13:57.143: INFO: (17) /api/v1/namespaces/proxy-1288/services/proxy-service-h6m6w:portname1/proxy/: foo (200; 3.72284ms) Oct 30 01:13:57.143: INFO: (17) /api/v1/namespaces/proxy-1288/services/https:proxy-service-h6m6w:tlsportname1/proxy/: tls baz (200; 3.826788ms) Oct 30 01:13:57.143: INFO: (17) /api/v1/namespaces/proxy-1288/services/http:proxy-service-h6m6w:portname2/proxy/: bar (200; 3.810434ms) Oct 30 01:13:57.143: INFO: (17) /api/v1/namespaces/proxy-1288/services/proxy-service-h6m6w:portname2/proxy/: bar (200; 4.024697ms) Oct 30 01:13:57.146: INFO: (18) /api/v1/namespaces/proxy-1288/pods/https:proxy-service-h6m6w-kp8hw:443/proxy/: ... (200; 2.542308ms) Oct 30 01:13:57.146: INFO: (18) /api/v1/namespaces/proxy-1288/pods/https:proxy-service-h6m6w-kp8hw:460/proxy/: tls baz (200; 2.462241ms) Oct 30 01:13:57.146: INFO: (18) /api/v1/namespaces/proxy-1288/services/proxy-service-h6m6w:portname1/proxy/: foo (200; 2.788349ms) Oct 30 01:13:57.146: INFO: (18) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw:162/proxy/: bar (200; 2.601638ms) Oct 30 01:13:57.146: INFO: (18) /api/v1/namespaces/proxy-1288/pods/http:proxy-service-h6m6w-kp8hw:160/proxy/: foo (200; 2.867619ms) Oct 30 01:13:57.146: INFO: (18) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw:160/proxy/: foo (200; 2.969912ms) Oct 30 01:13:57.147: INFO: (18) /api/v1/namespaces/proxy-1288/pods/http:proxy-service-h6m6w-kp8hw:162/proxy/: bar (200; 3.04421ms) Oct 30 01:13:57.147: INFO: (18) /api/v1/namespaces/proxy-1288/services/https:proxy-service-h6m6w:tlsportname2/proxy/: tls qux (200; 3.216324ms) Oct 30 01:13:57.147: INFO: (18) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw/proxy/: test (200; 3.272333ms) Oct 30 01:13:57.147: INFO: (18) /api/v1/namespaces/proxy-1288/pods/https:proxy-service-h6m6w-kp8hw:462/proxy/: tls qux (200; 3.164273ms) Oct 30 01:13:57.147: INFO: (18) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw:1080/proxy/: test<... 
(200; 3.299755ms) Oct 30 01:13:57.147: INFO: (18) /api/v1/namespaces/proxy-1288/services/proxy-service-h6m6w:portname2/proxy/: bar (200; 3.403393ms) Oct 30 01:13:57.147: INFO: (18) /api/v1/namespaces/proxy-1288/services/https:proxy-service-h6m6w:tlsportname1/proxy/: tls baz (200; 3.446427ms) Oct 30 01:13:57.147: INFO: (18) /api/v1/namespaces/proxy-1288/services/http:proxy-service-h6m6w:portname2/proxy/: bar (200; 3.463447ms) Oct 30 01:13:57.147: INFO: (18) /api/v1/namespaces/proxy-1288/services/http:proxy-service-h6m6w:portname1/proxy/: foo (200; 3.966433ms) Oct 30 01:13:57.149: INFO: (19) /api/v1/namespaces/proxy-1288/pods/http:proxy-service-h6m6w-kp8hw:162/proxy/: bar (200; 1.757272ms) Oct 30 01:13:57.150: INFO: (19) /api/v1/namespaces/proxy-1288/pods/https:proxy-service-h6m6w-kp8hw:443/proxy/: test<... (200; 2.721323ms) Oct 30 01:13:57.150: INFO: (19) /api/v1/namespaces/proxy-1288/pods/http:proxy-service-h6m6w-kp8hw:1080/proxy/: ... (200; 2.79498ms) Oct 30 01:13:57.151: INFO: (19) /api/v1/namespaces/proxy-1288/services/http:proxy-service-h6m6w:portname2/proxy/: bar (200; 2.935109ms) Oct 30 01:13:57.151: INFO: (19) /api/v1/namespaces/proxy-1288/pods/https:proxy-service-h6m6w-kp8hw:460/proxy/: tls baz (200; 2.881016ms) Oct 30 01:13:57.151: INFO: (19) /api/v1/namespaces/proxy-1288/services/proxy-service-h6m6w:portname2/proxy/: bar (200; 3.028029ms) Oct 30 01:13:57.151: INFO: (19) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw/proxy/: test (200; 2.97253ms) Oct 30 01:13:57.151: INFO: (19) /api/v1/namespaces/proxy-1288/pods/http:proxy-service-h6m6w-kp8hw:160/proxy/: foo (200; 3.388009ms) Oct 30 01:13:57.151: INFO: (19) /api/v1/namespaces/proxy-1288/pods/proxy-service-h6m6w-kp8hw:160/proxy/: foo (200; 3.425392ms) Oct 30 01:13:57.151: INFO: (19) /api/v1/namespaces/proxy-1288/services/https:proxy-service-h6m6w:tlsportname1/proxy/: tls baz (200; 3.736062ms) Oct 30 01:13:57.152: INFO: (19) /api/v1/namespaces/proxy-1288/services/proxy-service-h6m6w:portname1/proxy/: foo (200; 3.967891ms) Oct 30 01:13:57.152: INFO: (19) /api/v1/namespaces/proxy-1288/services/http:proxy-service-h6m6w:portname1/proxy/: foo (200; 4.02363ms) STEP: deleting ReplicationController proxy-service-h6m6w in namespace proxy-1288, will wait for the garbage collector to delete the pods Oct 30 01:13:57.211: INFO: Deleting ReplicationController proxy-service-h6m6w took: 6.234166ms Oct 30 01:13:57.311: INFO: Terminating ReplicationController proxy-service-h6m6w pods took: 100.879976ms [AfterEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:14:03.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-1288" for this suite. 
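Every URL in the 320-attempt burst above follows the apiserver proxy pattern /api/v1/namespaces/{ns}/{pods|services}/[scheme:]{name}[:port]/proxy/{path} — which is exactly what this spec verifies for pods and for named service ports, over http and https. A minimal client-go sketch of issuing the same kind of request; the pod, service, and port names here are hypothetical, not the ones from the log.

```go
// Hypothetical sketch: hit apiserver proxy endpoints of the same shape the
// spec above polls, once for a pod port and once for a named service port.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)
	ctx := context.TODO()

	// GET /api/v1/namespaces/default/pods/echo-pod:8080/proxy/hostName
	body, err := cs.CoreV1().RESTClient().Get().
		Namespace("default").
		Resource("pods").
		Name("echo-pod:8080"). // [scheme:]name[:port], e.g. "https:echo-pod:443"
		SubResource("proxy").
		Suffix("hostName").
		DoRaw(ctx)
	must(err)
	fmt.Printf("pod proxy says: %s\n", body)

	// GET /api/v1/namespaces/default/services/echo-svc:portname1/proxy/
	body, err = cs.CoreV1().RESTClient().Get().
		Namespace("default").
		Resource("services").
		Name("echo-svc:portname1"). // a service port may be referenced by name
		SubResource("proxy").
		DoRaw(ctx)
	must(err)
	fmt.Printf("service proxy says: %s\n", body)
}
```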
• [SLOW TEST:17.175 seconds] [sig-network] Proxy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74 should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":-1,"completed":16,"skipped":286,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:13:45.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-9029 STEP: creating service affinity-clusterip in namespace services-9029 STEP: creating replication controller affinity-clusterip in namespace services-9029 I1030 01:13:45.721939 28 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-9029, replica count: 3 I1030 01:13:48.773984 28 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1030 01:13:51.776415 28 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 30 01:13:51.781: INFO: Creating new exec pod Oct 30 01:13:56.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9029 exec execpod-affinityr6kzx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Oct 30 01:13:57.044: INFO: stderr: "+ nc -v -t -w 2 affinity-clusterip 80\n+ echo hostName\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Oct 30 01:13:57.044: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Oct 30 01:13:57.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9029 exec execpod-affinityr6kzx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.29.189 80' Oct 30 01:13:57.305: INFO: stderr: "+ nc -v -t -w 2 10.233.29.189 80\nConnection to 10.233.29.189 80 port [tcp/http] succeeded!\n+ echo hostName\n" Oct 30 01:13:57.305: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Oct 30 01:13:57.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9029 exec execpod-affinityr6kzx -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.29.189:80/ ; done' Oct 30 01:13:57.612: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://10.233.29.189:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.29.189:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.29.189:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.29.189:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.29.189:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.29.189:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.29.189:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.29.189:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.29.189:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.29.189:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.29.189:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.29.189:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.29.189:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.29.189:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.29.189:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.29.189:80/\n" Oct 30 01:13:57.612: INFO: stdout: "\naffinity-clusterip-kfws8\naffinity-clusterip-kfws8\naffinity-clusterip-kfws8\naffinity-clusterip-kfws8\naffinity-clusterip-kfws8\naffinity-clusterip-kfws8\naffinity-clusterip-kfws8\naffinity-clusterip-kfws8\naffinity-clusterip-kfws8\naffinity-clusterip-kfws8\naffinity-clusterip-kfws8\naffinity-clusterip-kfws8\naffinity-clusterip-kfws8\naffinity-clusterip-kfws8\naffinity-clusterip-kfws8\naffinity-clusterip-kfws8" Oct 30 01:13:57.612: INFO: Received response from host: affinity-clusterip-kfws8 Oct 30 01:13:57.612: INFO: Received response from host: affinity-clusterip-kfws8 Oct 30 01:13:57.612: INFO: Received response from host: affinity-clusterip-kfws8 Oct 30 01:13:57.612: INFO: Received response from host: affinity-clusterip-kfws8 Oct 30 01:13:57.612: INFO: Received response from host: affinity-clusterip-kfws8 Oct 30 01:13:57.613: INFO: Received response from host: affinity-clusterip-kfws8 Oct 30 01:13:57.613: INFO: Received response from host: affinity-clusterip-kfws8 Oct 30 01:13:57.613: INFO: Received response from host: affinity-clusterip-kfws8 Oct 30 01:13:57.613: INFO: Received response from host: affinity-clusterip-kfws8 Oct 30 01:13:57.613: INFO: Received response from host: affinity-clusterip-kfws8 Oct 30 01:13:57.613: INFO: Received response from host: affinity-clusterip-kfws8 Oct 30 01:13:57.613: INFO: Received response from host: affinity-clusterip-kfws8 Oct 30 01:13:57.613: INFO: Received response from host: affinity-clusterip-kfws8 Oct 30 01:13:57.613: INFO: Received response from host: affinity-clusterip-kfws8 Oct 30 01:13:57.613: INFO: Received response from host: affinity-clusterip-kfws8 Oct 30 01:13:57.613: INFO: Received response from host: affinity-clusterip-kfws8 Oct 30 01:13:57.613: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-9029, will wait for the garbage collector to delete the pods Oct 30 01:13:57.680: INFO: Deleting ReplicationController affinity-clusterip took: 5.99425ms Oct 30 01:13:57.780: INFO: Terminating ReplicationController affinity-clusterip pods took: 100.168022ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:14:05.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9029" for this suite. 
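All sixteen curl attempts above land on the same backend (affinity-clusterip-kfws8) because the Service pins each client IP to one endpoint. A minimal sketch of the Service shape that produces this behavior, assuming client-go and backend pods labeled app: affinity-demo serving on 8080 (the selector, ports, and names are illustrative):

```go
// Hypothetical sketch: a ClusterIP Service with ClientIP session affinity,
// which is what makes every request in the loop above hit one backend pod.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)

	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-demo"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "affinity-demo"},
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(8080),
			}},
			// Pin each client IP to a single backend; kube-proxy enforces
			// this, with a default stickiness timeout of 3 hours unless
			// SessionAffinityConfig overrides it.
			SessionAffinity: corev1.ServiceAffinityClientIP,
		},
	}
	_, err = cs.CoreV1().Services("default").Create(context.TODO(), svc, metav1.CreateOptions{})
	must(err)
}
```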
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:19.909 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":8,"skipped":138,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:14:02.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:14:02.375: INFO: The status of Pod busybox-readonly-fs2468a5ee-9bc8-4e73-9f99-9f5aa2ef8f44 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:14:04.379: INFO: The status of Pod busybox-readonly-fs2468a5ee-9bc8-4e73-9f99-9f5aa2ef8f44 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:14:06.378: INFO: The status of Pod busybox-readonly-fs2468a5ee-9bc8-4e73-9f99-9f5aa2ef8f44 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:14:08.380: INFO: The status of Pod busybox-readonly-fs2468a5ee-9bc8-4e73-9f99-9f5aa2ef8f44 is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:14:08.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9343" for this suite. 
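The kubelet spec above ("should not write to root filesystem") runs a container with a read-only root filesystem and verifies that writes to / fail while the container keeps running. A minimal sketch of such a pod, assuming client-go; the pod name, image, command, and the writable emptyDir are illustrative.

```go
// Hypothetical sketch: a container with readOnlyRootFilesystem=true; a write
// to / should fail with EROFS, while a mounted emptyDir stays writable.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func boolPtr(b bool) *bool { return &b }

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-fs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "c",
				Image: "busybox",
				// "touch /x" is expected to fail; the emptyDir is not.
				Command: []string{"/bin/sh", "-c",
					"touch /x && echo UNEXPECTED-ROOT-WRITE; touch /scratch/x && echo volume-ok"},
				SecurityContext: &corev1.SecurityContext{
					ReadOnlyRootFilesystem: boolPtr(true),
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/scratch"}},
			}},
			Volumes: []corev1.Volume{{
				Name:         "scratch",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
	_, err = cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	must(err)
}
```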
• [SLOW TEST:6.056 seconds] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when scheduling a read only busybox container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:188 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:14:05.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:14:05.642: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-2690ebd7-aefe-4338-a7b5-af00fa2a16e1" in namespace "security-context-test-6382" to be "Succeeded or Failed" Oct 30 01:14:05.644: INFO: Pod "busybox-readonly-false-2690ebd7-aefe-4338-a7b5-af00fa2a16e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.475383ms Oct 30 01:14:07.647: INFO: Pod "busybox-readonly-false-2690ebd7-aefe-4338-a7b5-af00fa2a16e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005193981s Oct 30 01:14:09.652: INFO: Pod "busybox-readonly-false-2690ebd7-aefe-4338-a7b5-af00fa2a16e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009982586s Oct 30 01:14:09.652: INFO: Pod "busybox-readonly-false-2690ebd7-aefe-4338-a7b5-af00fa2a16e1" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:14:09.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6382" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":142,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:14:09.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:14:09.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5300" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":10,"skipped":153,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:14:09.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a Pod with a 'name' label pod-adoption is created Oct 30 01:14:09.806: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:14:11.810: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:14:13.813: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:14:15.811: INFO: The status of Pod pod-adoption is Running (Ready = true) STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:14:16.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7983" for this suite. 
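The adoption flow logged above is: a bare pod carrying a 'name' label exists first, then a ReplicationController with a matching selector is created and takes ownership of the running pod (adding an ownerReference) instead of spawning a new replica. A minimal sketch of that sequence, assuming client-go; the image and namespace are illustrative.

```go
// Hypothetical sketch: create an orphan pod, then an RC whose selector
// matches it; the controller adopts the pod rather than creating one.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)
	ctx, ns := context.TODO(), "default"
	labels := map[string]string{"name": "pod-adoption"}

	orphan := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption", Labels: labels},
		Spec:       corev1.PodSpec{Containers: []corev1.Container{{Name: "c", Image: "nginx"}}},
	}
	_, err = cs.CoreV1().Pods(ns).Create(ctx, orphan, metav1.CreateOptions{})
	must(err)

	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: int32Ptr(1),
			Selector: labels, // matches the orphan, so it gets adopted
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec:       corev1.PodSpec{Containers: []corev1.Container{{Name: "c", Image: "nginx"}}},
			},
		},
	}
	_, err = cs.CoreV1().ReplicationControllers(ns).Create(ctx, rc, metav1.CreateOptions{})
	must(err)

	// Once the controller syncs, the pod carries an ownerReference to the RC.
	p, err := cs.CoreV1().Pods(ns).Get(ctx, "pod-adoption", metav1.GetOptions{})
	must(err)
	fmt.Println("ownerReferences:", p.OwnerReferences)
}
```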
• [SLOW TEST:7.070 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":11,"skipped":170,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:14:16.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-map-4169962a-6cf6-46c5-a8f6-1c010dfa6da7 STEP: Creating a pod to test consume secrets Oct 30 01:14:16.906: INFO: Waiting up to 5m0s for pod "pod-secrets-0ceebd36-e2ed-4901-88ed-58bf676a1918" in namespace "secrets-2937" to be "Succeeded or Failed" Oct 30 01:14:16.909: INFO: Pod "pod-secrets-0ceebd36-e2ed-4901-88ed-58bf676a1918": Phase="Pending", Reason="", readiness=false. Elapsed: 2.867147ms Oct 30 01:14:18.913: INFO: Pod "pod-secrets-0ceebd36-e2ed-4901-88ed-58bf676a1918": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006786924s Oct 30 01:14:20.917: INFO: Pod "pod-secrets-0ceebd36-e2ed-4901-88ed-58bf676a1918": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010333217s Oct 30 01:14:22.920: INFO: Pod "pod-secrets-0ceebd36-e2ed-4901-88ed-58bf676a1918": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013468129s STEP: Saw pod success Oct 30 01:14:22.920: INFO: Pod "pod-secrets-0ceebd36-e2ed-4901-88ed-58bf676a1918" satisfied condition "Succeeded or Failed" Oct 30 01:14:22.923: INFO: Trying to get logs from node node2 pod pod-secrets-0ceebd36-e2ed-4901-88ed-58bf676a1918 container secret-volume-test: STEP: delete the pod Oct 30 01:14:22.939: INFO: Waiting for pod pod-secrets-0ceebd36-e2ed-4901-88ed-58bf676a1918 to disappear Oct 30 01:14:22.941: INFO: Pod pod-secrets-0ceebd36-e2ed-4901-88ed-58bf676a1918 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:14:22.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2937" for this suite. 
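The Secrets spec above is the plain-volume counterpart of the projected-secret mapping shown earlier: the same key-to-path Items remapping, but via the secret volume type directly, with no projection layer. A minimal sketch, assuming client-go and a pre-existing secret (the secret name, mount path, and the log's container name secret-volume-test are the only anchors; the rest is illustrative):

```go
// Hypothetical sketch: mount a plain secret volume with a key->path mapping;
// compare with the projected-volume variant sketched earlier.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "secret-volume-test",
				Image:        "busybox",
				Command:      []string{"cat", "/etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "secret-test-map-demo", // assumed to already exist
						Items:      []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
					},
				},
			}},
		},
	}
	_, err = cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	must(err)
}
```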
• [SLOW TEST:6.078 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":189,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:14:03.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86
[It] should run the lifecycle of a Deployment [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a Deployment
STEP: waiting for Deployment to be created
STEP: waiting for all Replicas to be Ready
Oct 30 01:14:03.221: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Oct 30 01:14:03.221: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Oct 30 01:14:03.225: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Oct 30 01:14:03.225: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Oct 30 01:14:03.234: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Oct 30 01:14:03.234: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Oct 30 01:14:03.251: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Oct 30 01:14:03.251: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Oct 30 01:14:06.628: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 1 and labels map[test-deployment-static:true]
Oct 30 01:14:06.628: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 1 and labels map[test-deployment-static:true]
Oct 30 01:14:08.730: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 2 and labels map[test-deployment-static:true]
STEP: patching the Deployment
Oct 30 01:14:08.735: INFO: observed event type ADDED
STEP: waiting for Replicas to scale
Oct 30 01:14:08.737: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 0
Oct 30 01:14:08.737: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 0
Oct 30 01:14:08.737: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 0
Oct 30 01:14:08.737: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 0
Oct 30 01:14:08.737: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 0
Oct 30 01:14:08.737: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 0
Oct 30 01:14:08.737: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 0
Oct 30 01:14:08.737: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 0
Oct 30 01:14:08.737: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 1
Oct 30 01:14:08.737: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 1
Oct 30 01:14:08.737: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 2
Oct 30 01:14:08.737: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 2
Oct 30 01:14:08.737: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 2
Oct 30 01:14:08.737: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 2
Oct 30 01:14:08.741: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 2
Oct 30 01:14:08.741: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 2
Oct 30 01:14:08.748: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 2
Oct 30 01:14:08.748: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 2
Oct 30 01:14:08.757: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 1
Oct 30 01:14:08.757: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 1
Oct 30 01:14:08.762: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 1
Oct 30 01:14:08.762: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 1
Oct 30 01:14:13.043: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 2
Oct 30 01:14:13.043: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 2
Oct 30 01:14:13.055: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 1
STEP: listing Deployments
Oct 30 01:14:13.058: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true]
STEP: updating the Deployment
Oct 30 01:14:13.070: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 1
STEP: fetching the DeploymentStatus
Oct 30 01:14:13.076: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Oct 30 01:14:13.076: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Oct 30 01:14:13.083: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Oct 30 01:14:13.091: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Oct 30 01:14:13.098: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Oct 30 01:14:16.909: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true]
Oct 30 01:14:16.916: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true]
Oct 30 01:14:16.920: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true]
Oct 30 01:14:16.923: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true]
Oct 30 01:14:16.932: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true]
Oct 30 01:14:23.604: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true]
STEP: patching the DeploymentStatus
STEP: fetching the DeploymentStatus
Oct 30 01:14:23.628: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 1
Oct 30 01:14:23.628: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 1
Oct 30 01:14:23.628: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 1
Oct 30 01:14:23.628: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 1
Oct 30 01:14:23.628: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 1
Oct 30 01:14:23.628: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 2
Oct 30 01:14:23.628: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 2
Oct 30 01:14:23.628: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 2
Oct 30 01:14:23.628: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 2
Oct 30 01:14:23.628: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 2
Oct 30 01:14:23.628: INFO: observed Deployment test-deployment in namespace deployment-8954 with ReadyReplicas 3
STEP: deleting the Deployment
Oct 30 01:14:23.635: INFO: observed event type MODIFIED
Oct 30 01:14:23.635: INFO: observed event type MODIFIED
Oct 30 01:14:23.635: INFO: observed event type MODIFIED
Oct 30 01:14:23.635: INFO: observed event type MODIFIED
Oct 30 01:14:23.635: INFO: observed event type MODIFIED
Oct 30 01:14:23.635: INFO: observed event type MODIFIED
Oct 30 01:14:23.635: INFO: observed event type MODIFIED
Oct 30 01:14:23.636: INFO: observed event type MODIFIED
Oct 30 01:14:23.636: INFO: observed event type MODIFIED
Oct 30 01:14:23.636: INFO: observed event type MODIFIED
Oct 30 01:14:23.636: INFO: observed event type MODIFIED
Oct 30 01:14:23.636: INFO: observed event type MODIFIED
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80
Oct 30 01:14:23.638: INFO: Log out all the ReplicaSets if there is no
deployment created Oct 30 01:14:23.640: INFO: ReplicaSet "test-deployment-748588b7cd": &ReplicaSet{ObjectMeta:{test-deployment-748588b7cd deployment-8954 a268704f-d4b2-4234-81e5-fa705e24cadf 85360 4 2021-10-30 01:14:08 +0000 UTC map[pod-template-hash:748588b7cd test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-deployment a1f68807-7002-4019-ad12-6747cdc16ae3 0xc0057ea377 0xc0057ea378}] [] [{kube-controller-manager Update apps/v1 2021-10-30 01:14:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a1f68807-7002-4019-ad12-6747cdc16ae3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 748588b7cd,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:748588b7cd test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/pause:3.4.1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0057ea3e0 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:4,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 30 01:14:23.643: INFO: ReplicaSet "test-deployment-7b4c744884": &ReplicaSet{ObjectMeta:{test-deployment-7b4c744884 deployment-8954 c1210014-9d4f-47e3-8d73-80ce0c658d4d 85163 3 2021-10-30 01:14:03 +0000 UTC map[pod-template-hash:7b4c744884 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment a1f68807-7002-4019-ad12-6747cdc16ae3 0xc0057ea447 0xc0057ea448}] [] [{kube-controller-manager Update apps/v1 2021-10-30 01:14:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a1f68807-7002-4019-ad12-6747cdc16ae3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 7b4c744884,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:7b4c744884 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0057ea4b0 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 30 01:14:23.646: INFO: ReplicaSet "test-deployment-85d87c6f4b": &ReplicaSet{ObjectMeta:{test-deployment-85d87c6f4b deployment-8954 0fa16373-736e-4711-a402-7c31cee44bae 85350 2 2021-10-30 01:14:13 +0000 UTC map[pod-template-hash:85d87c6f4b test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:3] [{apps/v1 Deployment test-deployment a1f68807-7002-4019-ad12-6747cdc16ae3 0xc0057ea517 0xc0057ea518}] [] [{kube-controller-manager Update apps/v1 2021-10-30 01:14:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a1f68807-7002-4019-ad12-6747cdc16ae3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 85d87c6f4b,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:85d87c6f4b test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0057ea580 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:2,ReadyReplicas:2,AvailableReplicas:2,Conditions:[]ReplicaSetCondition{},},} Oct 30 01:14:23.650: INFO: pod: "test-deployment-85d87c6f4b-jvs8d": &Pod{ObjectMeta:{test-deployment-85d87c6f4b-jvs8d test-deployment-85d87c6f4b- deployment-8954 352a9a44-21bd-4202-9d92-f96be3f3be96 85258 0 2021-10-30 01:14:13 +0000 UTC map[pod-template-hash:85d87c6f4b test-deployment-static:true] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.73" ], "mac": "aa:7a:14:60:82:da", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.73" ], "mac": "aa:7a:14:60:82:da", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-deployment-85d87c6f4b 0fa16373-736e-4711-a402-7c31cee44bae 0xc0057eac07 0xc0057eac08}] [] [{kube-controller-manager Update v1 2021-10-30 01:14:13 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0fa16373-736e-4711-a402-7c31cee44bae\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus 
Update v1 2021-10-30 01:14:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-30 01:14:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.73\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-w4ll7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w4ll7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Oper
ator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:14:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:14:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:14:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:14:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.73,StartTime:2021-10-30 01:14:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-30 01:14:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://aea31916d171454987c726edd0feef53ce65fd33ced7fac43d09db5b8a7a8468,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.73,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 30 01:14:23.650: INFO: pod: "test-deployment-85d87c6f4b-sxqfq": &Pod{ObjectMeta:{test-deployment-85d87c6f4b-sxqfq test-deployment-85d87c6f4b- deployment-8954 dda3b561-3641-4032-b9ed-3ba9c065b6c0 85349 0 2021-10-30 01:14:16 +0000 UTC map[pod-template-hash:85d87c6f4b test-deployment-static:true] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.184" ], "mac": "ce:5d:ef:03:14:99", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.184" ], "mac": "ce:5d:ef:03:14:99", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-deployment-85d87c6f4b 0fa16373-736e-4711-a402-7c31cee44bae 0xc0057eadff 0xc0057eae10}] [] [{kube-controller-manager Update v1 2021-10-30 01:14:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0fa16373-736e-4711-a402-7c31cee44bae\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-30 01:14:19 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-30 01:14:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.184\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-pzcrc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pzcrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSe
conds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:14:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:14:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:14:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:14:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.184,StartTime:2021-10-30 01:14:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-30 01:14:23 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://be0c2329531f20ba2faf7ca4e99fc4b420ce563b2c8c6585753b0c4f9d9bffc7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.184,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:14:23.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8954" for this suite. 
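The lifecycle the test walks through (create, patch, update, patch /status, delete) maps onto a handful of client-go calls. A sketch reusing mustClient from the adoption example; the patch bodies are assumptions kept consistent with the labels visible in the log, not the framework's exact payloads:

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

func main() {
	cs, ctx, ns := mustClient(), context.TODO(), "deployment-8954"

	// "patching the Deployment": a strategic-merge patch relabels and rescales.
	patch := []byte(`{"metadata":{"labels":{"test-deployment":"patched"}},"spec":{"replicas":2}}`)
	if _, err := cs.AppsV1().Deployments(ns).Patch(ctx, "test-deployment",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// "updating the Deployment": a read-modify-write of labels and pod template.
	d, err := cs.AppsV1().Deployments(ns).Get(ctx, "test-deployment", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	d.Labels["test-deployment"] = "updated"
	d.Spec.Template.Spec.Containers[0].Image = "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1"
	if _, err := cs.AppsV1().Deployments(ns).Update(ctx, d, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// "patching the DeploymentStatus": the status subresource is patched separately.
	if _, err := cs.AppsV1().Deployments(ns).Patch(ctx, "test-deployment",
		types.MergePatchType, []byte(`{"status":{"readyReplicas":3}}`),
		metav1.PatchOptions{}, "status"); err != nil {
		panic(err)
	}

	// "deleting the Deployment".
	if err := cs.AppsV1().Deployments(ns).Delete(ctx, "test-deployment", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}
```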
• [SLOW TEST:20.466 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run the lifecycle of a Deployment [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":17,"skipped":319,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:14:22.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69
[BeforeEach] Listing PodDisruptionBudgets for all namespaces
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:14:22.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename disruption-2
STEP: Waiting for a default service account to be provisioned in namespace
[It] should list and delete a collection of PodDisruptionBudgets [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Waiting for the pdb to be processed
STEP: Waiting for the pdb to be processed
STEP: Waiting for the pdb to be processed
STEP: listing a collection of PDBs across all namespaces
STEP: listing a collection of PDBs in namespace disruption-4798
STEP: deleting a collection of PDBs
STEP: Waiting for the PDB collection to be deleted
[AfterEach] Listing PodDisruptionBudgets for all namespaces
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:14:27.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-2-9180" for this suite.
[AfterEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:14:27.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-4798" for this suite.
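Listing across all namespaces and deleting as a collection are each single API calls (an empty namespace argument means cluster-wide). A sketch with an illustrative PDB, reusing mustClient:

```go
package main

import (
	"context"
	"fmt"

	policyv1 "k8s.io/api/policy/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	cs, ctx, ns := mustClient(), context.TODO(), "default"

	// Create a PDB so there is something to list; selector and minAvailable are illustrative.
	min := intstr.FromInt(1)
	pdb := &policyv1.PodDisruptionBudget{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pdb", Labels: map[string]string{"suite": "e2e"}},
		Spec: policyv1.PodDisruptionBudgetSpec{
			MinAvailable: &min,
			Selector:     &metav1.LabelSelector{MatchLabels: map[string]string{"app": "demo"}},
		},
	}
	if _, err := cs.PolicyV1().PodDisruptionBudgets(ns).Create(ctx, pdb, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// "listing a collection of PDBs across all namespaces": empty namespace means cluster-wide.
	all, err := cs.PolicyV1().PodDisruptionBudgets("").List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("PDBs cluster-wide:", len(all.Items))

	// "deleting a collection of PDBs": one call removes everything the selector matches.
	if err := cs.PolicyV1().PodDisruptionBudgets(ns).DeleteCollection(ctx,
		metav1.DeleteOptions{}, metav1.ListOptions{LabelSelector: "suite=e2e"}); err != nil {
		panic(err)
	}
}
```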
•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":13,"skipped":195,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:14:23.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-fdcb3b6f-88a6-4271-9ff3-49c9a38aa329
STEP: Creating a pod to test consume configMaps
Oct 30 01:14:23.738: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bffb495d-b3c8-4655-8fc8-0baa73f35f5c" in namespace "projected-9471" to be "Succeeded or Failed"
Oct 30 01:14:23.740: INFO: Pod "pod-projected-configmaps-bffb495d-b3c8-4655-8fc8-0baa73f35f5c": Phase="Pending", Reason="", readiness=false. Elapsed: 1.916003ms
Oct 30 01:14:25.742: INFO: Pod "pod-projected-configmaps-bffb495d-b3c8-4655-8fc8-0baa73f35f5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004418538s
Oct 30 01:14:27.746: INFO: Pod "pod-projected-configmaps-bffb495d-b3c8-4655-8fc8-0baa73f35f5c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008340433s
Oct 30 01:14:29.752: INFO: Pod "pod-projected-configmaps-bffb495d-b3c8-4655-8fc8-0baa73f35f5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013895619s
STEP: Saw pod success
Oct 30 01:14:29.752: INFO: Pod "pod-projected-configmaps-bffb495d-b3c8-4655-8fc8-0baa73f35f5c" satisfied condition "Succeeded or Failed"
Oct 30 01:14:29.754: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-bffb495d-b3c8-4655-8fc8-0baa73f35f5c container agnhost-container:
STEP: delete the pod
Oct 30 01:14:29.766: INFO: Waiting for pod pod-projected-configmaps-bffb495d-b3c8-4655-8fc8-0baa73f35f5c to disappear
Oct 30 01:14:29.768: INFO: Pod pod-projected-configmaps-bffb495d-b3c8-4655-8fc8-0baa73f35f5c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:14:29.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9471" for this suite.
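The "mappings and Item mode set" case combines a projected configMap source, a key-to-path mapping, and a per-file mode. A sketch of the volume shape, reusing mustClient (names, paths, and the agnhost mounttest invocation are illustrative assumptions):

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cs, ctx, ns := mustClient(), context.TODO(), "default"

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-configmap-test"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	if _, err := cs.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	mode := int32(0400) // the "Item mode set" part: per-file permissions
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test"},
						// Key mapped to a nested path, with an explicit mode.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2", Mode: &mode}},
					}}},
				}},
			}},
			Containers: []corev1.Container{{
				Name:  "agnhost-container",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
				Args:  []string{"mounttest", "--file_mode=/etc/projected-configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```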
• [SLOW TEST:6.075 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":342,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:13:29.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with configMap that has name projected-configmap-test-upd-1b5ef1d6-9025-4e22-83c2-a8dc4a36c77c
STEP: Creating the pod
Oct 30 01:13:29.337: INFO: The status of Pod pod-projected-configmaps-e93bb9e5-62a3-42f9-a39f-69720a637dc1 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 01:13:31.340: INFO: The status of Pod pod-projected-configmaps-e93bb9e5-62a3-42f9-a39f-69720a637dc1 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 01:13:33.342: INFO: The status of Pod pod-projected-configmaps-e93bb9e5-62a3-42f9-a39f-69720a637dc1 is Running (Ready = true)
STEP: Updating configmap projected-configmap-test-upd-1b5ef1d6-9025-4e22-83c2-a8dc4a36c77c
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:14:37.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9214" for this suite.
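The long tail of this test (roughly a minute between "Updating configmap" and teardown) is the kubelet's periodic volume sync: configMap-backed projected volumes are refreshed in place, with no pod restart. A sketch of the update call, reusing mustClient; the object name here is illustrative, since the run used a generated suffix:

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cs, ctx, ns := mustClient(), context.TODO(), "default"
	name := "projected-configmap-test-upd" // illustrative; the run appended a UUID

	cm, err := cs.CoreV1().ConfigMaps(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	cm.Data = map[string]string{"data-1": "value-2"}
	if _, err := cs.CoreV1().ConfigMaps(ns).Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	// A container polling the projected file sees value-2 once the kubelet's
	// sync loop refreshes the volume, typically on the order of a minute with
	// default settings, which matches the 67s runtime logged above.
}
```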
• [SLOW TEST:67.803 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":358,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:14:27.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Oct 30 01:14:33.180: INFO: &Pod{ObjectMeta:{send-events-795e3965-9b3c-447b-babd-a33488527fa4 events-6167 b9871f7d-0ac5-48e9-b4b8-5b9849aeda61 85576 0 2021-10-30 01:14:27 +0000 UTC map[name:foo time:155762080] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.186" ], "mac": "16:ab:5d:51:8b:42", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.186" ], "mac": "16:ab:5d:51:8b:42", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [] [] [{e2e.test Update v1 2021-10-30 01:14:27 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-30 01:14:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-30 01:14:32 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.186\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-ft4nq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ft4nq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace
:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:14:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:14:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:14:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:14:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.186,StartTime:2021-10-30 01:14:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-30 01:14:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://43f1931874f9d2f6399138f11555a34da8980fc27331767e11b903acb3c5c74d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.186,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Oct 30 01:14:35.186: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Oct 30 01:14:37.190: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:14:37.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-6167" for this suite. 
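The "checking for scheduler event / kubelet event" steps are event-list calls filtered by the involved pod and the reporting source. A sketch, reusing mustClient and assuming the standard event field selectors, with "default-scheduler" and "kubelet" as the source values:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
)

func main() {
	cs, ctx, ns := mustClient(), context.TODO(), "events-6167"
	podName := "send-events-795e3965-9b3c-447b-babd-a33488527fa4"

	for _, source := range []string{"default-scheduler", "kubelet"} {
		sel := fields.Set{
			"involvedObject.kind":      "Pod",
			"involvedObject.name":      podName,
			"involvedObject.namespace": ns,
			"source":                   source,
		}.AsSelector().String()
		evs, err := cs.CoreV1().Events(ns).List(ctx, metav1.ListOptions{FieldSelector: sel})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s events for %s: %d\n", source, podName, len(evs.Items))
	}
}
```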
• [SLOW TEST:10.069 seconds]
[sig-node] Events
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":-1,"completed":14,"skipped":233,"failed":0}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:13:38.850: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W1030 01:13:39.915129 27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 30 01:14:41.930: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:14:41.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3266" for this suite.
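Orphan propagation is expressed entirely through DeleteOptions: the Deployment object goes away, and the garbage collector strips the ownerReference from the surviving ReplicaSet instead of deleting it. A sketch, reusing mustClient (the deployment name is illustrative):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cs, ctx, ns := mustClient(), context.TODO(), "default"

	// Delete the Deployment but leave its children behind.
	orphan := metav1.DeletePropagationOrphan
	if err := cs.AppsV1().Deployments(ns).Delete(ctx, "test-deployment",
		metav1.DeleteOptions{PropagationPolicy: &orphan}); err != nil {
		panic(err)
	}

	// The ReplicaSet survives; once the GC has processed it, the ownerReference
	// to the deleted Deployment is gone.
	rss, err := cs.AppsV1().ReplicaSets(ns).List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, rs := range rss.Items {
		fmt.Println(rs.Name, "owners:", len(rs.OwnerReferences))
	}
}
```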
• [SLOW TEST:63.090 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":7,"skipped":79,"failed":0}
SS
------------------------------
[BeforeEach] [sig-network] Ingress API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:14:41.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename ingress
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support creating Ingress API operations [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Oct 30 01:14:41.987: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Oct 30 01:14:41.991: INFO: starting watch
STEP: patching
STEP: updating
Oct 30 01:14:42.009: INFO: waiting for watch events with expected annotations
Oct 30 01:14:42.009: INFO: saw patched and updated annotations
STEP: patching /status
STEP: updating /status
STEP: get /status
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] Ingress API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:14:42.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-1432" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":8,"skipped":81,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:14:37.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override arguments
Oct 30 01:14:37.139: INFO: Waiting up to 5m0s for pod "client-containers-6c3eee09-2df9-40bc-b24c-cab82c4661b1" in namespace "containers-1006" to be "Succeeded or Failed"
Oct 30 01:14:37.142: INFO: Pod "client-containers-6c3eee09-2df9-40bc-b24c-cab82c4661b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.23852ms
Oct 30 01:14:39.145: INFO: Pod "client-containers-6c3eee09-2df9-40bc-b24c-cab82c4661b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00526088s
Oct 30 01:14:41.148: INFO: Pod "client-containers-6c3eee09-2df9-40bc-b24c-cab82c4661b1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008320778s
Oct 30 01:14:43.151: INFO: Pod "client-containers-6c3eee09-2df9-40bc-b24c-cab82c4661b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.011837166s
STEP: Saw pod success
Oct 30 01:14:43.151: INFO: Pod "client-containers-6c3eee09-2df9-40bc-b24c-cab82c4661b1" satisfied condition "Succeeded or Failed"
Oct 30 01:14:43.153: INFO: Trying to get logs from node node1 pod client-containers-6c3eee09-2df9-40bc-b24c-cab82c4661b1 container agnhost-container:
STEP: delete the pod
Oct 30 01:14:43.200: INFO: Waiting for pod client-containers-6c3eee09-2df9-40bc-b24c-cab82c4661b1 to disappear
Oct 30 01:14:43.202: INFO: Pod client-containers-6c3eee09-2df9-40bc-b24c-cab82c4661b1 no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:14:43.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1006" for this suite.

• [SLOW TEST:6.103 seconds]
[sig-node] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":361,"failed":0}
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:14:43.213: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Oct 30 01:14:43.250: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-1618 ee918280-3a22-4560-bc72-de5db6953df5 85804 0 2021-10-30 01:14:43 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-10-30 01:14:43 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Oct 30 01:14:43.250: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-1618 ee918280-3a22-4560-bc72-de5db6953df5 85805 0 2021-10-30 01:14:43 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-10-30 01:14:43 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:14:43.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1618" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":18,"skipped":361,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":230,"failed":0}
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:14:08.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:14:43.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3196" for this suite.
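For context on what this blackbox test asserted: the three containers, terminate-cmd-rpa, terminate-cmd-rpof and terminate-cmd-rpn, appear to encode the restart policies Always, OnFailure and Never, and for each variant the suite checks RestartCount, the pod Phase, the Ready condition and the container State after the command exits. A minimal client-go sketch that reads those same status fields outside the suite; the namespace and pod name are copied from this run and are purely illustrative:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Build a client from the same kubeconfig the suite points at.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Namespace and pod name taken from this run; substitute your own.
    	pod, err := cs.CoreV1().Pods("container-runtime-3196").Get(context.TODO(), "terminate-cmd-rpa", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	// These are the fields the blackbox test asserts on.
    	fmt.Println("phase:", pod.Status.Phase)
    	for _, s := range pod.Status.ContainerStatuses {
    		fmt.Printf("container=%s restarts=%d ready=%v state=%+v\n",
    			s.Name, s.RestartCount, s.Ready, s.State)
    	}
    }

The interesting difference between the variants is how these fields settle: with RestartPolicy Never a container that exits leaves the pod in a terminal phase with RestartCount 0, while Always keeps restarting the container and incrementing RestartCount.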
• [SLOW TEST:35.251 seconds]
[sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  blackbox test
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when starting a container that exits
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:42
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":230,"failed":0}
S
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:12:13.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should have session affinity work for NodePort service [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating service in namespace services-5100
STEP: creating service affinity-nodeport in namespace services-5100
STEP: creating replication controller affinity-nodeport in namespace services-5100
I1030 01:12:13.620307 37 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-5100, replica count: 3
I1030 01:12:16.672210 37 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1030 01:12:19.673772 37 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 30 01:12:19.683: INFO: Creating new exec pod
Oct 30 01:12:24.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct 30 01:12:24.956: INFO: stderr: "+ nc -v -t -w 2 affinity-nodeport 80\n+ echo hostName\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n"
Oct 30 01:12:24.956: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Oct 30 01:12:24.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.56.205 80'
Oct 30 01:12:25.687: INFO: stderr: "+ nc -v -t -w 2 10.233.56.205 80\n+ echo hostName\nConnection to 10.233.56.205 80 port [tcp/http] succeeded!\n"
Oct 30 01:12:25.687: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Oct 30 01:12:25.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300'
Oct 30 01:12:25.937: INFO: rc: 1
Oct 30 01:12:25.937: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 32300
+ echo hostName
nc: connect to 10.10.190.207 port 32300 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
[... the same probe was retried roughly once per second, every attempt failing with "nc: connect to 10.10.190.207 port 32300 (tcp) failed: Connection refused" (rc: 1), from Oct 30 01:12:26.938 through Oct 30 01:13:46.179; the identical retry blocks are collapsed here, and the excerpt ends with the test still retrying ...]
Oct 30 01:13:46.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300' Oct 30 01:13:47.183: INFO: rc: 1 Oct 30 01:13:47.183: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32300 + echo hostName nc: connect to 10.10.190.207 port 32300 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:13:47.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300' Oct 30 01:13:48.482: INFO: rc: 1 Oct 30 01:13:48.483: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32300 + echo hostName nc: connect to 10.10.190.207 port 32300 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:13:48.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300' Oct 30 01:13:49.178: INFO: rc: 1 Oct 30 01:13:49.178: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32300 + echo hostName nc: connect to 10.10.190.207 port 32300 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:13:49.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300' Oct 30 01:13:50.180: INFO: rc: 1 Oct 30 01:13:50.180: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32300 + echo hostName nc: connect to 10.10.190.207 port 32300 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:13:50.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300' Oct 30 01:13:51.166: INFO: rc: 1 Oct 30 01:13:51.166: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32300 + echo hostName nc: connect to 10.10.190.207 port 32300 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:13:51.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300' Oct 30 01:13:52.278: INFO: rc: 1 Oct 30 01:13:52.278: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32300 + echo hostName nc: connect to 10.10.190.207 port 32300 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:13:52.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300' Oct 30 01:13:53.417: INFO: rc: 1 Oct 30 01:13:53.417: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32300 + echo hostName nc: connect to 10.10.190.207 port 32300 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:13:53.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300' Oct 30 01:13:54.239: INFO: rc: 1 Oct 30 01:13:54.239: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32300 + echo hostName nc: connect to 10.10.190.207 port 32300 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:13:54.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300' Oct 30 01:13:55.219: INFO: rc: 1 Oct 30 01:13:55.219: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32300 + echo hostName nc: connect to 10.10.190.207 port 32300 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:13:55.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300' Oct 30 01:13:56.192: INFO: rc: 1 Oct 30 01:13:56.192: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32300 + echo hostName nc: connect to 10.10.190.207 port 32300 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:13:56.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300' Oct 30 01:13:57.201: INFO: rc: 1 Oct 30 01:13:57.201: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32300 + echo hostName nc: connect to 10.10.190.207 port 32300 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:13:57.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300' Oct 30 01:13:58.201: INFO: rc: 1 Oct 30 01:13:58.201: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32300 + echo hostName nc: connect to 10.10.190.207 port 32300 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:13:58.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300' Oct 30 01:13:59.537: INFO: rc: 1 Oct 30 01:13:59.537: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32300 + echo hostName nc: connect to 10.10.190.207 port 32300 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:13:59.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300' Oct 30 01:14:00.352: INFO: rc: 1 Oct 30 01:14:00.352: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32300 + echo hostName nc: connect to 10.10.190.207 port 32300 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:14:00.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300' Oct 30 01:14:01.340: INFO: rc: 1 Oct 30 01:14:01.340: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32300 + echo hostName nc: connect to 10.10.190.207 port 32300 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:14:01.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300' Oct 30 01:14:02.197: INFO: rc: 1 Oct 30 01:14:02.197: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32300 + echo hostName nc: connect to 10.10.190.207 port 32300 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:14:02.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300' Oct 30 01:14:03.184: INFO: rc: 1 Oct 30 01:14:03.184: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32300 + echo hostName nc: connect to 10.10.190.207 port 32300 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:14:03.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300' Oct 30 01:14:04.288: INFO: rc: 1 Oct 30 01:14:04.289: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32300 + echo hostName nc: connect to 10.10.190.207 port 32300 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:14:04.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300' Oct 30 01:14:05.285: INFO: rc: 1 Oct 30 01:14:05.285: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32300 + echo hostName nc: connect to 10.10.190.207 port 32300 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:14:05.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300' Oct 30 01:14:06.234: INFO: rc: 1 Oct 30 01:14:06.234: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32300 + echo hostName nc: connect to 10.10.190.207 port 32300 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:14:06.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300' Oct 30 01:14:07.593: INFO: rc: 1 Oct 30 01:14:07.593: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32300 + echo hostName nc: connect to 10.10.190.207 port 32300 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:14:07.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300' Oct 30 01:14:08.181: INFO: rc: 1 Oct 30 01:14:08.181: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32300 + echo hostName nc: connect to 10.10.190.207 port 32300 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:14:08.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300' Oct 30 01:14:09.561: INFO: rc: 1 Oct 30 01:14:09.561: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32300 + echo hostName nc: connect to 10.10.190.207 port 32300 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:14:09.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300' Oct 30 01:14:10.586: INFO: rc: 1 Oct 30 01:14:10.586: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32300 + echo hostName nc: connect to 10.10.190.207 port 32300 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:14:10.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300' Oct 30 01:14:12.010: INFO: rc: 1 Oct 30 01:14:12.010: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32300 + echo hostName nc: connect to 10.10.190.207 port 32300 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:14:12.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300' Oct 30 01:14:13.281: INFO: rc: 1 Oct 30 01:14:13.281: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32300 + echo hostName nc: connect to 10.10.190.207 port 32300 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:14:13.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300' Oct 30 01:14:14.193: INFO: rc: 1 Oct 30 01:14:14.193: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32300 + echo hostName nc: connect to 10.10.190.207 port 32300 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:14:14.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300' Oct 30 01:14:15.195: INFO: rc: 1 Oct 30 01:14:15.195: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32300 + echo hostName nc: connect to 10.10.190.207 port 32300 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:14:15.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300' Oct 30 01:14:16.194: INFO: rc: 1 Oct 30 01:14:16.194: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32300 + echo hostName nc: connect to 10.10.190.207 port 32300 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:14:16.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300' Oct 30 01:14:17.185: INFO: rc: 1 Oct 30 01:14:17.185: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32300 + echo hostName nc: connect to 10.10.190.207 port 32300 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:14:17.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300' Oct 30 01:14:18.590: INFO: rc: 1 Oct 30 01:14:18.590: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32300 + echo hostName nc: connect to 10.10.190.207 port 32300 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:14:18.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300' Oct 30 01:14:19.263: INFO: rc: 1 Oct 30 01:14:19.263: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32300 + echo hostName nc: connect to 10.10.190.207 port 32300 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:14:19.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300' Oct 30 01:14:20.183: INFO: rc: 1 Oct 30 01:14:20.183: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32300 + echo hostName nc: connect to 10.10.190.207 port 32300 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:14:20.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300' Oct 30 01:14:21.217: INFO: rc: 1 Oct 30 01:14:21.217: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32300 + echo hostName nc: connect to 10.10.190.207 port 32300 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:14:21.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300' Oct 30 01:14:22.259: INFO: rc: 1 Oct 30 01:14:22.259: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32300 + echo hostName nc: connect to 10.10.190.207 port 32300 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:14:22.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300' Oct 30 01:14:23.206: INFO: rc: 1 Oct 30 01:14:23.206: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32300 + echo hostName nc: connect to 10.10.190.207 port 32300 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:14:23.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300' Oct 30 01:14:24.229: INFO: rc: 1 Oct 30 01:14:24.229: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32300 + echo hostName nc: connect to 10.10.190.207 port 32300 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:14:24.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300' Oct 30 01:14:25.182: INFO: rc: 1 Oct 30 01:14:25.182: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32300 + echo hostName nc: connect to 10.10.190.207 port 32300 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:14:25.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300' Oct 30 01:14:26.172: INFO: rc: 1 Oct 30 01:14:26.172: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32300 + echo hostName nc: connect to 10.10.190.207 port 32300 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:14:26.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300' Oct 30 01:14:26.413: INFO: rc: 1 Oct 30 01:14:26.413: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5100 exec execpod-affinityd5bs7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32300: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32300 + echo hostName nc: connect to 10.10.190.207 port 32300 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:14:26.414: FAIL: Unexpected error: <*errors.errorString | 0xc003f84680>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32300 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32300 over TCP protocol occurred Full Stack Trace k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc001590c60, 0x779f8f8, 0xc002c7af20, 0xc009625680, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572 +0x625 k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBService(...) 
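The loop above is the suite's NodePort reachability probe: from the helper pod execpod-affinityd5bs7 it pipes "hostName" through nc at the node IP and NodePort, retrying until a 2m0s deadline. Below is a minimal Go sketch of such a poll loop, shelling out to kubectl exactly the way the log shows; it is an assumption-level illustration, not the framework's actual implementation behind service.go:2572, and every constant is taken from the log above.

package main

// Hypothetical re-creation of the reachability probe seen in this log:
// exec into the client pod and try the NodePort with nc about once a
// second until a two-minute deadline expires.

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const (
		namespace = "services-5100"
		execPod   = "execpod-affinityd5bs7"
		probe     = "echo hostName | nc -v -t -w 2 10.10.190.207 32300"
	)
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// kubectl exec runs the probe inside the helper pod, so the
		// connection is attempted from inside the cluster network.
		out, err := exec.Command("kubectl", "--kubeconfig", "/root/.kube/config",
			"--namespace", namespace, "exec", execPod,
			"--", "/bin/sh", "-x", "-c", probe).CombinedOutput()
		if err == nil {
			fmt.Printf("service reachable, response: %s\n", out)
			return
		}
		fmt.Printf("rc != 0, retrying: %v\n", err)
		time.Sleep(time.Second)
	}
	fmt.Println("FAIL: service not reachable within 2m0s")
}

In this run every iteration ended in "Connection refused" — nothing answered on 10.10.190.207:32300 even though the backing pods were Running (see the events below) — so the loop exhausted its deadline and the test failed.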
Oct 30 01:14:26.414: FAIL: Unexpected error:
    <*errors.errorString | 0xc003f84680>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32300 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32300 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc001590c60, 0x779f8f8, 0xc002c7af20, 0xc009625680, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572 +0x625
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBService(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2531
k8s.io/kubernetes/test/e2e/network.glob..func24.25()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1829 +0xa5
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001401500)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001401500)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001401500, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
Oct 30 01:14:26.415: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport in namespace services-5100, will wait for the garbage collector to delete the pods
Oct 30 01:14:26.490: INFO: Deleting ReplicationController affinity-nodeport took: 3.543556ms
Oct 30 01:14:26.590: INFO: Terminating ReplicationController affinity-nodeport pods took: 100.728342ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-5100".
STEP: Found 27 events.
Oct 30 01:14:43.006: INFO: At 2021-10-30 01:12:13 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-hzgwk
Oct 30 01:14:43.006: INFO: At 2021-10-30 01:12:13 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-4lmdh
Oct 30 01:14:43.006: INFO: At 2021-10-30 01:12:13 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-hwth4
Oct 30 01:14:43.006: INFO: At 2021-10-30 01:12:13 +0000 UTC - event for affinity-nodeport-4lmdh: {default-scheduler } Scheduled: Successfully assigned services-5100/affinity-nodeport-4lmdh to node1
Oct 30 01:14:43.006: INFO: At 2021-10-30 01:12:13 +0000 UTC - event for affinity-nodeport-hwth4: {default-scheduler } Scheduled: Successfully assigned services-5100/affinity-nodeport-hwth4 to node2
Oct 30 01:14:43.006: INFO: At 2021-10-30 01:12:13 +0000 UTC - event for affinity-nodeport-hzgwk: {default-scheduler } Scheduled: Successfully assigned services-5100/affinity-nodeport-hzgwk to node2
Oct 30 01:14:43.006: INFO: At 2021-10-30 01:12:15 +0000 UTC - event for affinity-nodeport-hwth4: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 01:14:43.006: INFO: At 2021-10-30 01:12:16 +0000 UTC - event for affinity-nodeport-4lmdh: {kubelet node1} Started: Started container affinity-nodeport
Oct 30 01:14:43.006: INFO: At 2021-10-30 01:12:16 +0000 UTC - event for affinity-nodeport-4lmdh: {kubelet node1} Created: Created container affinity-nodeport
Oct 30 01:14:43.007: INFO: At 2021-10-30 01:12:16 +0000 UTC - event for affinity-nodeport-4lmdh: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 411.088927ms
Oct 30 01:14:43.007: INFO: At 2021-10-30 01:12:16 +0000 UTC - event for affinity-nodeport-4lmdh: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 01:14:43.007: INFO: At 2021-10-30 01:12:16 +0000 UTC - event for affinity-nodeport-hwth4: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 688.080615ms
Oct 30 01:14:43.007: INFO: At 2021-10-30 01:12:16 +0000 UTC - event for affinity-nodeport-hwth4: {kubelet node2} Created: Created container affinity-nodeport
Oct 30 01:14:43.007: INFO: At 2021-10-30 01:12:16 +0000 UTC - event for affinity-nodeport-hzgwk: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 714.53006ms
Oct 30 01:14:43.007: INFO: At 2021-10-30 01:12:16 +0000 UTC - event for affinity-nodeport-hzgwk: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 01:14:43.007: INFO: At 2021-10-30 01:12:17 +0000 UTC - event for affinity-nodeport-hwth4: {kubelet node2} Started: Started container affinity-nodeport
Oct 30 01:14:43.007: INFO: At 2021-10-30 01:12:17 +0000 UTC - event for affinity-nodeport-hzgwk: {kubelet node2} Created: Created container affinity-nodeport
Oct 30 01:14:43.007: INFO: At 2021-10-30 01:12:17 +0000 UTC - event for affinity-nodeport-hzgwk: {kubelet node2} Started: Started container affinity-nodeport
Oct 30 01:14:43.007: INFO: At 2021-10-30 01:12:19 +0000 UTC - event for execpod-affinityd5bs7: {default-scheduler } Scheduled: Successfully assigned services-5100/execpod-affinityd5bs7 to node2
Oct 30 01:14:43.007: INFO: At 2021-10-30 01:12:22 +0000 UTC - event for execpod-affinityd5bs7: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 415.875299ms
Oct 30 01:14:43.007: INFO: At 2021-10-30 01:12:22 +0000 UTC - event for execpod-affinityd5bs7: {kubelet node2} Created: Created container agnhost-container
Oct 30 01:14:43.007: INFO: At 2021-10-30 01:12:22 +0000 UTC - event for execpod-affinityd5bs7: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 01:14:43.007: INFO: At 2021-10-30 01:12:23 +0000 UTC - event for execpod-affinityd5bs7: {kubelet node2} Started: Started container agnhost-container
Oct 30 01:14:43.007: INFO: At 2021-10-30 01:14:26 +0000 UTC - event for affinity-nodeport-4lmdh: {kubelet node1} Killing: Stopping container affinity-nodeport
Oct 30 01:14:43.007: INFO: At 2021-10-30 01:14:26 +0000 UTC - event for affinity-nodeport-hwth4: {kubelet node2} Killing: Stopping container affinity-nodeport
Oct 30 01:14:43.007: INFO: At 2021-10-30 01:14:26 +0000 UTC - event for affinity-nodeport-hzgwk: {kubelet node2} Killing: Stopping container affinity-nodeport
Oct 30 01:14:43.007: INFO: At 2021-10-30 01:14:26 +0000 UTC - event for execpod-affinityd5bs7: {kubelet node2} Killing: Stopping container agnhost-container
Oct 30 01:14:43.008: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 30 01:14:43.008: INFO: 
Oct 30 01:14:43.012: INFO: Logging node info for node master1
Oct 30 01:14:43.015: INFO: Node Info: &Node{ObjectMeta:{master1 b47c04d5-47a7-4a95-8e97-481e6e60af54 85718 0 2021-10-29 21:05:34 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-29 21:05:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-10-29 21:05:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-10-29 21:08:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2021-10-29 21:13:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:27 +0000 UTC,LastTransitionTime:2021-10-29 21:11:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 01:14:41 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 01:14:41 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 01:14:41 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 01:14:41 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5d3ed60c561e427db72df14bd9006ed0,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:01b9d6bc-4126-4864-a1df-901a1bee4906,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:b7d2fa8154ac5d9cff45866e4d3d210a7d390f8576611c301a2eed2b57273227 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f 
quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 30 01:14:43.016: INFO: Logging kubelet events for node master1
Oct 30 01:14:43.017: INFO: Logging pods the kubelet thinks is on node master1
Oct 30 01:14:43.047: INFO: kube-proxy-z5k8p started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:14:43.047: INFO: Container kube-proxy ready: true, restart count 1
Oct 30 01:14:43.047: INFO: coredns-8474476ff8-lczbr started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:14:43.047: INFO: Container coredns ready: true, restart count 1
Oct 30 01:14:43.047: INFO: container-registry-65d7c44b96-zzkfl started at 2021-10-29 21:12:56 +0000 UTC (0+2 container statuses recorded)
Oct 30 01:14:43.047: INFO: Container docker-registry ready: true, restart count 0
Oct 30 01:14:43.047: INFO: Container nginx ready: true, restart count 0
Oct 30 01:14:43.047: INFO: node-exporter-fv84w started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 01:14:43.047: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 30 01:14:43.047: INFO: Container node-exporter ready: true, restart count 0
Oct 30 01:14:43.047: INFO: kube-scheduler-master1 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:14:43.047: INFO: Container kube-scheduler ready: true, restart count 0
Oct 30 01:14:43.047: INFO: kube-controller-manager-master1 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:14:43.047: INFO: Container kube-controller-manager ready: true, restart count 2
Oct 30 01:14:43.047: INFO: kube-flannel-d4pmt started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 01:14:43.047: INFO: Init container install-cni ready: true, restart count 0
Oct 30 01:14:43.047: INFO: Container kube-flannel ready: true, restart count 2
Oct 30 01:14:43.047: INFO: kube-multus-ds-amd64-wgkfq started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:14:43.047: INFO: Container kube-multus ready: true, restart count 1
Oct 30 01:14:43.047: INFO: kube-apiserver-master1 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:14:43.047: INFO: Container kube-apiserver ready: true, restart count 0
W1030 01:14:43.061692 37 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
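The per-node listing above ("Logging pods the kubelet thinks is on node master1") is, in effect, a pod list filtered by node name. The following client-go sketch reproduces the same query; the kubeconfig path is the one used throughout this log, while the program structure and printed fields are assumptions for illustration, not the framework's own code.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Select pods across all namespaces whose spec.nodeName is master1,
	// which matches the per-node pod listing in the log above.
	pods, err := cs.CoreV1().Pods("").List(context.TODO(),
		metav1.ListOptions{FieldSelector: "spec.nodeName=master1"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}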
Oct 30 01:14:43.141: INFO: Latency metrics for node master1
Oct 30 01:14:43.141: INFO: Logging node info for node master2
Oct 30 01:14:43.144: INFO: Node Info: &Node{ObjectMeta:{master2 208792d3-d365-4ddb-83d4-10e6e818079c 85689 0 2021-10-29 21:06:06 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-29 21:06:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-29 21:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-10-29 21:18:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:19 +0000 
UTC,LastTransitionTime:2021-10-29 21:11:19 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 01:14:38 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 01:14:38 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 01:14:38 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 01:14:38 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:12290c1916d84ddda20431c28083da6a,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:314e82b8-9747-4131-b883-220496309995,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 01:14:43.144: INFO: Logging kubelet events for node master2 Oct 30 01:14:43.146: INFO: Logging pods the kubelet thinks is on node master2 Oct 30 01:14:43.166: INFO: kube-apiserver-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded) Oct 30 01:14:43.166: INFO: Container kube-apiserver ready: true, restart count 0 Oct 30 01:14:43.166: INFO: kube-controller-manager-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded) Oct 30 01:14:43.166: INFO: Container kube-controller-manager ready: true, restart count 3 Oct 30 01:14:43.166: INFO: kube-scheduler-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded) Oct 30 01:14:43.166: INFO: Container kube-scheduler ready: true, restart count 2 Oct 30 01:14:43.166: INFO: kube-proxy-5gz4v started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 01:14:43.166: INFO: Container kube-proxy ready: true, restart count 2 Oct 30 01:14:43.166: INFO: kube-flannel-qvqll started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 01:14:43.166: INFO: Init container install-cni ready: true, restart count 2 Oct 30 01:14:43.166: INFO: Container kube-flannel ready: true, restart count 1 Oct 30 01:14:43.166: INFO: kube-multus-ds-amd64-brkpk started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 01:14:43.166: INFO: Container kube-multus ready: true, restart count 1 Oct 30 01:14:43.166: INFO: node-exporter-lc9kk started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 01:14:43.166: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:14:43.166: INFO: Container node-exporter ready: true, restart count 0 W1030 01:14:43.179272 37 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Oct 30 01:14:43.244: INFO: Latency metrics for node master2 Oct 30 01:14:43.244: INFO: Logging node info for node master3 Oct 30 01:14:43.247: INFO: Node Info: &Node{ObjectMeta:{master3 168f1589-e029-47ae-b194-10215fc22d6a 85671 0 2021-10-29 21:06:17 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-29 21:06:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-29 21:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-10-29 21:16:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-10-29 21:16:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 
0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:36 +0000 UTC,LastTransitionTime:2021-10-29 21:11:36 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 01:14:37 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 01:14:37 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 01:14:37 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 01:14:37 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:de18dcb6cb4c493e9f4d987da2c8b3fd,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:89235c4b-b1f5-4716-bbd7-18b41c0bde74,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 01:14:43.247: INFO: Logging kubelet events for node master3 Oct 30 01:14:43.249: INFO: Logging pods the kubelet thinks is on node master3 Oct 30 01:14:43.265: INFO: kube-apiserver-master3 started at 2021-10-29 21:11:10 +0000 UTC (0+1 container statuses recorded) Oct 30 01:14:43.265: INFO: Container kube-apiserver ready: true, restart count 0 Oct 30 01:14:43.265: INFO: kube-scheduler-master3 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded) Oct 30 01:14:43.265: INFO: Container kube-scheduler ready: true, restart count 2 Oct 30 01:14:43.265: INFO: dns-autoscaler-7df78bfcfb-phsdx started at 2021-10-29 21:09:02 +0000 UTC (0+1 container statuses recorded) Oct 30 01:14:43.265: INFO: Container autoscaler ready: true, restart count 1 Oct 30 01:14:43.265: INFO: node-feature-discovery-controller-cff799f9f-qq7g4 started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded) Oct 30 01:14:43.265: INFO: Container nfd-controller ready: true, restart count 0 Oct 30 01:14:43.265: INFO: prometheus-operator-585ccfb458-czbr2 started at 2021-10-29 21:21:06 +0000 UTC (0+2 container statuses recorded) Oct 30 01:14:43.265: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:14:43.265: INFO: Container prometheus-operator ready: true, restart count 0 Oct 30 01:14:43.265: INFO: node-exporter-bv946 started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 01:14:43.265: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:14:43.265: INFO: Container node-exporter ready: true, restart count 0 Oct 30 01:14:43.265: INFO: kube-controller-manager-master3 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded) Oct 30 01:14:43.265: INFO: Container 
kube-controller-manager ready: true, restart count 1 Oct 30 01:14:43.265: INFO: kube-proxy-r6fpx started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 01:14:43.265: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 01:14:43.265: INFO: kube-flannel-rbdlt started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 01:14:43.265: INFO: Init container install-cni ready: true, restart count 2 Oct 30 01:14:43.265: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 01:14:43.265: INFO: kube-multus-ds-amd64-bdwh9 started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 01:14:43.265: INFO: Container kube-multus ready: true, restart count 1 Oct 30 01:14:43.265: INFO: coredns-8474476ff8-wrwwv started at 2021-10-29 21:09:00 +0000 UTC (0+1 container statuses recorded) Oct 30 01:14:43.265: INFO: Container coredns ready: true, restart count 1 W1030 01:14:43.280985 37 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 01:14:43.357: INFO: Latency metrics for node master3 Oct 30 01:14:43.357: INFO: Logging node info for node node1 Oct 30 01:14:43.364: INFO: Node Info: &Node{ObjectMeta:{node1 ddef9269-94c5-4165-81fb-a3b0c4ac5c75 85628 0 2021-10-29 21:07:27 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 
feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-29 21:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-29 21:19:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-29 21:19:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:38 +0000 UTC,LastTransitionTime:2021-10-29 21:11:38 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 01:14:36 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 01:14:36 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 01:14:36 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 01:14:36 +0000 UTC,LastTransitionTime:2021-10-29 21:08:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3bf4179125e4495c89c046ed0ae7baf7,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:ce868148-dc5e-4c7c-a555-42ee929547f7,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432289,},ContainerImage{Names:[localhost:30500/cmk@sha256:430843a71fa03faf488543c9f5b50d3efbef49988d6784f9f48b8077cc806f60 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2f1ff7ac170c0ac8079e232ea4ee89d23b7906d1b824d901927acb4e399c52c9 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 01:14:43.365: INFO: Logging kubelet events for node node1 Oct 30 01:14:43.369: INFO: Logging pods the kubelet thinks is on node node1 Oct 30 01:14:43.385: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-t789r started at 2021-10-29 21:17:10 +0000 UTC (0+1 container statuses recorded) Oct 30 01:14:43.385: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 01:14:43.385: INFO: kubernetes-metrics-scraper-5558854cb-5rmjw started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded) Oct 30 01:14:43.385: INFO: Container kubernetes-metrics-scraper ready: true, 
restart count 1 Oct 30 01:14:43.385: INFO: node-exporter-256wm started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 01:14:43.385: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:14:43.385: INFO: Container node-exporter ready: true, restart count 0 Oct 30 01:14:43.385: INFO: prometheus-k8s-0 started at 2021-10-29 21:21:17 +0000 UTC (0+4 container statuses recorded) Oct 30 01:14:43.385: INFO: Container config-reloader ready: true, restart count 0 Oct 30 01:14:43.385: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 30 01:14:43.385: INFO: Container grafana ready: true, restart count 0 Oct 30 01:14:43.385: INFO: Container prometheus ready: true, restart count 1 Oct 30 01:14:43.385: INFO: simpletest.deployment-9858f564d-wj8hs started at 2021-10-30 01:13:38 +0000 UTC (0+1 container statuses recorded) Oct 30 01:14:43.385: INFO: Container nginx ready: true, restart count 0 Oct 30 01:14:43.385: INFO: affinity-nodeport-transition-xw4wb started at 2021-10-30 01:12:52 +0000 UTC (0+1 container statuses recorded) Oct 30 01:14:43.385: INFO: Container affinity-nodeport-transition ready: true, restart count 0 Oct 30 01:14:43.385: INFO: liveness-0dabe50d-844e-45e6-afe9-2a19099ecbd0 started at 2021-10-30 01:13:15 +0000 UTC (0+1 container statuses recorded) Oct 30 01:14:43.385: INFO: Container agnhost-container ready: true, restart count 0 Oct 30 01:14:43.386: INFO: node-feature-discovery-worker-w5vdb started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded) Oct 30 01:14:43.386: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 01:14:43.386: INFO: kube-flannel-phg88 started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 01:14:43.386: INFO: Init container install-cni ready: true, restart count 2 Oct 30 01:14:43.386: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 01:14:43.386: INFO: agnhost-primary-hfshr started at 2021-10-30 01:14:42 +0000 UTC (0+1 container statuses recorded) Oct 30 01:14:43.386: INFO: Container agnhost-primary ready: false, restart count 0 Oct 30 01:14:43.386: INFO: kube-proxy-z5hqt started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 01:14:43.386: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 01:14:43.386: INFO: collectd-d45rv started at 2021-10-29 21:25:13 +0000 UTC (0+3 container statuses recorded) Oct 30 01:14:43.386: INFO: Container collectd ready: true, restart count 0 Oct 30 01:14:43.386: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 01:14:43.386: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 01:14:43.386: INFO: nginx-proxy-node1 started at 2021-10-29 21:07:28 +0000 UTC (0+1 container statuses recorded) Oct 30 01:14:43.386: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 01:14:43.386: INFO: cmk-init-discover-node1-n4mcc started at 2021-10-29 21:19:28 +0000 UTC (0+3 container statuses recorded) Oct 30 01:14:43.386: INFO: Container discover ready: false, restart count 0 Oct 30 01:14:43.386: INFO: Container init ready: false, restart count 0 Oct 30 01:14:43.386: INFO: Container install ready: false, restart count 0 Oct 30 01:14:43.386: INFO: cmk-89lqq started at 2021-10-29 21:20:10 +0000 UTC (0+2 container statuses recorded) Oct 30 01:14:43.386: INFO: Container nodereport ready: true, restart count 0 Oct 30 01:14:43.386: INFO: Container reconcile ready: true, restart count 0 Oct 30 01:14:43.386: INFO: kube-multus-ds-amd64-68wrz started 
at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 01:14:43.386: INFO: Container kube-multus ready: true, restart count 1 Oct 30 01:14:43.386: INFO: terminate-cmd-rpn71f72612-aa97-442c-988a-bc84a74d3fbd started at 2021-10-30 01:14:37 +0000 UTC (0+1 container statuses recorded) Oct 30 01:14:43.386: INFO: Container terminate-cmd-rpn ready: false, restart count 0 Oct 30 01:14:43.386: INFO: test-webserver-5273e099-3d7b-4c2b-9e8d-704e4a24361c started at 2021-10-30 01:11:45 +0000 UTC (0+1 container statuses recorded) Oct 30 01:14:43.386: INFO: Container test-webserver ready: true, restart count 0 W1030 01:14:43.400348 37 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 01:14:44.926: INFO: Latency metrics for node node1 Oct 30 01:14:44.926: INFO: Logging node info for node node2 Oct 30 01:14:44.930: INFO: Node Info: &Node{ObjectMeta:{node2 3b49ad19-ba56-4f4a-b1fa-eef102063de9 85702 0 2021-10-29 21:07:28 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 
kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-29 21:08:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-29 21:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-29 21:19:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-29 21:19:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:34 +0000 UTC,LastTransitionTime:2021-10-29 21:11:34 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 01:14:39 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 01:14:39 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 01:14:39 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 01:14:39 +0000 UTC,LastTransitionTime:2021-10-29 21:08:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7283436dd9e34722a6e4df817add95ed,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:c219e7bd-582b-4d6c-b379-1161acc70676,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:430843a71fa03faf488543c9f5b50d3efbef49988d6784f9f48b8077cc806f60 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 
k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2f1ff7ac170c0ac8079e232ea4ee89d23b7906d1b824d901927acb4e399c52c9 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[localhost:30500/tasextender@sha256:b7d2fa8154ac5d9cff45866e4d3d210a7d390f8576611c301a2eed2b57273227 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 01:14:44.931: INFO: Logging kubelet events for node node2 Oct 30 01:14:44.934: INFO: Logging pods the kubelet thinks is on node node2 Oct 30 01:14:44.950: INFO: kube-multus-ds-amd64-7tvbl started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 01:14:44.950: INFO: Container kube-multus ready: true, restart count 1 Oct 30 01:14:44.950: INFO: cmk-webhook-6c9d5f8578-ffk66 started at 2021-10-29 21:20:11 +0000 UTC (0+1 container statuses 
recorded) Oct 30 01:14:44.950: INFO: Container cmk-webhook ready: true, restart count 0 Oct 30 01:14:44.950: INFO: kube-flannel-f6s5v started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 01:14:44.950: INFO: Init container install-cni ready: true, restart count 2 Oct 30 01:14:44.950: INFO: Container kube-flannel ready: true, restart count 3 Oct 30 01:14:44.950: INFO: kubernetes-dashboard-785dcbb76d-pbjjt started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded) Oct 30 01:14:44.950: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 30 01:14:44.950: INFO: cmk-init-discover-node2-2fmmt started at 2021-10-29 21:19:48 +0000 UTC (0+3 container statuses recorded) Oct 30 01:14:44.950: INFO: Container discover ready: false, restart count 0 Oct 30 01:14:44.950: INFO: Container init ready: false, restart count 0 Oct 30 01:14:44.950: INFO: Container install ready: false, restart count 0 Oct 30 01:14:44.950: INFO: sample-webhook-deployment-78988fc6cd-cxf66 started at 2021-10-30 01:14:43 +0000 UTC (0+1 container statuses recorded) Oct 30 01:14:44.950: INFO: Container sample-webhook ready: false, restart count 0 Oct 30 01:14:44.950: INFO: execpod-affinity9mqvn started at 2021-10-30 01:12:57 +0000 UTC (0+1 container statuses recorded) Oct 30 01:14:44.950: INFO: Container agnhost-container ready: true, restart count 0 Oct 30 01:14:44.950: INFO: nginx-proxy-node2 started at 2021-10-29 21:07:28 +0000 UTC (0+1 container statuses recorded) Oct 30 01:14:44.950: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 01:14:44.950: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg started at 2021-10-29 21:17:10 +0000 UTC (0+1 container statuses recorded) Oct 30 01:14:44.950: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 01:14:44.950: INFO: node-exporter-r77s4 started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 01:14:44.950: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:14:44.950: INFO: Container node-exporter ready: true, restart count 0 Oct 30 01:14:44.950: INFO: tas-telemetry-aware-scheduling-84ff454dfb-989mh started at 2021-10-29 21:24:23 +0000 UTC (0+1 container statuses recorded) Oct 30 01:14:44.950: INFO: Container tas-extender ready: true, restart count 0 Oct 30 01:14:44.950: INFO: cmk-8bpbf started at 2021-10-29 21:20:11 +0000 UTC (0+2 container statuses recorded) Oct 30 01:14:44.950: INFO: Container nodereport ready: true, restart count 0 Oct 30 01:14:44.950: INFO: Container reconcile ready: true, restart count 0 Oct 30 01:14:44.950: INFO: collectd-flvhl started at 2021-10-29 21:25:13 +0000 UTC (0+3 container statuses recorded) Oct 30 01:14:44.950: INFO: Container collectd ready: true, restart count 0 Oct 30 01:14:44.950: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 01:14:44.950: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 01:14:44.950: INFO: test-pod started at 2021-10-30 01:12:21 +0000 UTC (0+1 container statuses recorded) Oct 30 01:14:44.950: INFO: Container webserver ready: true, restart count 0 Oct 30 01:14:44.950: INFO: send-events-795e3965-9b3c-447b-babd-a33488527fa4 started at 2021-10-30 01:14:27 +0000 UTC (0+1 container statuses recorded) Oct 30 01:14:44.950: INFO: Container p ready: true, restart count 0 Oct 30 01:14:44.950: INFO: simpletest.deployment-9858f564d-7ggjb started at 2021-10-30 01:13:38 +0000 UTC (0+1 container statuses recorded) Oct 30 01:14:44.950: INFO: Container nginx ready: true, 
restart count 0 Oct 30 01:14:44.950: INFO: kube-proxy-76285 started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 01:14:44.950: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 01:14:44.950: INFO: affinity-nodeport-transition-5j9d8 started at 2021-10-30 01:12:51 +0000 UTC (0+1 container statuses recorded) Oct 30 01:14:44.950: INFO: Container affinity-nodeport-transition ready: true, restart count 0 Oct 30 01:14:44.950: INFO: busybox-readonly-fs2468a5ee-9bc8-4e73-9f99-9f5aa2ef8f44 started at 2021-10-30 01:14:02 +0000 UTC (0+1 container statuses recorded) Oct 30 01:14:44.950: INFO: Container busybox-readonly-fs2468a5ee-9bc8-4e73-9f99-9f5aa2ef8f44 ready: false, restart count 0 Oct 30 01:14:44.950: INFO: affinity-nodeport-transition-8xhb4 started at 2021-10-30 01:12:51 +0000 UTC (0+1 container statuses recorded) Oct 30 01:14:44.950: INFO: Container affinity-nodeport-transition ready: true, restart count 0 Oct 30 01:14:44.950: INFO: node-feature-discovery-worker-h6lcp started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded) Oct 30 01:14:44.950: INFO: Container nfd-worker ready: true, restart count 0 W1030 01:14:44.964652 37 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 01:14:45.676: INFO: Latency metrics for node node2 Oct 30 01:14:45.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5100" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [152.098 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:14:26.414: Unexpected error: <*errors.errorString | 0xc003f84680>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32300 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32300 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572 ------------------------------ {"msg":"FAILED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":9,"skipped":165,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:14:42.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:14:42.101: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1651 create -f -' Oct 30 01:14:42.518: INFO: stderr: "" Oct 30 01:14:42.518: INFO: stdout: "replicationcontroller/agnhost-primary created\n" Oct 30 01:14:42.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1651 create -f -' Oct 30 01:14:42.811: INFO: stderr: "" Oct 30 01:14:42.811: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Oct 30 01:14:43.814: INFO: Selector matched 1 pods for map[app:agnhost] Oct 30 01:14:43.814: INFO: Found 0 / 1 Oct 30 01:14:44.815: INFO: Selector matched 1 pods for map[app:agnhost] Oct 30 01:14:44.815: INFO: Found 0 / 1 Oct 30 01:14:45.814: INFO: Selector matched 1 pods for map[app:agnhost] Oct 30 01:14:45.814: INFO: Found 0 / 1 Oct 30 01:14:46.816: INFO: Selector matched 1 pods for map[app:agnhost] Oct 30 01:14:46.816: INFO: Found 1 / 1 Oct 30 01:14:46.816: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Oct 30 01:14:46.818: INFO: Selector matched 1 pods for map[app:agnhost] Oct 30 01:14:46.818: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Oct 30 01:14:46.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1651 describe pod agnhost-primary-hfshr' Oct 30 01:14:46.998: INFO: stderr: "" Oct 30 01:14:46.998: INFO: stdout: "Name: agnhost-primary-hfshr\nNamespace: kubectl-1651\nPriority: 0\nNode: node1/10.10.190.207\nStart Time: Sat, 30 Oct 2021 01:14:42 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.80\"\n ],\n \"mac\": \"f2:4d:6b:6e:48:92\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.80\"\n ],\n \"mac\": \"f2:4d:6b:6e:48:92\",\n \"default\": true,\n \"dns\": {}\n }]\n kubernetes.io/psp: collectd\nStatus: Running\nIP: 10.244.3.80\nIPs:\n IP: 10.244.3.80\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: docker://a15b210d9a8565c734eca750abdb28066f9da134a31a595eb55d404c7aeeb4f6\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.32\n Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sat, 30 Oct 2021 01:14:45 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5rh7q (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-5rh7q:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-1651/agnhost-primary-hfshr to node1\n Normal Pulling 2s kubelet Pulling image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n Normal Pulled 1s kubelet Successfully pulled image 
\"k8s.gcr.io/e2e-test-images/agnhost:2.32\" in 320.268312ms\n Normal Created 1s kubelet Created container agnhost-primary\n Normal Started 1s kubelet Started container agnhost-primary\n" Oct 30 01:14:46.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1651 describe rc agnhost-primary' Oct 30 01:14:47.202: INFO: stderr: "" Oct 30 01:14:47.202: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-1651\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.32\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: agnhost-primary-hfshr\n" Oct 30 01:14:47.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1651 describe service agnhost-primary' Oct 30 01:14:47.386: INFO: stderr: "" Oct 30 01:14:47.387: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-1651\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.233.58.131\nIPs: 10.233.58.131\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.3.80:6379\nSession Affinity: None\nEvents: \n" Oct 30 01:14:47.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1651 describe node master1' Oct 30 01:14:47.601: INFO: stderr: "" Oct 30 01:14:47.601: INFO: stdout: "Name: master1\nRoles: control-plane,master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=master1\n kubernetes.io/os=linux\n node-role.kubernetes.io/control-plane=\n node-role.kubernetes.io/master=\n node.kubernetes.io/exclude-from-external-load-balancers=\nAnnotations: flannel.alpha.coreos.com/backend-data: null\n flannel.alpha.coreos.com/backend-type: host-gw\n flannel.alpha.coreos.com/kube-subnet-manager: true\n flannel.alpha.coreos.com/public-ip: 10.10.190.202\n kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Fri, 29 Oct 2021 21:05:34 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: master1\n AcquireTime: \n RenewTime: Sat, 30 Oct 2021 01:14:40 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Fri, 29 Oct 2021 21:11:27 +0000 Fri, 29 Oct 2021 21:11:27 +0000 FlannelIsUp Flannel is running on this node\n MemoryPressure False Sat, 30 Oct 2021 01:14:41 +0000 Fri, 29 Oct 2021 21:05:32 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sat, 30 Oct 2021 01:14:41 +0000 Fri, 29 Oct 2021 21:05:32 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sat, 30 Oct 2021 01:14:41 +0000 Fri, 29 Oct 2021 21:05:32 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sat, 30 Oct 2021 01:14:41 +0000 Fri, 29 Oct 2021 21:08:35 +0000 KubeletReady kubelet is posting 
ready status\nAddresses:\n InternalIP: 10.10.190.202\n Hostname: master1\nCapacity:\n cpu: 80\n ephemeral-storage: 439913340Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 196518328Ki\n pods: 110\nAllocatable:\n cpu: 79550m\n ephemeral-storage: 405424133473\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 195629496Ki\n pods: 110\nSystem Info:\n Machine ID: 5d3ed60c561e427db72df14bd9006ed0\n System UUID: 00ACFB60-0631-E711-906E-0017A4403562\n Boot ID: 01b9d6bc-4126-4864-a1df-901a1bee4906\n Kernel Version: 3.10.0-1160.45.1.el7.x86_64\n OS Image: CentOS Linux 7 (Core)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://20.10.10\n Kubelet Version: v1.21.1\n Kube-Proxy Version: v1.21.1\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system container-registry-65d7c44b96-zzkfl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4h1m\n kube-system coredns-8474476ff8-lczbr 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 4h5m\n kube-system kube-apiserver-master1 250m (0%) 0 (0%) 0 (0%) 0 (0%) 3h59m\n kube-system kube-controller-manager-master1 200m (0%) 0 (0%) 0 (0%) 0 (0%) 4h8m\n kube-system kube-flannel-d4pmt 150m (0%) 300m (0%) 64M (0%) 500M (0%) 4h6m\n kube-system kube-multus-ds-amd64-wgkfq 100m (0%) 100m (0%) 90Mi (0%) 90Mi (0%) 4h6m\n kube-system kube-proxy-z5k8p 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4h7m\n kube-system kube-scheduler-master1 100m (0%) 0 (0%) 0 (0%) 0 (0%) 3h50m\n monitoring node-exporter-fv84w 112m (0%) 270m (0%) 200Mi (0%) 220Mi (0%) 3h53m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 1012m (1%) 670m (0%)\n memory 431140Ki (0%) 1003316480 (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Oct 30 01:14:47.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1651 describe namespace kubectl-1651' Oct 30 01:14:47.763: INFO: stderr: "" Oct 30 01:14:47.763: INFO: stdout: "Name: kubectl-1651\nLabels: e2e-framework=kubectl\n e2e-run=d23bab8b-8609-441f-9678-f3fafe6e2efa\n kubernetes.io/metadata.name=kubectl-1651\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:14:47.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1651" for this suite. 
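An aside on reproducing the describe checks above by hand: the test drives kubectl describe against a pod, its replication controller, a service, a node, and the namespace itself, and asserts the output carries the expected fields. A minimal sketch of the same sequence, assuming a reachable cluster; the namespace and object names below are illustrative rather than taken from this run:

    # All names here are illustrative; the e2e namespaces above are ephemeral.
    kubectl create namespace describe-demo
    kubectl -n describe-demo create -f - <<'EOF'
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: agnhost-primary
    spec:
      replicas: 1
      selector:
        app: agnhost
      template:
        metadata:
          labels:
            app: agnhost
        spec:
          containers:
          - name: agnhost-primary
            image: k8s.gcr.io/e2e-test-images/agnhost:2.32
            ports:
            - containerPort: 6379
    EOF
    kubectl -n describe-demo expose rc agnhost-primary --port=6379
    POD=$(kubectl -n describe-demo get pod -l app=agnhost -o jsonpath='{.items[0].metadata.name}')
    kubectl -n describe-demo describe pod "$POD"               # image, IP, conditions, events
    kubectl -n describe-demo describe rc agnhost-primary       # replicas, pod template, events
    kubectl -n describe-demo describe service agnhost-primary  # ClusterIP, endpoints
    kubectl describe node "$(kubectl get nodes -o jsonpath='{.items[0].metadata.name}')"
    kubectl describe namespace describe-demo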
• [SLOW TEST:5.693 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1084 should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":-1,"completed":9,"skipped":99,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:14:43.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-e6f40b89-c915-4af3-9a7f-d1bc02e2e48e STEP: Creating a pod to test consume configMaps Oct 30 01:14:43.689: INFO: Waiting up to 5m0s for pod "pod-configmaps-98c330be-278b-4614-9f0f-fb060dd9c54e" in namespace "configmap-7090" to be "Succeeded or Failed" Oct 30 01:14:43.691: INFO: Pod "pod-configmaps-98c330be-278b-4614-9f0f-fb060dd9c54e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.233089ms Oct 30 01:14:45.694: INFO: Pod "pod-configmaps-98c330be-278b-4614-9f0f-fb060dd9c54e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005424997s Oct 30 01:14:47.698: INFO: Pod "pod-configmaps-98c330be-278b-4614-9f0f-fb060dd9c54e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008750244s STEP: Saw pod success Oct 30 01:14:47.698: INFO: Pod "pod-configmaps-98c330be-278b-4614-9f0f-fb060dd9c54e" satisfied condition "Succeeded or Failed" Oct 30 01:14:47.700: INFO: Trying to get logs from node node1 pod pod-configmaps-98c330be-278b-4614-9f0f-fb060dd9c54e container agnhost-container: STEP: delete the pod Oct 30 01:14:47.772: INFO: Waiting for pod pod-configmaps-98c330be-278b-4614-9f0f-fb060dd9c54e to disappear Oct 30 01:14:47.774: INFO: Pod pod-configmaps-98c330be-278b-4614-9f0f-fb060dd9c54e no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:14:47.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7090" for this suite. 
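The pattern the ConfigMap volume test exercises, creating a ConfigMap, mounting it into a pod, and polling for the "Succeeded or Failed" terminal phase, can be sketched by hand as follows; the names are illustrative, and busybox:1.28 is one of the images cached on these nodes:

    kubectl create configmap demo-cm --from-literal=data-1=value-1   # illustrative names
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: cm-volume-demo
    spec:
      restartPolicy: Never
      containers:
      - name: agnhost-container
        image: busybox:1.28
        command: ["cat", "/etc/cm/data-1"]   # print the mounted key, then exit 0
        volumeMounts:
        - name: cm
          mountPath: /etc/cm
      volumes:
      - name: cm
        configMap:
          name: demo-cm
    EOF
    # Poll for the terminal phase the way the framework's wait loop does.
    until [ "$(kubectl get pod cm-volume-demo -o jsonpath='{.status.phase}')" = "Succeeded" ]; do sleep 2; done
    kubectl logs cm-volume-demo   # should print: value-1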
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":231,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:14:43.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 30 01:14:43.597: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 30 01:14:45.606: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153283, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153283, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153283, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153283, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 30 01:14:48.615: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:14:48.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5732" for this suite. STEP: Destroying namespace "webhook-5732-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.391 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":19,"skipped":390,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:14:47.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating Agnhost RC Oct 30 01:14:47.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9271 create -f -' Oct 30 01:14:48.120: INFO: stderr: "" Oct 30 01:14:48.120: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Oct 30 01:14:49.124: INFO: Selector matched 1 pods for map[app:agnhost] Oct 30 01:14:49.124: INFO: Found 0 / 1 Oct 30 01:14:50.123: INFO: Selector matched 1 pods for map[app:agnhost] Oct 30 01:14:50.123: INFO: Found 0 / 1 Oct 30 01:14:51.124: INFO: Selector matched 1 pods for map[app:agnhost] Oct 30 01:14:51.124: INFO: Found 0 / 1 Oct 30 01:14:52.123: INFO: Selector matched 1 pods for map[app:agnhost] Oct 30 01:14:52.123: INFO: Found 1 / 1 Oct 30 01:14:52.123: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Oct 30 01:14:52.126: INFO: Selector matched 1 pods for map[app:agnhost] Oct 30 01:14:52.126: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Oct 30 01:14:52.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9271 patch pod agnhost-primary-vw9vp -p {"metadata":{"annotations":{"x":"y"}}}' Oct 30 01:14:52.300: INFO: stderr: "" Oct 30 01:14:52.300: INFO: stdout: "pod/agnhost-primary-vw9vp patched\n" STEP: checking annotations Oct 30 01:14:52.303: INFO: Selector matched 1 pods for map[app:agnhost] Oct 30 01:14:52.303: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:14:52.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9271" for this suite. 
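The patch step above is plain kubectl: a strategic-merge patch that adds an annotation, followed by a read-back. Re-run by hand it looks like this (pod name and namespace are taken from the run above, but e2e namespaces are deleted at teardown, so substitute live ones):

    # Names from the log above; substitute your own when re-running.
    kubectl -n kubectl-9271 patch pod agnhost-primary-vw9vp \
      -p '{"metadata":{"annotations":{"x":"y"}}}'
    kubectl -n kubectl-9271 get pod agnhost-primary-vw9vp \
      -o jsonpath='{.metadata.annotations.x}'   # prints: y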
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":-1,"completed":10,"skipped":100,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:14:48.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test env composition Oct 30 01:14:48.787: INFO: Waiting up to 5m0s for pod "var-expansion-47c6d83c-6003-4416-9981-5a64746ba7e1" in namespace "var-expansion-9848" to be "Succeeded or Failed" Oct 30 01:14:48.789: INFO: Pod "var-expansion-47c6d83c-6003-4416-9981-5a64746ba7e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034956ms Oct 30 01:14:50.792: INFO: Pod "var-expansion-47c6d83c-6003-4416-9981-5a64746ba7e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004789312s Oct 30 01:14:52.796: INFO: Pod "var-expansion-47c6d83c-6003-4416-9981-5a64746ba7e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008493727s STEP: Saw pod success Oct 30 01:14:52.796: INFO: Pod "var-expansion-47c6d83c-6003-4416-9981-5a64746ba7e1" satisfied condition "Succeeded or Failed" Oct 30 01:14:52.798: INFO: Trying to get logs from node node1 pod var-expansion-47c6d83c-6003-4416-9981-5a64746ba7e1 container dapi-container: STEP: delete the pod Oct 30 01:14:52.827: INFO: Waiting for pod var-expansion-47c6d83c-6003-4416-9981-5a64746ba7e1 to disappear Oct 30 01:14:52.829: INFO: Pod var-expansion-47c6d83c-6003-4416-9981-5a64746ba7e1 no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:14:52.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9848" for this suite. 
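The var-expansion test relies on the kubelet's $(VAR) expansion in env values: a later variable may reference an earlier one, and the container sees the composed result. A minimal sketch, with illustrative names:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: env-compose-demo    # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox:1.28
        command: ["sh", "-c", "echo $FOO_COMPOSED"]   # expect: composed-foo-value
        env:
        - name: FOO
          value: "foo-value"
        - name: FOO_COMPOSED
          value: "composed-$(FOO)"   # $(FOO) is expanded by the kubelet, not the shell
    EOF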
• ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":418,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:14:52.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should find a service from listing all namespaces [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching services [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:14:52.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4151" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:14:45.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:14:45.731: INFO: Creating ReplicaSet my-hostname-basic-91c244c0-77e9-41d5-a366-e67b3646e7bd Oct 30 01:14:45.737: INFO: Pod name my-hostname-basic-91c244c0-77e9-41d5-a366-e67b3646e7bd: Found 0 pods out of 1 Oct 30 01:14:50.745: INFO: Pod name my-hostname-basic-91c244c0-77e9-41d5-a366-e67b3646e7bd: Found 1 pods out of 1 Oct 30 01:14:50.745: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-91c244c0-77e9-41d5-a366-e67b3646e7bd" is running Oct 30 01:14:50.748: INFO: Pod "my-hostname-basic-91c244c0-77e9-41d5-a366-e67b3646e7bd-vcg7r" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-30 01:14:45 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-30 01:14:48 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-30 01:14:48 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-30 01:14:45 +0000 UTC Reason: Message:}]) Oct 30 01:14:50.749: INFO: Trying to dial the pod Oct 30 01:14:55.758: INFO: Controller my-hostname-basic-91c244c0-77e9-41d5-a366-e67b3646e7bd: Got expected result from replica 1 [my-hostname-basic-91c244c0-77e9-41d5-a366-e67b3646e7bd-vcg7r]: "my-hostname-basic-91c244c0-77e9-41d5-a366-e67b3646e7bd-vcg7r", 1 of 1 required successes so far [AfterEach] 
[sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:14:55.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6733" for this suite. • [SLOW TEST:10.062 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":10,"skipped":172,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} S ------------------------------ {"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":21,"skipped":467,"failed":0} [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:14:52.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating secret secrets-8867/secret-test-d515a9dd-e805-4e64-be4b-ce365ee2ee38 STEP: Creating a pod to test consume secrets Oct 30 01:14:52.994: INFO: Waiting up to 5m0s for pod "pod-configmaps-988b544a-01d1-45d3-b004-3740cb8467e7" in namespace "secrets-8867" to be "Succeeded or Failed" Oct 30 01:14:52.996: INFO: Pod "pod-configmaps-988b544a-01d1-45d3-b004-3740cb8467e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.315189ms Oct 30 01:14:54.999: INFO: Pod "pod-configmaps-988b544a-01d1-45d3-b004-3740cb8467e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004904031s Oct 30 01:14:57.004: INFO: Pod "pod-configmaps-988b544a-01d1-45d3-b004-3740cb8467e7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010151265s Oct 30 01:14:59.009: INFO: Pod "pod-configmaps-988b544a-01d1-45d3-b004-3740cb8467e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01538383s STEP: Saw pod success Oct 30 01:14:59.009: INFO: Pod "pod-configmaps-988b544a-01d1-45d3-b004-3740cb8467e7" satisfied condition "Succeeded or Failed" Oct 30 01:14:59.014: INFO: Trying to get logs from node node1 pod pod-configmaps-988b544a-01d1-45d3-b004-3740cb8467e7 container env-test: STEP: delete the pod Oct 30 01:14:59.051: INFO: Waiting for pod pod-configmaps-988b544a-01d1-45d3-b004-3740cb8467e7 to disappear Oct 30 01:14:59.053: INFO: Pod pod-configmaps-988b544a-01d1-45d3-b004-3740cb8467e7 no longer exists [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:14:59.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8867" for this suite. 
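The Secrets test injects one key of a secret into the environment via secretKeyRef; a hand-run sketch with illustrative names:

    kubectl create secret generic demo-secret --from-literal=key-1=value-1   # illustrative
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-env-demo
    spec:
      restartPolicy: Never
      containers:
      - name: env-test
        image: busybox:1.28
        command: ["sh", "-c", "echo $SECRET_VALUE"]   # expect: value-1
        env:
        - name: SECRET_VALUE
          valueFrom:
            secretKeyRef:
              name: demo-secret
              key: key-1
    EOF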
• [SLOW TEST:6.101 seconds] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":467,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:14:55.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on tmpfs Oct 30 01:14:55.807: INFO: Waiting up to 5m0s for pod "pod-c97b488c-2bab-49a2-a89e-264ad90d6753" in namespace "emptydir-7347" to be "Succeeded or Failed" Oct 30 01:14:55.809: INFO: Pod "pod-c97b488c-2bab-49a2-a89e-264ad90d6753": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063656ms Oct 30 01:14:57.812: INFO: Pod "pod-c97b488c-2bab-49a2-a89e-264ad90d6753": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005479664s Oct 30 01:14:59.816: INFO: Pod "pod-c97b488c-2bab-49a2-a89e-264ad90d6753": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009277418s STEP: Saw pod success Oct 30 01:14:59.816: INFO: Pod "pod-c97b488c-2bab-49a2-a89e-264ad90d6753" satisfied condition "Succeeded or Failed" Oct 30 01:14:59.819: INFO: Trying to get logs from node node1 pod pod-c97b488c-2bab-49a2-a89e-264ad90d6753 container test-container: STEP: delete the pod Oct 30 01:14:59.831: INFO: Waiting for pod pod-c97b488c-2bab-49a2-a89e-264ad90d6753 to disappear Oct 30 01:14:59.833: INFO: Pod pod-c97b488c-2bab-49a2-a89e-264ad90d6753 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:14:59.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7347" for this suite. 
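The (root,0777,tmpfs) variant above comes down to one field: emptyDir.medium: Memory, which backs the volume with tmpfs instead of node disk. A sketch, names illustrative:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-tmpfs-demo   # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox:1.28
        command: ["sh", "-c", "ls -ld /mnt/ed && mount | grep /mnt/ed"]   # shows tmpfs
        volumeMounts:
        - name: ed
          mountPath: /mnt/ed
      volumes:
      - name: ed
        emptyDir:
          medium: Memory   # tmpfs-backed, the "tmpfs" in the test name
    EOF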
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":173,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:14:59.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-map-6d24160d-e332-4def-b684-92d86962efd2 STEP: Creating a pod to test consume configMaps Oct 30 01:14:59.937: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bf342aa3-7f9f-4623-bdab-080185380156" in namespace "projected-5588" to be "Succeeded or Failed" Oct 30 01:14:59.942: INFO: Pod "pod-projected-configmaps-bf342aa3-7f9f-4623-bdab-080185380156": Phase="Pending", Reason="", readiness=false. Elapsed: 4.431859ms Oct 30 01:15:01.945: INFO: Pod "pod-projected-configmaps-bf342aa3-7f9f-4623-bdab-080185380156": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008057775s Oct 30 01:15:03.949: INFO: Pod "pod-projected-configmaps-bf342aa3-7f9f-4623-bdab-080185380156": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011750005s STEP: Saw pod success Oct 30 01:15:03.949: INFO: Pod "pod-projected-configmaps-bf342aa3-7f9f-4623-bdab-080185380156" satisfied condition "Succeeded or Failed" Oct 30 01:15:03.951: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-bf342aa3-7f9f-4623-bdab-080185380156 container agnhost-container: STEP: delete the pod Oct 30 01:15:03.963: INFO: Waiting for pod pod-projected-configmaps-bf342aa3-7f9f-4623-bdab-080185380156 to disappear Oct 30 01:15:03.964: INFO: Pod pod-projected-configmaps-bf342aa3-7f9f-4623-bdab-080185380156 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:15:03.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5588" for this suite. 
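"With mappings as non-root" in the projected-ConfigMap test means two things: an items/path remap of the key inside the volume, and a pod-level runAsUser. A sketch of both, names illustrative:

    kubectl create configmap projected-demo --from-literal=data-1=value-1   # illustrative
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-cm-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000           # the "as non-root" part
      containers:
      - name: agnhost-container
        image: busybox:1.28
        command: ["cat", "/etc/projected/renamed-data"]
        volumeMounts:
        - name: proj
          mountPath: /etc/projected
      volumes:
      - name: proj
        projected:
          sources:
          - configMap:
              name: projected-demo
              items:
              - key: data-1
                path: renamed-data   # the "mapping": key data-1 surfaces as renamed-data
    EOF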
• ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":203,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:14:59.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on node default medium Oct 30 01:14:59.146: INFO: Waiting up to 5m0s for pod "pod-9bcbc098-ab48-4163-b907-4dd4b72222c8" in namespace "emptydir-7352" to be "Succeeded or Failed" Oct 30 01:14:59.149: INFO: Pod "pod-9bcbc098-ab48-4163-b907-4dd4b72222c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.282381ms Oct 30 01:15:01.152: INFO: Pod "pod-9bcbc098-ab48-4163-b907-4dd4b72222c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00601158s Oct 30 01:15:03.157: INFO: Pod "pod-9bcbc098-ab48-4163-b907-4dd4b72222c8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010299997s Oct 30 01:15:05.161: INFO: Pod "pod-9bcbc098-ab48-4163-b907-4dd4b72222c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014815951s STEP: Saw pod success Oct 30 01:15:05.161: INFO: Pod "pod-9bcbc098-ab48-4163-b907-4dd4b72222c8" satisfied condition "Succeeded or Failed" Oct 30 01:15:05.164: INFO: Trying to get logs from node node1 pod pod-9bcbc098-ab48-4163-b907-4dd4b72222c8 container test-container: STEP: delete the pod Oct 30 01:15:05.193: INFO: Waiting for pod pod-9bcbc098-ab48-4163-b907-4dd4b72222c8 to disappear Oct 30 01:15:05.195: INFO: Pod pod-9bcbc098-ab48-4163-b907-4dd4b72222c8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:15:05.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7352" for this suite. 
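The (non-root,0666,default) variant is the disk-backed counterpart: default medium, a non-root user, and a file-mode check. A sketch that creates a 0666 file as uid 1000 and stats it; names are illustrative, and the fsGroup is my addition so the non-root user can write to the volume:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-nonroot-demo   # illustrative name
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000
        fsGroup: 1000               # assumption: ensures uid 1000 can write the volume
      containers:
      - name: test-container
        image: busybox:1.28
        command: ["sh", "-c", "umask 0000 && echo hi > /mnt/ed/f && stat -c '%a %u' /mnt/ed/f"]
        volumeMounts:
        - name: ed
          mountPath: /mnt/ed
      volumes:
      - name: ed
        emptyDir: {}                # default medium: node-local disk
    EOF
    # Expected pod log: 666 1000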
• [SLOW TEST:6.090 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":496,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:15:05.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-c50a7944-c51b-4738-ba44-5675b0d8c273 STEP: Creating a pod to test consume configMaps Oct 30 01:15:05.259: INFO: Waiting up to 5m0s for pod "pod-configmaps-292c349f-1b89-4056-a146-cdeaae1c9360" in namespace "configmap-5989" to be "Succeeded or Failed" Oct 30 01:15:05.262: INFO: Pod "pod-configmaps-292c349f-1b89-4056-a146-cdeaae1c9360": Phase="Pending", Reason="", readiness=false. Elapsed: 2.200827ms Oct 30 01:15:07.266: INFO: Pod "pod-configmaps-292c349f-1b89-4056-a146-cdeaae1c9360": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006679951s Oct 30 01:15:09.270: INFO: Pod "pod-configmaps-292c349f-1b89-4056-a146-cdeaae1c9360": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010859294s STEP: Saw pod success Oct 30 01:15:09.270: INFO: Pod "pod-configmaps-292c349f-1b89-4056-a146-cdeaae1c9360" satisfied condition "Succeeded or Failed" Oct 30 01:15:09.273: INFO: Trying to get logs from node node2 pod pod-configmaps-292c349f-1b89-4056-a146-cdeaae1c9360 container configmap-volume-test: STEP: delete the pod Oct 30 01:15:09.288: INFO: Waiting for pod pod-configmaps-292c349f-1b89-4056-a146-cdeaae1c9360 to disappear Oct 30 01:15:09.291: INFO: Pod pod-configmaps-292c349f-1b89-4056-a146-cdeaae1c9360 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:15:09.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5989" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":507,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:15:03.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount projected service account token [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test service account token: Oct 30 01:15:04.029: INFO: Waiting up to 5m0s for pod "test-pod-b811eb64-2029-4a50-a161-12e781a6027b" in namespace "svcaccounts-4055" to be "Succeeded or Failed" Oct 30 01:15:04.031: INFO: Pod "test-pod-b811eb64-2029-4a50-a161-12e781a6027b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.136434ms Oct 30 01:15:06.035: INFO: Pod "test-pod-b811eb64-2029-4a50-a161-12e781a6027b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005920177s Oct 30 01:15:08.039: INFO: Pod "test-pod-b811eb64-2029-4a50-a161-12e781a6027b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010394555s Oct 30 01:15:10.043: INFO: Pod "test-pod-b811eb64-2029-4a50-a161-12e781a6027b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014021087s Oct 30 01:15:12.047: INFO: Pod "test-pod-b811eb64-2029-4a50-a161-12e781a6027b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.017787376s STEP: Saw pod success Oct 30 01:15:12.047: INFO: Pod "test-pod-b811eb64-2029-4a50-a161-12e781a6027b" satisfied condition "Succeeded or Failed" Oct 30 01:15:12.051: INFO: Trying to get logs from node node1 pod test-pod-b811eb64-2029-4a50-a161-12e781a6027b container agnhost-container: STEP: delete the pod Oct 30 01:15:12.065: INFO: Waiting for pod test-pod-b811eb64-2029-4a50-a161-12e781a6027b to disappear Oct 30 01:15:12.067: INFO: Pod test-pod-b811eb64-2029-4a50-a161-12e781a6027b no longer exists [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:15:12.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-4055" for this suite. 
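"Mount projected service account token" refers to the projected volume's serviceAccountToken source, which mounts a short-lived, audience-scoped token rather than the legacy secret-based one. A sketch, with an illustrative path and audience:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: sa-token-demo   # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: agnhost-container
        image: busybox:1.28
        command: ["sh", "-c", "wc -c < /var/run/secrets/tokens/sa-token"]  # token is non-empty
        volumeMounts:
        - name: token
          mountPath: /var/run/secrets/tokens
      volumes:
      - name: token
        projected:
          sources:
          - serviceAccountToken:
              path: sa-token
              expirationSeconds: 3600
              audience: demo-audience   # assumption: any audience string for illustration
    EOF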
• [SLOW TEST:8.080 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount projected service account token [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":13,"skipped":214,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:12:51.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-9787 STEP: creating service affinity-nodeport-transition in namespace services-9787 STEP: creating replication controller affinity-nodeport-transition in namespace services-9787 I1030 01:12:51.635058 32 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-9787, replica count: 3 I1030 01:12:54.686281 32 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1030 01:12:57.686474 32 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 30 01:12:57.695: INFO: Creating new exec pod Oct 30 01:13:02.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Oct 30 01:13:02.968: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Oct 30 01:13:02.968: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Oct 30 01:13:02.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.7.146 80' Oct 30 01:13:03.261: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.7.146 80\nConnection to 10.233.7.146 80 port [tcp/http] succeeded!\n" Oct 30 01:13:03.261: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Oct 30 01:13:03.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:13:03.608: INFO: rc: 1 Oct 30 01:13:03.608: INFO: Service 
reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:13:04.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:13:04.918: INFO: rc: 1 Oct 30 01:13:04.918: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:13:05.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:13:06.010: INFO: rc: 1 Oct 30 01:13:06.010: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:13:06.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:13:06.976: INFO: rc: 1 Oct 30 01:13:06.976: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:13:07.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:13:07.999: INFO: rc: 1 Oct 30 01:13:07.999: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:13:08.609 through 01:13:24.063: [sixteen further nc probes of 10.10.190.207 31992, roughly one per second, each logged exactly as above: "nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused", "command terminated with exit code 1", "error: exit status 1", "Retrying..."]
Oct 30 01:13:24.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:13:24.833: INFO: rc: 1 Oct 30 01:13:24.833: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:13:25.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:13:25.856: INFO: rc: 1 Oct 30 01:13:25.856: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:13:26.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:13:26.920: INFO: rc: 1 Oct 30 01:13:26.920: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:13:27.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:13:27.844: INFO: rc: 1 Oct 30 01:13:27.844: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:13:28.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:13:29.143: INFO: rc: 1 Oct 30 01:13:29.143: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:13:29.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:13:30.122: INFO: rc: 1 Oct 30 01:13:30.122: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:13:30.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:13:30.868: INFO: rc: 1 Oct 30 01:13:30.868: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:13:31.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:13:31.874: INFO: rc: 1 Oct 30 01:13:31.874: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:13:32.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:13:33.316: INFO: rc: 1 Oct 30 01:13:33.316: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:13:33.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:13:33.903: INFO: rc: 1 Oct 30 01:13:33.903: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:13:34.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:13:34.854: INFO: rc: 1 Oct 30 01:13:34.854: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:13:35.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:13:36.003: INFO: rc: 1 Oct 30 01:13:36.003: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:13:36.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:13:36.926: INFO: rc: 1 Oct 30 01:13:36.926: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:13:37.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:13:37.854: INFO: rc: 1 Oct 30 01:13:37.854: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:13:38.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:13:39.181: INFO: rc: 1 Oct 30 01:13:39.182: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:13:39.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:13:39.886: INFO: rc: 1 Oct 30 01:13:39.886: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:13:40.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:13:40.836: INFO: rc: 1 Oct 30 01:13:40.836: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:13:41.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:13:41.904: INFO: rc: 1 Oct 30 01:13:41.905: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:13:42.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:13:43.056: INFO: rc: 1 Oct 30 01:13:43.056: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:13:43.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:13:43.878: INFO: rc: 1 Oct 30 01:13:43.878: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:13:44.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:13:44.840: INFO: rc: 1 Oct 30 01:13:44.841: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:13:45.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:13:45.984: INFO: rc: 1 Oct 30 01:13:45.984: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31992 + echo hostName nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:13:46.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:13:47.143: INFO: rc: 1 Oct 30 01:13:47.144: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:13:47.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:13:47.989: INFO: rc: 1 Oct 30 01:13:47.989: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:13:48.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:13:49.045: INFO: rc: 1 Oct 30 01:13:49.045: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:13:49.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:13:49.850: INFO: rc: 1 Oct 30 01:13:49.850: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:13:50.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:13:50.868: INFO: rc: 1 Oct 30 01:13:50.868: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31992 + echo hostName nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:13:51.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:13:51.853: INFO: rc: 1 Oct 30 01:13:51.853: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:13:52.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:13:53.336: INFO: rc: 1 Oct 30 01:13:53.336: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:13:53.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:13:53.874: INFO: rc: 1 Oct 30 01:13:53.874: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:13:54.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:13:55.152: INFO: rc: 1 Oct 30 01:13:55.153: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:13:55.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:13:55.853: INFO: rc: 1 Oct 30 01:13:55.853: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:13:56.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:13:56.848: INFO: rc: 1 Oct 30 01:13:56.848: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:13:57.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:13:57.940: INFO: rc: 1 Oct 30 01:13:57.940: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:13:58.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:13:58.892: INFO: rc: 1 Oct 30 01:13:58.892: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:13:59.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:13:59.947: INFO: rc: 1 Oct 30 01:13:59.947: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:14:00.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:14:01.038: INFO: rc: 1 Oct 30 01:14:01.038: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:14:01.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:14:01.955: INFO: rc: 1 Oct 30 01:14:01.955: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:14:02.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:14:02.854: INFO: rc: 1 Oct 30 01:14:02.854: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:14:03.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:14:04.209: INFO: rc: 1 Oct 30 01:14:04.209: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName+ nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:14:04.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:14:04.991: INFO: rc: 1 Oct 30 01:14:04.991: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:14:05.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:14:05.915: INFO: rc: 1 Oct 30 01:14:05.915: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:14:06.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:14:06.844: INFO: rc: 1 Oct 30 01:14:06.844: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:14:07.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:14:07.855: INFO: rc: 1 Oct 30 01:14:07.855: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:14:08.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:14:08.990: INFO: rc: 1 Oct 30 01:14:08.990: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:14:09.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:14:09.937: INFO: rc: 1 Oct 30 01:14:09.937: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:14:10.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:14:10.861: INFO: rc: 1 Oct 30 01:14:10.861: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:14:11.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:14:12.327: INFO: rc: 1 Oct 30 01:14:12.327: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:14:12.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:14:12.879: INFO: rc: 1 Oct 30 01:14:12.879: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:14:13.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:14:13.854: INFO: rc: 1 Oct 30 01:14:13.854: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:14:14.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:14:14.903: INFO: rc: 1 Oct 30 01:14:14.903: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:14:15.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:14:15.853: INFO: rc: 1 Oct 30 01:14:15.853: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:14:16.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:14:16.847: INFO: rc: 1 Oct 30 01:14:16.847: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:14:17.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:14:17.898: INFO: rc: 1 Oct 30 01:14:17.898: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:14:18.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:14:19.003: INFO: rc: 1 Oct 30 01:14:19.003: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:14:19.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:14:19.844: INFO: rc: 1 Oct 30 01:14:19.844: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:14:20.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:14:20.853: INFO: rc: 1 Oct 30 01:14:20.853: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:14:21.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:14:21.988: INFO: rc: 1 Oct 30 01:14:21.988: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:14:22.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:14:22.863: INFO: rc: 1 Oct 30 01:14:22.863: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:14:23.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:14:23.870: INFO: rc: 1 Oct 30 01:14:23.870: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:14:24.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:14:24.956: INFO: rc: 1 Oct 30 01:14:24.956: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:14:25.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:14:25.831: INFO: rc: 1 Oct 30 01:14:25.831: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:14:26.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:14:27.012: INFO: rc: 1 Oct 30 01:14:27.012: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:14:27.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:14:27.918: INFO: rc: 1 Oct 30 01:14:27.918: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:14:28.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:14:29.216: INFO: rc: 1 Oct 30 01:14:29.216: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:14:29.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:14:29.900: INFO: rc: 1 Oct 30 01:14:29.901: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:14:30.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:14:30.948: INFO: rc: 1 Oct 30 01:14:30.948: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:14:31.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992' Oct 30 01:14:32.118: INFO: rc: 1 Oct 30 01:14:32.118: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31992 nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
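The loop above is the framework's service-reachability probe: a TCP connect against the NodePort endpoint, retried about once per second until an overall two-minute budget is exhausted. A minimal standalone sketch of the same check in Go (the endpoint, the 2-second per-attempt timeout matching nc -w 2, and the 2m0s budget are taken from the log; the code is illustrative, not the e2e framework's actual implementation):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Endpoint and budgets taken from the log above: node IP 10.10.190.207,
	// NodePort 31992, 2s per attempt, 2m0s overall.
	const endpoint = "10.10.190.207:31992"
	deadline := time.Now().Add(2 * time.Minute)

	for time.Now().Before(deadline) {
		// A plain TCP connect with a 2-second timeout, like `nc -t -w 2`.
		conn, err := net.DialTimeout("tcp", endpoint, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("endpoint reachable:", endpoint)
			return
		}
		fmt.Printf("probe failed: %v; retrying...\n", err)
		time.Sleep(1 * time.Second)
	}
	fmt.Printf("service is not reachable within 2m0s timeout on endpoint %s over TCP protocol\n", endpoint)
}

Against a healthy NodePort service the probe returns on the first successful connect; here every attempt was refused, so the budget expired and the spec failed, as the final attempt below shows.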
Oct 30 01:15:03.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992'
Oct 30 01:15:04.083: INFO: rc: 1
Oct 30 01:15:04.083: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9787 exec execpod-affinity9mqvn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31992:
Command stdout:
stderr:
+ nc -v -t -w 2 10.10.190.207 31992
+ echo hostName
nc: connect to 10.10.190.207 port 31992 (tcp) failed: Connection refused
command terminated with exit code 1
error: exit status 1
Retrying...
Oct 30 01:15:04.084: FAIL: Unexpected error:
    <*errors.errorString | 0xc0049920e0>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31992 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31992 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc001534160, 0x779f8f8, 0xc0046171e0, 0xc001897900, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572 +0x625
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithTransition(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2527
k8s.io/kubernetes/test/e2e/network.glob..func24.27()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1862 +0xa5
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000cb4480)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc000cb4480)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc000cb4480, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
Oct 30 01:15:04.085: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-9787, will wait for the garbage collector to delete the pods
Oct 30 01:15:04.149: INFO: Deleting ReplicationController affinity-nodeport-transition took: 3.236541ms
Oct 30 01:15:04.250: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 100.847882ms
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-9787".
STEP: Found 27 events.
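On failure the framework dumps every event recorded in the test namespace, listed below. A sketch of the same collection using client-go (the kubeconfig path and namespace come from the log; the output format and error handling here are illustrative, not the framework's exact code):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path and namespace taken from the log.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// List every event recorded in the test namespace, as the framework
	// does when a spec fails.
	events, err := client.CoreV1().Events("services-9787").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("Found %d events.\n", len(events.Items))
	for _, e := range events.Items {
		// Roughly the shape of the lines below: timestamp, involved
		// object, reporting source, reason, message.
		fmt.Printf("At %s - event for %s: {%s} %s: %s\n",
			e.FirstTimestamp, e.InvolvedObject.Name, e.Source.Component, e.Reason, e.Message)
	}
}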
Oct 30 01:15:12.965: INFO: At 2021-10-30 01:12:51 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-xw4wb
Oct 30 01:15:12.965: INFO: At 2021-10-30 01:12:51 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-5j9d8
Oct 30 01:15:12.966: INFO: At 2021-10-30 01:12:51 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-8xhb4
Oct 30 01:15:12.966: INFO: At 2021-10-30 01:12:51 +0000 UTC - event for affinity-nodeport-transition-5j9d8: {default-scheduler } Scheduled: Successfully assigned services-9787/affinity-nodeport-transition-5j9d8 to node2
Oct 30 01:15:12.966: INFO: At 2021-10-30 01:12:51 +0000 UTC - event for affinity-nodeport-transition-8xhb4: {default-scheduler } Scheduled: Successfully assigned services-9787/affinity-nodeport-transition-8xhb4 to node2
Oct 30 01:15:12.966: INFO: At 2021-10-30 01:12:51 +0000 UTC - event for affinity-nodeport-transition-xw4wb: {default-scheduler } Scheduled: Successfully assigned services-9787/affinity-nodeport-transition-xw4wb to node1
Oct 30 01:15:12.966: INFO: At 2021-10-30 01:12:53 +0000 UTC - event for affinity-nodeport-transition-5j9d8: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 01:15:12.966: INFO: At 2021-10-30 01:12:53 +0000 UTC - event for affinity-nodeport-transition-8xhb4: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 01:15:12.966: INFO: At 2021-10-30 01:12:53 +0000 UTC - event for affinity-nodeport-transition-8xhb4: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 304.012318ms
Oct 30 01:15:12.966: INFO: At 2021-10-30 01:12:53 +0000 UTC - event for affinity-nodeport-transition-8xhb4: {kubelet node2} Created: Created container affinity-nodeport-transition
Oct 30 01:15:12.966: INFO: At 2021-10-30 01:12:54 +0000 UTC - event for affinity-nodeport-transition-5j9d8: {kubelet node2} Started: Started container affinity-nodeport-transition
Oct 30 01:15:12.966: INFO: At 2021-10-30 01:12:54 +0000 UTC - event for affinity-nodeport-transition-5j9d8: {kubelet node2} Created: Created container affinity-nodeport-transition
Oct 30 01:15:12.966: INFO: At 2021-10-30 01:12:54 +0000 UTC - event for affinity-nodeport-transition-5j9d8: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 386.257642ms
Oct 30 01:15:12.966: INFO: At 2021-10-30 01:12:54 +0000 UTC - event for affinity-nodeport-transition-8xhb4: {kubelet node2} Started: Started container affinity-nodeport-transition
Oct 30 01:15:12.966: INFO: At 2021-10-30 01:12:54 +0000 UTC - event for affinity-nodeport-transition-xw4wb: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 01:15:12.966: INFO: At 2021-10-30 01:12:55 +0000 UTC - event for affinity-nodeport-transition-xw4wb: {kubelet node1} Created: Created container affinity-nodeport-transition
Oct 30 01:15:12.966: INFO: At 2021-10-30 01:12:55 +0000 UTC - event for affinity-nodeport-transition-xw4wb: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 456.296664ms
Oct 30 01:15:12.966: INFO: At 2021-10-30 01:12:56 +0000 UTC - event for affinity-nodeport-transition-xw4wb: {kubelet node1} Started: Started container affinity-nodeport-transition
Oct 30 01:15:12.966: INFO: At 2021-10-30 01:12:57 +0000 UTC - event for execpod-affinity9mqvn: {default-scheduler } Scheduled: Successfully assigned services-9787/execpod-affinity9mqvn to node2
Oct 30 01:15:12.966: INFO: At 2021-10-30 01:12:59 +0000 UTC - event for execpod-affinity9mqvn: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 316.880487ms
Oct 30 01:15:12.966: INFO: At 2021-10-30 01:12:59 +0000 UTC - event for execpod-affinity9mqvn: {kubelet node2} Started: Started container agnhost-container
Oct 30 01:15:12.966: INFO: At 2021-10-30 01:12:59 +0000 UTC - event for execpod-affinity9mqvn: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 01:15:12.966: INFO: At 2021-10-30 01:12:59 +0000 UTC - event for execpod-affinity9mqvn: {kubelet node2} Created: Created container agnhost-container
Oct 30 01:15:12.966: INFO: At 2021-10-30 01:15:04 +0000 UTC - event for affinity-nodeport-transition-5j9d8: {kubelet node2} Killing: Stopping container affinity-nodeport-transition
Oct 30 01:15:12.966: INFO: At 2021-10-30 01:15:04 +0000 UTC - event for affinity-nodeport-transition-8xhb4: {kubelet node2} Killing: Stopping container affinity-nodeport-transition
Oct 30 01:15:12.966: INFO: At 2021-10-30 01:15:04 +0000 UTC - event for affinity-nodeport-transition-xw4wb: {kubelet node1} Killing: Stopping container affinity-nodeport-transition
Oct 30 01:15:12.966: INFO: At 2021-10-30 01:15:04 +0000 UTC - event for execpod-affinity9mqvn: {kubelet node2} Killing: Stopping container agnhost-container
Oct 30 01:15:12.968: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 30 01:15:12.968: INFO:
Oct 30 01:15:12.972: INFO: Logging node info for node master1
Oct 30 01:15:12.974: INFO: Node Info: &Node{ObjectMeta:{master1 b47c04d5-47a7-4a95-8e97-481e6e60af54 86529 0 2021-10-29 21:05:34 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-29 21:05:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-10-29 21:05:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-10-29 21:08:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2021-10-29 21:13:13 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:27 +0000 UTC,LastTransitionTime:2021-10-29 21:11:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 01:15:12 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 01:15:12 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 01:15:12 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 01:15:12 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5d3ed60c561e427db72df14bd9006ed0,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:01b9d6bc-4126-4864-a1df-901a1bee4906,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:b7d2fa8154ac5d9cff45866e4d3d210a7d390f8576611c301a2eed2b57273227 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 01:15:12.975: INFO: Logging kubelet events for node master1 Oct 30 01:15:12.977: INFO: Logging pods the kubelet 
thinks are on node master1
Oct 30 01:15:12.997: INFO: kube-controller-manager-master1 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:15:12.997: INFO: Container kube-controller-manager ready: true, restart count 2
Oct 30 01:15:12.997: INFO: kube-flannel-d4pmt started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 01:15:12.997: INFO: Init container install-cni ready: true, restart count 0
Oct 30 01:15:12.997: INFO: Container kube-flannel ready: true, restart count 2
Oct 30 01:15:12.997: INFO: kube-multus-ds-amd64-wgkfq started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:15:12.997: INFO: Container kube-multus ready: true, restart count 1
Oct 30 01:15:12.997: INFO: kube-apiserver-master1 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:15:12.997: INFO: Container kube-apiserver ready: true, restart count 0
Oct 30 01:15:12.997: INFO: kube-proxy-z5k8p started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:15:12.997: INFO: Container kube-proxy ready: true, restart count 1
Oct 30 01:15:12.997: INFO: coredns-8474476ff8-lczbr started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:15:12.997: INFO: Container coredns ready: true, restart count 1
Oct 30 01:15:12.997: INFO: container-registry-65d7c44b96-zzkfl started at 2021-10-29 21:12:56 +0000 UTC (0+2 container statuses recorded)
Oct 30 01:15:12.997: INFO: Container docker-registry ready: true, restart count 0
Oct 30 01:15:12.997: INFO: Container nginx ready: true, restart count 0
Oct 30 01:15:12.997: INFO: node-exporter-fv84w started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 01:15:12.997: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 30 01:15:12.997: INFO: Container node-exporter ready: true, restart count 0
Oct 30 01:15:12.997: INFO: kube-scheduler-master1 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:15:12.997: INFO: Container kube-scheduler ready: true, restart count 0
W1030 01:15:13.010720 32 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
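The per-node pod listings above come from querying the pods bound to a given node. One way to reproduce such a listing with client-go is a field selector on spec.nodeName (a sketch assuming the same kubeconfig as above; not the framework's exact code):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// An empty namespace ("") lists pods across all namespaces; the
	// field selector restricts the result to pods bound to one node.
	pods, err := client.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=master1",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}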
Oct 30 01:15:13.086: INFO: Latency metrics for node master1 Oct 30 01:15:13.086: INFO: Logging node info for node master2 Oct 30 01:15:13.089: INFO: Node Info: &Node{ObjectMeta:{master2 208792d3-d365-4ddb-83d4-10e6e818079c 86465 0 2021-10-29 21:06:06 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-29 21:06:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-29 21:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-10-29 21:18:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:19 +0000 
UTC,LastTransitionTime:2021-10-29 21:11:19 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 01:15:09 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 01:15:09 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 01:15:09 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 01:15:09 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:12290c1916d84ddda20431c28083da6a,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:314e82b8-9747-4131-b883-220496309995,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 30 01:15:13.089: INFO: Logging kubelet events for node master2
Oct 30 01:15:13.091: INFO: Logging pods the kubelet thinks are on node master2
Oct 30 01:15:13.098: INFO: node-exporter-lc9kk started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 01:15:13.098: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 30 01:15:13.098: INFO: Container node-exporter ready: true, restart count 0
Oct 30 01:15:13.098: INFO: kube-apiserver-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:15:13.098: INFO: Container kube-apiserver ready: true, restart count 0
Oct 30 01:15:13.098: INFO: kube-controller-manager-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:15:13.098: INFO: Container kube-controller-manager ready: true, restart count 3
Oct 30 01:15:13.098: INFO: kube-scheduler-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:15:13.098: INFO: Container kube-scheduler ready: true, restart count 2
Oct 30 01:15:13.098: INFO: kube-proxy-5gz4v started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:15:13.098: INFO: Container kube-proxy ready: true, restart count 2
Oct 30 01:15:13.098: INFO: kube-flannel-qvqll started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 01:15:13.098: INFO: Init container install-cni ready: true, restart count 2
Oct 30 01:15:13.098: INFO: Container kube-flannel ready: true, restart count 1
Oct 30 01:15:13.098: INFO: kube-multus-ds-amd64-brkpk started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:15:13.098: INFO: Container kube-multus ready: true, restart count 1
W1030 01:15:13.110554 32 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
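The Node Info blocks in this dump print the entire serialized v1.Node object; when triaging a failure like this one, usually only the node conditions matter. A client-go sketch that fetches one node and prints just its condition summary (node name taken from the log; the formatting is illustrative):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	node, err := client.CoreV1().Nodes().Get(context.TODO(), "master2", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Print only the condition summary instead of the full object dump.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-20s %-6s %s: %s\n", c.Type, c.Status, c.Reason, c.Message)
	}
}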
Oct 30 01:15:13.173: INFO: Latency metrics for node master2 Oct 30 01:15:13.173: INFO: Logging node info for node master3 Oct 30 01:15:13.175: INFO: Node Info: &Node{ObjectMeta:{master3 168f1589-e029-47ae-b194-10215fc22d6a 86444 0 2021-10-29 21:06:17 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-29 21:06:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-29 21:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-10-29 21:16:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-10-29 21:16:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 
0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:36 +0000 UTC,LastTransitionTime:2021-10-29 21:11:36 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 01:15:07 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 01:15:07 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 01:15:07 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 01:15:07 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:de18dcb6cb4c493e9f4d987da2c8b3fd,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:89235c4b-b1f5-4716-bbd7-18b41c0bde74,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 30 01:15:13.176: INFO: Logging kubelet events for node master3
Oct 30 01:15:13.178: INFO: Logging pods the kubelet thinks are on node master3
Oct 30 01:15:13.187: INFO: kube-apiserver-master3 started at 2021-10-29 21:11:10 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:15:13.187: INFO: Container kube-apiserver ready: true, restart count 0
Oct 30 01:15:13.187: INFO: kube-scheduler-master3 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:15:13.187: INFO: Container kube-scheduler ready: true, restart count 2
Oct 30 01:15:13.187: INFO: dns-autoscaler-7df78bfcfb-phsdx started at 2021-10-29 21:09:02 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:15:13.187: INFO: Container autoscaler ready: true, restart count 1
Oct 30 01:15:13.187: INFO: node-feature-discovery-controller-cff799f9f-qq7g4 started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:15:13.187: INFO: Container nfd-controller ready: true, restart count 0
Oct 30 01:15:13.187: INFO: kube-controller-manager-master3 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:15:13.187: INFO: Container kube-controller-manager ready: true, restart count 1
Oct 30 01:15:13.187: INFO: kube-proxy-r6fpx started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:15:13.188: INFO: Container kube-proxy ready: true, restart count 1
Oct 30 01:15:13.188: INFO: kube-flannel-rbdlt started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 01:15:13.188: INFO: Init container install-cni ready: true, restart count 2
Oct 30 01:15:13.188: INFO: Container kube-flannel ready: true, restart count 2
Oct 30 01:15:13.188: INFO: kube-multus-ds-amd64-bdwh9 started at 2021-10-29
21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 01:15:13.188: INFO: Container kube-multus ready: true, restart count 1 Oct 30 01:15:13.188: INFO: coredns-8474476ff8-wrwwv started at 2021-10-29 21:09:00 +0000 UTC (0+1 container statuses recorded) Oct 30 01:15:13.188: INFO: Container coredns ready: true, restart count 1 Oct 30 01:15:13.188: INFO: prometheus-operator-585ccfb458-czbr2 started at 2021-10-29 21:21:06 +0000 UTC (0+2 container statuses recorded) Oct 30 01:15:13.188: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:15:13.188: INFO: Container prometheus-operator ready: true, restart count 0 Oct 30 01:15:13.188: INFO: node-exporter-bv946 started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 01:15:13.188: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:15:13.188: INFO: Container node-exporter ready: true, restart count 0 W1030 01:15:13.203198 32 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 01:15:13.280: INFO: Latency metrics for node master3 Oct 30 01:15:13.280: INFO: Logging node info for node node1 Oct 30 01:15:13.282: INFO: Node Info: &Node{ObjectMeta:{node1 ddef9269-94c5-4165-81fb-a3b0c4ac5c75 86435 0 2021-10-29 21:07:27 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 
feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-29 21:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-29 21:19:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-29 21:19:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:38 +0000 UTC,LastTransitionTime:2021-10-29 21:11:38 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 01:15:06 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 01:15:06 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 01:15:06 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 01:15:06 +0000 UTC,LastTransitionTime:2021-10-29 21:08:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3bf4179125e4495c89c046ed0ae7baf7,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:ce868148-dc5e-4c7c-a555-42ee929547f7,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432289,},ContainerImage{Names:[localhost:30500/cmk@sha256:430843a71fa03faf488543c9f5b50d3efbef49988d6784f9f48b8077cc806f60 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2f1ff7ac170c0ac8079e232ea4ee89d23b7906d1b824d901927acb4e399c52c9 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 01:15:13.283: INFO: Logging kubelet events for node node1 Oct 30 01:15:13.285: INFO: Logging pods the kubelet thinks is on node node1 Oct 30 01:15:13.299: INFO: kube-flannel-phg88 started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 01:15:13.299: INFO: Init container install-cni ready: true, restart count 2 Oct 30 01:15:13.299: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 01:15:13.299: INFO: nodeport-test-2tzcl started at 2021-10-30 01:14:52 +0000 UTC (0+1 container statuses recorded) Oct 30 01:15:13.299: INFO: Container 
nodeport-test ready: true, restart count 0 Oct 30 01:15:13.299: INFO: kube-proxy-z5hqt started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 01:15:13.299: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 01:15:13.299: INFO: collectd-d45rv started at 2021-10-29 21:25:13 +0000 UTC (0+3 container statuses recorded) Oct 30 01:15:13.299: INFO: Container collectd ready: true, restart count 0 Oct 30 01:15:13.299: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 01:15:13.299: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 01:15:13.299: INFO: nginx-proxy-node1 started at 2021-10-29 21:07:28 +0000 UTC (0+1 container statuses recorded) Oct 30 01:15:13.299: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 01:15:13.299: INFO: cmk-init-discover-node1-n4mcc started at 2021-10-29 21:19:28 +0000 UTC (0+3 container statuses recorded) Oct 30 01:15:13.299: INFO: Container discover ready: false, restart count 0 Oct 30 01:15:13.299: INFO: Container init ready: false, restart count 0 Oct 30 01:15:13.299: INFO: Container install ready: false, restart count 0 Oct 30 01:15:13.299: INFO: cmk-89lqq started at 2021-10-29 21:20:10 +0000 UTC (0+2 container statuses recorded) Oct 30 01:15:13.299: INFO: Container nodereport ready: true, restart count 0 Oct 30 01:15:13.299: INFO: Container reconcile ready: true, restart count 0 Oct 30 01:15:13.299: INFO: execpodtvtst started at 2021-10-30 01:14:58 +0000 UTC (0+1 container statuses recorded) Oct 30 01:15:13.299: INFO: Container agnhost-container ready: true, restart count 0 Oct 30 01:15:13.299: INFO: kube-multus-ds-amd64-68wrz started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 01:15:13.299: INFO: Container kube-multus ready: true, restart count 1 Oct 30 01:15:13.299: INFO: test-webserver-5273e099-3d7b-4c2b-9e8d-704e4a24361c started at 2021-10-30 01:11:45 +0000 UTC (0+1 container statuses recorded) Oct 30 01:15:13.299: INFO: Container test-webserver ready: true, restart count 0 Oct 30 01:15:13.299: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-t789r started at 2021-10-29 21:17:10 +0000 UTC (0+1 container statuses recorded) Oct 30 01:15:13.299: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 01:15:13.299: INFO: liveness-0dabe50d-844e-45e6-afe9-2a19099ecbd0 started at 2021-10-30 01:13:15 +0000 UTC (0+1 container statuses recorded) Oct 30 01:15:13.299: INFO: Container agnhost-container ready: true, restart count 0 Oct 30 01:15:13.299: INFO: kubernetes-metrics-scraper-5558854cb-5rmjw started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded) Oct 30 01:15:13.299: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 30 01:15:13.299: INFO: node-exporter-256wm started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 01:15:13.299: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:15:13.299: INFO: Container node-exporter ready: true, restart count 0 Oct 30 01:15:13.299: INFO: prometheus-k8s-0 started at 2021-10-29 21:21:17 +0000 UTC (0+4 container statuses recorded) Oct 30 01:15:13.299: INFO: Container config-reloader ready: true, restart count 0 Oct 30 01:15:13.299: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 30 01:15:13.299: INFO: Container grafana ready: true, restart count 0 Oct 30 01:15:13.299: INFO: Container prometheus ready: true, restart count 1 Oct 30 01:15:13.299: INFO: 
test-webserver-4c319eb2-abfb-4e22-bbe5-cae41590c5bb started at 2021-10-30 01:15:12 +0000 UTC (0+1 container statuses recorded) Oct 30 01:15:13.299: INFO: Container test-webserver ready: false, restart count 0 Oct 30 01:15:13.299: INFO: node-feature-discovery-worker-w5vdb started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded) Oct 30 01:15:13.299: INFO: Container nfd-worker ready: true, restart count 0 W1030 01:15:13.311793 32 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 01:15:13.687: INFO: Latency metrics for node node1 Oct 30 01:15:13.687: INFO: Logging node info for node node2 Oct 30 01:15:13.690: INFO: Node Info: &Node{ObjectMeta:{node2 3b49ad19-ba56-4f4a-b1fa-eef102063de9 86504 0 2021-10-29 21:07:28 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-29 21:08:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-29 21:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-29 21:19:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-29 21:19:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:34 +0000 UTC,LastTransitionTime:2021-10-29 21:11:34 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 01:15:10 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 01:15:10 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 01:15:10 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 01:15:10 +0000 UTC,LastTransitionTime:2021-10-29 21:08:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7283436dd9e34722a6e4df817add95ed,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:c219e7bd-582b-4d6c-b379-1161acc70676,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:430843a71fa03faf488543c9f5b50d3efbef49988d6784f9f48b8077cc806f60 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 
k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2f1ff7ac170c0ac8079e232ea4ee89d23b7906d1b824d901927acb4e399c52c9 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[localhost:30500/tasextender@sha256:b7d2fa8154ac5d9cff45866e4d3d210a7d390f8576611c301a2eed2b57273227 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 01:15:13.691: INFO: Logging kubelet events for node node2 Oct 30 01:15:13.693: INFO: Logging pods the kubelet thinks is on node node2 Oct 30 01:15:13.705: INFO: nginx-proxy-node2 started at 2021-10-29 21:07:28 +0000 UTC (0+1 container statuses recorded) Oct 30 01:15:13.705: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 01:15:13.705: INFO: kubernetes-dashboard-785dcbb76d-pbjjt started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses 
recorded) Oct 30 01:15:13.705: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 30 01:15:13.705: INFO: cmk-init-discover-node2-2fmmt started at 2021-10-29 21:19:48 +0000 UTC (0+3 container statuses recorded) Oct 30 01:15:13.705: INFO: Container discover ready: false, restart count 0 Oct 30 01:15:13.705: INFO: Container init ready: false, restart count 0 Oct 30 01:15:13.705: INFO: Container install ready: false, restart count 0 Oct 30 01:15:13.705: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg started at 2021-10-29 21:17:10 +0000 UTC (0+1 container statuses recorded) Oct 30 01:15:13.705: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 01:15:13.705: INFO: cmk-8bpbf started at 2021-10-29 21:20:11 +0000 UTC (0+2 container statuses recorded) Oct 30 01:15:13.705: INFO: Container nodereport ready: true, restart count 0 Oct 30 01:15:13.705: INFO: Container reconcile ready: true, restart count 0 Oct 30 01:15:13.705: INFO: node-exporter-r77s4 started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 01:15:13.705: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:15:13.705: INFO: Container node-exporter ready: true, restart count 0 Oct 30 01:15:13.705: INFO: tas-telemetry-aware-scheduling-84ff454dfb-989mh started at 2021-10-29 21:24:23 +0000 UTC (0+1 container statuses recorded) Oct 30 01:15:13.705: INFO: Container tas-extender ready: true, restart count 0 Oct 30 01:15:13.705: INFO: collectd-flvhl started at 2021-10-29 21:25:13 +0000 UTC (0+3 container statuses recorded) Oct 30 01:15:13.706: INFO: Container collectd ready: true, restart count 0 Oct 30 01:15:13.706: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 01:15:13.706: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 01:15:13.706: INFO: kube-proxy-76285 started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 01:15:13.706: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 01:15:13.706: INFO: test-pod started at 2021-10-30 01:12:21 +0000 UTC (0+1 container statuses recorded) Oct 30 01:15:13.706: INFO: Container webserver ready: true, restart count 0 Oct 30 01:15:13.706: INFO: node-feature-discovery-worker-h6lcp started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded) Oct 30 01:15:13.706: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 01:15:13.706: INFO: nodeport-test-fsrr5 started at 2021-10-30 01:14:52 +0000 UTC (0+1 container statuses recorded) Oct 30 01:15:13.706: INFO: Container nodeport-test ready: true, restart count 0 Oct 30 01:15:13.706: INFO: sample-webhook-deployment-78988fc6cd-xf8dw started at 2021-10-30 01:15:09 +0000 UTC (0+1 container statuses recorded) Oct 30 01:15:13.706: INFO: Container sample-webhook ready: false, restart count 0 Oct 30 01:15:13.706: INFO: kube-flannel-f6s5v started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 01:15:13.706: INFO: Init container install-cni ready: true, restart count 2 Oct 30 01:15:13.706: INFO: Container kube-flannel ready: true, restart count 3 Oct 30 01:15:13.706: INFO: kube-multus-ds-amd64-7tvbl started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 01:15:13.706: INFO: Container kube-multus ready: true, restart count 1 Oct 30 01:15:13.706: INFO: cmk-webhook-6c9d5f8578-ffk66 started at 2021-10-29 21:20:11 +0000 UTC (0+1 container statuses recorded) Oct 30 01:15:13.706: INFO: Container cmk-webhook ready: true, restart count 0 W1030 
01:15:13.726130 32 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 01:15:13.987: INFO: Latency metrics for node node2 Oct 30 01:15:13.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9787" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [142.395 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:15:04.084: Unexpected error: <*errors.errorString | 0xc0049920e0>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31992 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31992 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572 ------------------------------ {"msg":"FAILED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":11,"skipped":153,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:15:14.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap that has name configmap-test-emptyKey-e700d3c9-22d7-42e4-b0f8-acafab84aa78 [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:15:14.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8385" for this suite. 
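The Services failure recorded above is a reachability timeout: the framework repeatedly dials the NodePort until a two-minute deadline passes, and every dial to 10.10.190.207:31992 failed. A minimal stand-alone sketch of that kind of TCP probe, reusing the endpoint from the failure message; this is an illustration of the check's shape, not the e2e framework's own helper:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// probeNodePort dials host:port repeatedly until it connects or the
// deadline expires, mirroring the "service is not reachable within
// 2m0s timeout ... over TCP protocol" check reported in the log above.
func probeNodePort(host string, port int, timeout time.Duration) error {
	addr := net.JoinHostPort(host, fmt.Sprint(port))
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil // endpoint answered
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("service is not reachable within %v timeout on endpoint %s over TCP protocol", timeout, addr)
}

func main() {
	// Endpoint values taken from the failure message above.
	if err := probeNodePort("10.10.190.207", 31992, 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```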
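The ConfigMap spec that follows the failure exercises apiserver validation rather than any node behavior: a ConfigMap whose Data map contains an empty key must be rejected at creation time. A small client-go sketch of the same request; the namespace and object name here are illustrative, and the kubeconfig path is the one the suite logs:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-emptykey-demo"},
		Data:       map[string]string{"": "value"}, // empty key is invalid
	}
	_, err = client.CoreV1().ConfigMaps("default").Create(context.TODO(), cm, metav1.CreateOptions{})
	// Expect an "Invalid" API error naming the empty data key; the spec
	// above passes precisely because this create is rejected.
	fmt.Println(err)
}
```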
• ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":12,"skipped":166,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} S ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:15:09.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 30 01:15:09.787: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 30 01:15:11.797: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153309, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153309, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153309, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153309, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:15:13.801: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153309, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153309, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153309, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153309, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 30 01:15:16.808: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:15:16.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6773" for this suite. STEP: Destroying namespace "webhook-6773-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.401 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":25,"skipped":620,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:15:14.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:15:14.093: INFO: The status of Pod busybox-scheduling-8b855415-3562-48ba-8f36-879250a151dc is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:15:16.096: INFO: The status of Pod busybox-scheduling-8b855415-3562-48ba-8f36-879250a151dc is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:15:18.096: INFO: The status of Pod busybox-scheduling-8b855415-3562-48ba-8f36-879250a151dc is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:15:18.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9136" for this suite. 
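The AdmissionWebhook spec above verifies a self-protection guarantee: registered webhooks must not be able to mutate or block admission of ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects themselves, so a dummy configuration stays creatable and deletable. A hedged client-go sketch of that create-then-delete step, with an illustrative object name:

```go
package main

import (
	"context"

	admissionregv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ctx := context.TODO()

	// A dummy configuration with no webhooks. Even with intercepting
	// webhooks registered, the API server must admit and delete it.
	dummy := &admissionregv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "dummy-validating-webhook-configuration"},
	}
	if _, err := client.AdmissionregistrationV1().ValidatingWebhookConfigurations().Create(ctx, dummy, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	if err := client.AdmissionregistrationV1().ValidatingWebhookConfigurations().Delete(ctx, dummy.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}
```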
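The Kubelet spec above schedules a busybox pod that echoes to stdout and then checks the output through the pod log endpoint. A rough client-go sketch of that flow under assumed names; the wait-for-completion step between create and log read is elided here:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ctx := context.TODO()

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-logs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox:1.28",
				Command: []string{"sh", "-c", "echo hello from busybox"},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// ... wait for the pod to reach Succeeded, then read its log back.
	raw, err := client.CoreV1().Pods("default").GetLogs(pod.Name, &corev1.PodLogOptions{}).Do(ctx).Raw()
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod log: %q\n", string(raw))
}
```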
• ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":167,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:15:16.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-c8dc2e66-4d34-44c3-ad19-76f09f64647e STEP: Creating a pod to test consume secrets Oct 30 01:15:16.984: INFO: Waiting up to 5m0s for pod "pod-secrets-b5396210-2705-4751-b90d-c6868d334003" in namespace "secrets-1463" to be "Succeeded or Failed" Oct 30 01:15:16.986: INFO: Pod "pod-secrets-b5396210-2705-4751-b90d-c6868d334003": Phase="Pending", Reason="", readiness=false. Elapsed: 2.230836ms Oct 30 01:15:18.991: INFO: Pod "pod-secrets-b5396210-2705-4751-b90d-c6868d334003": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006701954s Oct 30 01:15:21.011: INFO: Pod "pod-secrets-b5396210-2705-4751-b90d-c6868d334003": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027564298s STEP: Saw pod success Oct 30 01:15:21.011: INFO: Pod "pod-secrets-b5396210-2705-4751-b90d-c6868d334003" satisfied condition "Succeeded or Failed" Oct 30 01:15:21.014: INFO: Trying to get logs from node node2 pod pod-secrets-b5396210-2705-4751-b90d-c6868d334003 container secret-volume-test: STEP: delete the pod Oct 30 01:15:21.027: INFO: Waiting for pod pod-secrets-b5396210-2705-4751-b90d-c6868d334003 to disappear Oct 30 01:15:21.028: INFO: Pod pod-secrets-b5396210-2705-4751-b90d-c6868d334003 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:15:21.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1463" for this suite. 
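The Secrets spec above follows the pattern most of these volume conformance specs share: create a Secret, mount it into a pod as a volume, and assert the pod reaches "Succeeded or Failed" with the expected content in its log. A sketch of the two objects involved, with illustrative names and the `default` namespace assumed:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ctx := context.TODO()

	// The secret the pod will consume.
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-demo"},
		StringData: map[string]string{"data-1": "value-1"},
	}
	if _, err := client.CoreV1().Secrets("default").Create(ctx, secret, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// A pod that mounts the secret and reads the key back out; the suite
	// then waits for phase Succeeded and compares the container log.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: secret.Name},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "secret-volume-test",
				Image:        "busybox:1.28",
				Command:      []string{"sh", "-c", "cat /etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```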
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":651,"failed":0} S ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:15:18.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes Oct 30 01:15:18.164: INFO: The status of Pod pod-update-eee4c418-1396-489b-ab47-74e23d5a648b is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:15:20.168: INFO: The status of Pod pod-update-eee4c418-1396-489b-ab47-74e23d5a648b is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:15:22.168: INFO: The status of Pod pod-update-eee4c418-1396-489b-ab47-74e23d5a648b is Running (Ready = true) STEP: verifying the pod is in kubernetes STEP: updating the pod Oct 30 01:15:22.682: INFO: Successfully updated pod "pod-update-eee4c418-1396-489b-ab47-74e23d5a648b" STEP: verifying the updated pod is in kubernetes Oct 30 01:15:22.687: INFO: Pod update OK [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:15:22.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-62" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":171,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:15:21.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name projected-secret-test-04dc2d5c-e33b-4037-96fd-aa4701364988 STEP: Creating a pod to test consume secrets Oct 30 01:15:21.076: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0346882c-95d1-4685-a979-bea88066d705" in namespace "projected-7788" to be "Succeeded or Failed" Oct 30 01:15:21.080: INFO: Pod "pod-projected-secrets-0346882c-95d1-4685-a979-bea88066d705": Phase="Pending", Reason="", readiness=false. Elapsed: 4.000773ms Oct 30 01:15:23.083: INFO: Pod "pod-projected-secrets-0346882c-95d1-4685-a979-bea88066d705": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007653847s Oct 30 01:15:25.087: INFO: Pod "pod-projected-secrets-0346882c-95d1-4685-a979-bea88066d705": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010902447s STEP: Saw pod success Oct 30 01:15:25.087: INFO: Pod "pod-projected-secrets-0346882c-95d1-4685-a979-bea88066d705" satisfied condition "Succeeded or Failed" Oct 30 01:15:25.089: INFO: Trying to get logs from node node2 pod pod-projected-secrets-0346882c-95d1-4685-a979-bea88066d705 container secret-volume-test: STEP: delete the pod Oct 30 01:15:25.103: INFO: Waiting for pod pod-projected-secrets-0346882c-95d1-4685-a979-bea88066d705 to disappear Oct 30 01:15:25.105: INFO: Pod pod-projected-secrets-0346882c-95d1-4685-a979-bea88066d705 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:15:25.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7788" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":652,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:15:22.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on tmpfs Oct 30 01:15:22.785: INFO: Waiting up to 5m0s for pod "pod-b9fbecd2-e102-40c9-8cbd-c121df1b95dc" in namespace "emptydir-6265" to be "Succeeded or Failed" Oct 30 01:15:22.787: INFO: Pod "pod-b9fbecd2-e102-40c9-8cbd-c121df1b95dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.224981ms Oct 30 01:15:24.791: INFO: Pod "pod-b9fbecd2-e102-40c9-8cbd-c121df1b95dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006332769s Oct 30 01:15:26.796: INFO: Pod "pod-b9fbecd2-e102-40c9-8cbd-c121df1b95dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010687898s STEP: Saw pod success Oct 30 01:15:26.796: INFO: Pod "pod-b9fbecd2-e102-40c9-8cbd-c121df1b95dc" satisfied condition "Succeeded or Failed" Oct 30 01:15:26.798: INFO: Trying to get logs from node node1 pod pod-b9fbecd2-e102-40c9-8cbd-c121df1b95dc container test-container: STEP: delete the pod Oct 30 01:15:26.812: INFO: Waiting for pod pod-b9fbecd2-e102-40c9-8cbd-c121df1b95dc to disappear Oct 30 01:15:26.814: INFO: Pod pod-b9fbecd2-e102-40c9-8cbd-c121df1b95dc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:15:26.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6265" for this suite. 
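------------------------------
The emptydir 0777-on-tmpfs spec above reduces to a pod whose emptyDir volume sets medium "Memory", which the kubelet backs with tmpfs. A sketch of such a pod object, assuming a busybox image, UID 1001 for "non-root", and an illustrative command:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    uid := int64(1001) // "non-root" means any non-zero UID; 1001 is an assumption
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:            "test-container",
                Image:           "busybox", // assumption; the suite uses its own mounttest image
                Command:         []string{"sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f"},
                SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
                VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    // Medium "Memory" is what backs the emptyDir with tmpfs.
                    EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                },
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
------------------------------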
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":203,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:15:26.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Oct 30 01:15:26.892: INFO: Waiting up to 5m0s for pod "downward-api-7c2778cd-f4aa-4785-8f80-366a027b5e20" in namespace "downward-api-412" to be "Succeeded or Failed" Oct 30 01:15:26.894: INFO: Pod "downward-api-7c2778cd-f4aa-4785-8f80-366a027b5e20": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027177ms Oct 30 01:15:28.898: INFO: Pod "downward-api-7c2778cd-f4aa-4785-8f80-366a027b5e20": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006570346s Oct 30 01:15:30.902: INFO: Pod "downward-api-7c2778cd-f4aa-4785-8f80-366a027b5e20": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010758032s Oct 30 01:15:32.907: INFO: Pod "downward-api-7c2778cd-f4aa-4785-8f80-366a027b5e20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015346489s STEP: Saw pod success Oct 30 01:15:32.907: INFO: Pod "downward-api-7c2778cd-f4aa-4785-8f80-366a027b5e20" satisfied condition "Succeeded or Failed" Oct 30 01:15:32.909: INFO: Trying to get logs from node node2 pod downward-api-7c2778cd-f4aa-4785-8f80-366a027b5e20 container dapi-container: STEP: delete the pod Oct 30 01:15:32.921: INFO: Waiting for pod downward-api-7c2778cd-f4aa-4785-8f80-366a027b5e20 to disappear Oct 30 01:15:32.923: INFO: Pod downward-api-7c2778cd-f4aa-4785-8f80-366a027b5e20 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:15:32.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-412" for this suite. 
• [SLOW TEST:6.075 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":223,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:15:32.969: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslicemirroring STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslicemirroring.go:39 [It] should mirror a custom Endpoints resource through create update and delete [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: mirroring a new custom Endpoint Oct 30 01:15:33.000: INFO: Waiting for at least 1 EndpointSlice to exist, got 0 STEP: mirroring an update to a custom Endpoint Oct 30 01:15:35.013: INFO: Expected EndpointSlice to have 10.2.3.4 as address, got 10.1.2.3 STEP: mirroring deletion of a custom Endpoint Oct 30 01:15:37.027: INFO: Waiting for 0 EndpointSlices to exist, got 1 [AfterEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:15:39.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslicemirroring-9119" for this suite. 
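------------------------------
The EndpointSliceMirroring spec above exercises the mirroring controller: a hand-written Endpoints object shadowing a selector-less Service is copied into an EndpointSlice, and updates and deletes follow it. A sketch in client-go, assuming the namespace and service name shown and that a selector-less Service of the same name already exists:

package main

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    ctx := context.Background()
    ns, svc := "default", "example-custom-endpoints" // assumptions

    // A hand-written Endpoints object named after the selector-less Service.
    // 10.1.2.3 is the address the spec above later updates to 10.2.3.4.
    if _, err := cs.CoreV1().Endpoints(ns).Create(ctx, &corev1.Endpoints{
        ObjectMeta: metav1.ObjectMeta{Name: svc},
        Subsets: []corev1.EndpointSubset{{
            Addresses: []corev1.EndpointAddress{{IP: "10.1.2.3"}},
            Ports:     []corev1.EndpointPort{{Name: "http", Port: 80, Protocol: corev1.ProtocolTCP}},
        }},
    }, metav1.CreateOptions{}); err != nil {
        panic(err)
    }

    // Wait for the mirrored EndpointSlice, selected by the well-known label.
    err = wait.PollImmediate(time.Second, time.Minute, func() (bool, error) {
        slices, err := cs.DiscoveryV1().EndpointSlices(ns).List(ctx,
            metav1.ListOptions{LabelSelector: "kubernetes.io/service-name=" + svc})
        if err != nil {
            return false, err
        }
        return len(slices.Items) >= 1, nil
    })
    if err != nil {
        panic(err)
    }
    fmt.Println("EndpointSlice mirrored")
}
------------------------------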
• [SLOW TEST:6.070 seconds] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should mirror a custom Endpoints resource through create update and delete [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":17,"skipped":245,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:14:37.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W1030 01:14:38.295983 28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 01:15:40.314: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:15:40.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4613" for this suite. • [SLOW TEST:63.098 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":15,"skipped":243,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:14:29.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W1030 01:14:39.855796 30 metrics_grabber.go:105] Did not receive an external client interface. 
Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 01:15:41.872: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:15:41.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7617" for this suite. • [SLOW TEST:72.073 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":19,"skipped":357,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:14:47.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:14:47.839: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:15:48.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-408" for this suite. 
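------------------------------
The two garbage-collector specs in this stretch delete a Deployment and a ReplicationController without orphaning, then wait for the dependent ReplicaSets and pods to disappear. In API terms, "not orphaning" is a deletion propagation policy; a minimal sketch with namespace and object name as placeholders:

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    // "Not orphaning" corresponds to Background (or Foreground) propagation:
    // the garbage collector removes the dependent ReplicaSets and pods.
    // Orphaning would be metav1.DeletePropagationOrphan instead.
    policy := metav1.DeletePropagationBackground
    err = cs.AppsV1().Deployments("default").Delete(context.Background(), "my-deployment",
        metav1.DeleteOptions{PropagationPolicy: &policy})
    if err != nil {
        panic(err)
    }
    fmt.Println("deletion submitted; dependents will be garbage collected")
}
------------------------------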
• [SLOW TEST:60.822 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":-1,"completed":17,"skipped":253,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:15:48.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir volume type on tmpfs Oct 30 01:15:48.681: INFO: Waiting up to 5m0s for pod "pod-fa11f108-acd3-4333-ac77-b5b0a9e1e1a4" in namespace "emptydir-629" to be "Succeeded or Failed" Oct 30 01:15:48.683: INFO: Pod "pod-fa11f108-acd3-4333-ac77-b5b0a9e1e1a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087827ms Oct 30 01:15:50.686: INFO: Pod "pod-fa11f108-acd3-4333-ac77-b5b0a9e1e1a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005849138s Oct 30 01:15:52.689: INFO: Pod "pod-fa11f108-acd3-4333-ac77-b5b0a9e1e1a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008788726s STEP: Saw pod success Oct 30 01:15:52.689: INFO: Pod "pod-fa11f108-acd3-4333-ac77-b5b0a9e1e1a4" satisfied condition "Succeeded or Failed" Oct 30 01:15:52.691: INFO: Trying to get logs from node node1 pod pod-fa11f108-acd3-4333-ac77-b5b0a9e1e1a4 container test-container: STEP: delete the pod Oct 30 01:15:52.704: INFO: Waiting for pod pod-fa11f108-acd3-4333-ac77-b5b0a9e1e1a4 to disappear Oct 30 01:15:52.706: INFO: Pod pod-fa11f108-acd3-4333-ac77-b5b0a9e1e1a4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:15:52.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-629" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":256,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:15:40.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name cm-test-opt-del-a0e31883-0721-4b32-8f6e-885167e1da0a STEP: Creating configMap with name cm-test-opt-upd-3096c59f-7b12-45dc-9ade-aa35acafebe8 STEP: Creating the pod Oct 30 01:15:40.391: INFO: The status of Pod pod-configmaps-07dc6b03-28ab-4dd3-9ad6-50e267ddfeb3 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:15:42.395: INFO: The status of Pod pod-configmaps-07dc6b03-28ab-4dd3-9ad6-50e267ddfeb3 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:15:44.396: INFO: The status of Pod pod-configmaps-07dc6b03-28ab-4dd3-9ad6-50e267ddfeb3 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:15:46.395: INFO: The status of Pod pod-configmaps-07dc6b03-28ab-4dd3-9ad6-50e267ddfeb3 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:15:48.396: INFO: The status of Pod pod-configmaps-07dc6b03-28ab-4dd3-9ad6-50e267ddfeb3 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:15:50.395: INFO: The status of Pod pod-configmaps-07dc6b03-28ab-4dd3-9ad6-50e267ddfeb3 is Running (Ready = true) STEP: Deleting configmap cm-test-opt-del-a0e31883-0721-4b32-8f6e-885167e1da0a STEP: Updating configmap cm-test-opt-upd-3096c59f-7b12-45dc-9ade-aa35acafebe8 STEP: Creating configMap with name cm-test-opt-create-4a1e529b-f179-4a89-92e0-07b7d1efd728 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:15:54.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5798" for this suite. 
• [SLOW TEST:14.169 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":252,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:11:45.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod test-webserver-5273e099-3d7b-4c2b-9e8d-704e4a24361c in namespace container-probe-3692 Oct 30 01:11:55.370: INFO: Started pod test-webserver-5273e099-3d7b-4c2b-9e8d-704e4a24361c in namespace container-probe-3692 STEP: checking the pod's current state and verifying that restartCount is present Oct 30 01:11:55.372: INFO: Initial restart count of pod test-webserver-5273e099-3d7b-4c2b-9e8d-704e4a24361c is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:15:55.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3692" for this suite. 
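------------------------------
The container-probe spec above runs a webserver pod with an HTTP liveness probe for about four minutes and asserts that restartCount stays at 0. A sketch of such a probe; the path, port, and thresholds are assumptions:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    probe := corev1.Probe{
        // The embedded handler struct was named Handler (not ProbeHandler)
        // in k8s.io/api before v0.22, which matches this run's v1.21 suite.
        ProbeHandler: corev1.ProbeHandler{
            HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(80)},
        },
        InitialDelaySeconds: 15, // illustrative values
        TimeoutSeconds:      5,
        FailureThreshold:    3,
    }
    out, _ := json.MarshalIndent(probe, "", "  ")
    fmt.Println(string(out))
    // The spec's assertion is then simply that
    // pod.Status.ContainerStatuses[0].RestartCount stays at 0.
}
------------------------------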
• [SLOW TEST:250.568 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":147,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:15:52.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:15:52.752: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-5f057258-ed3b-4317-a0f1-fedbf54acf14" in namespace "security-context-test-8911" to be "Succeeded or Failed" Oct 30 01:15:52.754: INFO: Pod "busybox-privileged-false-5f057258-ed3b-4317-a0f1-fedbf54acf14": Phase="Pending", Reason="", readiness=false. Elapsed: 1.937524ms Oct 30 01:15:54.757: INFO: Pod "busybox-privileged-false-5f057258-ed3b-4317-a0f1-fedbf54acf14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005187393s Oct 30 01:15:56.761: INFO: Pod "busybox-privileged-false-5f057258-ed3b-4317-a0f1-fedbf54acf14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008461151s Oct 30 01:15:56.761: INFO: Pod "busybox-privileged-false-5f057258-ed3b-4317-a0f1-fedbf54acf14" satisfied condition "Succeeded or Failed" Oct 30 01:15:56.768: INFO: Got logs for pod "busybox-privileged-false-5f057258-ed3b-4317-a0f1-fedbf54acf14": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:15:56.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8911" for this suite. 
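------------------------------
The Security Context spec above runs busybox with privileged set to false and confirms that network configuration fails, hence the "RTNETLINK answers: Operation not permitted" line in the log. A sketch of the container stanza, with the image and command as assumptions:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    privileged := false
    c := corev1.Container{
        Name:  "busybox-privileged-false",
        Image: "busybox", // assumption
        // Changing link state requires CAP_NET_ADMIN, which an unprivileged
        // container lacks, so this fails with "Operation not permitted".
        Command:         []string{"sh", "-c", "ip link set dev eth0 down"},
        SecurityContext: &corev1.SecurityContext{Privileged: &privileged},
    }
    out, _ := json.MarshalIndent(c, "", "  ")
    fmt.Println(string(out))
}
------------------------------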
• ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":259,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:15:55.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on node default medium Oct 30 01:15:55.950: INFO: Waiting up to 5m0s for pod "pod-8c21128b-7b93-438d-9ca4-d4ea8811a2e7" in namespace "emptydir-1379" to be "Succeeded or Failed" Oct 30 01:15:55.952: INFO: Pod "pod-8c21128b-7b93-438d-9ca4-d4ea8811a2e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.879585ms Oct 30 01:15:57.956: INFO: Pod "pod-8c21128b-7b93-438d-9ca4-d4ea8811a2e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006712097s Oct 30 01:15:59.961: INFO: Pod "pod-8c21128b-7b93-438d-9ca4-d4ea8811a2e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011613321s STEP: Saw pod success Oct 30 01:15:59.961: INFO: Pod "pod-8c21128b-7b93-438d-9ca4-d4ea8811a2e7" satisfied condition "Succeeded or Failed" Oct 30 01:15:59.964: INFO: Trying to get logs from node node2 pod pod-8c21128b-7b93-438d-9ca4-d4ea8811a2e7 container test-container: STEP: delete the pod Oct 30 01:15:59.977: INFO: Waiting for pod pod-8c21128b-7b93-438d-9ca4-d4ea8811a2e7 to disappear Oct 30 01:15:59.979: INFO: Pod pod-8c21128b-7b93-438d-9ca4-d4ea8811a2e7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:15:59.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1379" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":158,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:16:00.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should complete a service status lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Service STEP: watching for the Service to be added Oct 30 01:16:00.044: INFO: Found Service test-service-wclct in namespace services-7615 with labels: map[test-service-static:true] & ports [{http TCP 80 {0 80 } 0}] Oct 30 01:16:00.044: INFO: Service test-service-wclct created STEP: Getting /status Oct 30 01:16:00.047: INFO: Service test-service-wclct has LoadBalancer: {[]} STEP: patching the ServiceStatus STEP: watching for the Service to be patched Oct 30 01:16:00.051: INFO: observed Service test-service-wclct in namespace services-7615 with annotations: map[] & LoadBalancer: {[]} Oct 30 01:16:00.051: INFO: Found Service test-service-wclct in namespace services-7615 with annotations: map[patchedstatus:true] & LoadBalancer: {[{203.0.113.1 []}]} Oct 30 01:16:00.051: INFO: Service test-service-wclct has service status patched STEP: updating the ServiceStatus Oct 30 01:16:00.056: INFO: updatedStatus.Conditions: []v1.Condition{v1.Condition{Type:"StatusUpdate", Status:"True", ObservedGeneration:0, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} STEP: watching for the Service to be updated Oct 30 01:16:00.058: INFO: Observed Service test-service-wclct in namespace services-7615 with annotations: map[] & Conditions: {[]} Oct 30 01:16:00.058: INFO: Observed event: &Service{ObjectMeta:{test-service-wclct services-7615 2af00182-be80-4e79-91bd-14ea1b47d908 87526 0 2021-10-30 01:16:00 +0000 UTC map[test-service-static:true] map[patchedstatus:true] [] [] [{e2e.test Update v1 2021-10-30 01:16:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:patchedstatus":{}},"f:labels":{".":{},"f:test-service-static":{}}},"f:spec":{"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}}}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 
},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.233.37.229,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,TopologyKeys:[],IPFamilyPolicy:*SingleStack,ClusterIPs:[10.233.37.229],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:203.0.113.1,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},} Oct 30 01:16:00.058: INFO: Found Service test-service-wclct in namespace services-7615 with annotations: map[patchedstatus:true] & Conditions: [{StatusUpdate True 0 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] Oct 30 01:16:00.058: INFO: Service test-service-wclct has service status updated STEP: patching the service STEP: watching for the Service to be patched Oct 30 01:16:00.064: INFO: observed Service test-service-wclct in namespace services-7615 with labels: map[test-service-static:true] Oct 30 01:16:00.064: INFO: observed Service test-service-wclct in namespace services-7615 with labels: map[test-service-static:true] Oct 30 01:16:00.064: INFO: observed Service test-service-wclct in namespace services-7615 with labels: map[test-service-static:true] Oct 30 01:16:00.064: INFO: Found Service test-service-wclct in namespace services-7615 with labels: map[test-service:patched test-service-static:true] Oct 30 01:16:00.064: INFO: Service test-service-wclct patched STEP: deleting the service STEP: watching for the Service to be deleted Oct 30 01:16:00.072: INFO: Observed event: ADDED Oct 30 01:16:00.072: INFO: Observed event: MODIFIED Oct 30 01:16:00.072: INFO: Observed event: MODIFIED Oct 30 01:16:00.072: INFO: Observed event: MODIFIED Oct 30 01:16:00.072: INFO: Found Service test-service-wclct in namespace services-7615 with labels: map[test-service:patched test-service-static:true] & annotations: map[patchedstatus:true] Oct 30 01:16:00.072: INFO: Service test-service-wclct deleted [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:16:00.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7615" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":8,"skipped":174,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:15:56.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-8b70831a-39b5-4546-aaa3-20c8751b04e4 STEP: Creating a pod to test consume secrets Oct 30 01:15:56.820: INFO: Waiting up to 5m0s for pod "pod-secrets-5963efb3-02a3-4e40-bb65-58bb773925d8" in namespace "secrets-1742" to be "Succeeded or Failed" Oct 30 01:15:56.822: INFO: Pod "pod-secrets-5963efb3-02a3-4e40-bb65-58bb773925d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047739ms Oct 30 01:15:58.826: INFO: Pod "pod-secrets-5963efb3-02a3-4e40-bb65-58bb773925d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005926328s Oct 30 01:16:00.829: INFO: Pod "pod-secrets-5963efb3-02a3-4e40-bb65-58bb773925d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009013118s STEP: Saw pod success Oct 30 01:16:00.830: INFO: Pod "pod-secrets-5963efb3-02a3-4e40-bb65-58bb773925d8" satisfied condition "Succeeded or Failed" Oct 30 01:16:00.832: INFO: Trying to get logs from node node1 pod pod-secrets-5963efb3-02a3-4e40-bb65-58bb773925d8 container secret-volume-test: STEP: delete the pod Oct 30 01:16:00.904: INFO: Waiting for pod pod-secrets-5963efb3-02a3-4e40-bb65-58bb773925d8 to disappear Oct 30 01:16:00.906: INFO: Pod pod-secrets-5963efb3-02a3-4e40-bb65-58bb773925d8 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:16:00.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1742" for this suite. 
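------------------------------
The [sig-storage] Secrets spec just above consumes the secret as a non-root user with defaultMode and fsGroup set, so the projected files carry a restricted mode and the pod's supplemental group. A sketch of such a pod; the UID, GID, mode, and names are assumptions:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    mode := int32(0400) // expressed in octal
    uid, fsGroup := int64(1000), int64(1000)
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-nonroot"},
        Spec: corev1.PodSpec{
            SecurityContext: &corev1.PodSecurityContext{
                RunAsUser: &uid,
                FSGroup:   &fsGroup, // secret files become group-owned by this GID
            },
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:         "secret-volume-test",
                Image:        "busybox", // assumption
                Command:      []string{"sh", "-c", "ls -ln /etc/secret-volume"},
                VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "secret-volume",
                VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{
                    SecretName:  "secret-test",
                    DefaultMode: &mode,
                }},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
------------------------------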
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":265,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:16:00.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 30 01:16:00.112: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3d43f528-9990-4c13-9481-a4e527990c8b" in namespace "downward-api-2611" to be "Succeeded or Failed" Oct 30 01:16:00.115: INFO: Pod "downwardapi-volume-3d43f528-9990-4c13-9481-a4e527990c8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.490391ms Oct 30 01:16:02.117: INFO: Pod "downwardapi-volume-3d43f528-9990-4c13-9481-a4e527990c8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005050913s Oct 30 01:16:04.122: INFO: Pod "downwardapi-volume-3d43f528-9990-4c13-9481-a4e527990c8b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009877564s STEP: Saw pod success Oct 30 01:16:04.122: INFO: Pod "downwardapi-volume-3d43f528-9990-4c13-9481-a4e527990c8b" satisfied condition "Succeeded or Failed" Oct 30 01:16:04.124: INFO: Trying to get logs from node node1 pod downwardapi-volume-3d43f528-9990-4c13-9481-a4e527990c8b container client-container: STEP: delete the pod Oct 30 01:16:04.137: INFO: Waiting for pod downwardapi-volume-3d43f528-9990-4c13-9481-a4e527990c8b to disappear Oct 30 01:16:04.139: INFO: Pod downwardapi-volume-3d43f528-9990-4c13-9481-a4e527990c8b no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:16:04.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2611" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":176,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:16:04.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 30 01:16:04.244: INFO: Waiting up to 5m0s for pod "downwardapi-volume-102189d3-d6fd-4ec0-8837-a582ec08e359" in namespace "downward-api-4848" to be "Succeeded or Failed" Oct 30 01:16:04.246: INFO: Pod "downwardapi-volume-102189d3-d6fd-4ec0-8837-a582ec08e359": Phase="Pending", Reason="", readiness=false. Elapsed: 2.38404ms Oct 30 01:16:06.249: INFO: Pod "downwardapi-volume-102189d3-d6fd-4ec0-8837-a582ec08e359": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005349707s Oct 30 01:16:08.255: INFO: Pod "downwardapi-volume-102189d3-d6fd-4ec0-8837-a582ec08e359": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011271761s STEP: Saw pod success Oct 30 01:16:08.255: INFO: Pod "downwardapi-volume-102189d3-d6fd-4ec0-8837-a582ec08e359" satisfied condition "Succeeded or Failed" Oct 30 01:16:08.258: INFO: Trying to get logs from node node1 pod downwardapi-volume-102189d3-d6fd-4ec0-8837-a582ec08e359 container client-container: STEP: delete the pod Oct 30 01:16:08.274: INFO: Waiting for pod downwardapi-volume-102189d3-d6fd-4ec0-8837-a582ec08e359 to disappear Oct 30 01:16:08.276: INFO: Pod downwardapi-volume-102189d3-d6fd-4ec0-8837-a582ec08e359 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:16:08.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4848" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":207,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:15:41.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Oct 30 01:15:41.940: INFO: >>> kubeConfig: /root/.kube/config Oct 30 01:15:50.523: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:16:08.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9777" for this suite. • [SLOW TEST:26.807 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":20,"skipped":378,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:16:00.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2605.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2605.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2605.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2605.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2605.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2605.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2605.svc.cluster.local SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@_http._tcp.dns-test-service.dns-2605.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2605.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2605.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2605.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2605.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2605.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 243.7.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.7.243_udp@PTR;check="$$(dig +tcp +noall +answer +search 243.7.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.7.243_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2605.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2605.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2605.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2605.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2605.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2605.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2605.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2605.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2605.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2605.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2605.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2605.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2605.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 243.7.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.7.243_udp@PTR;check="$$(dig +tcp +noall +answer +search 243.7.233.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.233.7.243_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 30 01:16:06.972: INFO: Unable to read wheezy_udp@dns-test-service.dns-2605.svc.cluster.local from pod dns-2605/dns-test-2cfb42fc-362b-4af8-843f-315267328dbb: the server could not find the requested resource (get pods dns-test-2cfb42fc-362b-4af8-843f-315267328dbb) Oct 30 01:16:06.975: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2605.svc.cluster.local from pod dns-2605/dns-test-2cfb42fc-362b-4af8-843f-315267328dbb: the server could not find the requested resource (get pods dns-test-2cfb42fc-362b-4af8-843f-315267328dbb) Oct 30 01:16:06.978: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2605.svc.cluster.local from pod dns-2605/dns-test-2cfb42fc-362b-4af8-843f-315267328dbb: the server could not find the requested resource (get pods dns-test-2cfb42fc-362b-4af8-843f-315267328dbb) Oct 30 01:16:06.980: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2605.svc.cluster.local from pod dns-2605/dns-test-2cfb42fc-362b-4af8-843f-315267328dbb: the server could not find the requested resource (get pods dns-test-2cfb42fc-362b-4af8-843f-315267328dbb) Oct 30 01:16:06.997: INFO: Unable to read jessie_udp@dns-test-service.dns-2605.svc.cluster.local from pod dns-2605/dns-test-2cfb42fc-362b-4af8-843f-315267328dbb: the server could not find the requested resource (get pods dns-test-2cfb42fc-362b-4af8-843f-315267328dbb) Oct 30 01:16:07.000: INFO: Unable to read jessie_tcp@dns-test-service.dns-2605.svc.cluster.local from pod dns-2605/dns-test-2cfb42fc-362b-4af8-843f-315267328dbb: the server could not find the requested resource (get pods dns-test-2cfb42fc-362b-4af8-843f-315267328dbb) Oct 30 01:16:07.002: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2605.svc.cluster.local from pod dns-2605/dns-test-2cfb42fc-362b-4af8-843f-315267328dbb: the server could not find the requested resource (get pods dns-test-2cfb42fc-362b-4af8-843f-315267328dbb) Oct 30 01:16:07.005: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2605.svc.cluster.local from pod dns-2605/dns-test-2cfb42fc-362b-4af8-843f-315267328dbb: the server could not find the requested resource (get pods dns-test-2cfb42fc-362b-4af8-843f-315267328dbb) Oct 30 01:16:07.021: INFO: Lookups using dns-2605/dns-test-2cfb42fc-362b-4af8-843f-315267328dbb failed for: [wheezy_udp@dns-test-service.dns-2605.svc.cluster.local wheezy_tcp@dns-test-service.dns-2605.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2605.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2605.svc.cluster.local jessie_udp@dns-test-service.dns-2605.svc.cluster.local jessie_tcp@dns-test-service.dns-2605.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2605.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2605.svc.cluster.local] Oct 30 01:16:12.067: INFO: DNS probes using dns-2605/dns-test-2cfb42fc-362b-4af8-843f-315267328dbb succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:16:12.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2605" for this suite. 
• [SLOW TEST:11.177 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":-1,"completed":21,"skipped":266,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:15:12.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:16:12.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-40" for this suite. • [SLOW TEST:60.044 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":231,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:16:08.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-404e6ca9-da16-43c2-833c-703f855cc604 STEP: Creating a pod to test consume secrets Oct 30 01:16:08.384: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-36c85172-5b69-40ad-9a98-adf65769aab8" in namespace "projected-5379" to be "Succeeded or Failed" Oct 30 01:16:08.388: INFO: Pod "pod-projected-secrets-36c85172-5b69-40ad-9a98-adf65769aab8": Phase="Pending", Reason="", 
readiness=false. Elapsed: 4.063451ms Oct 30 01:16:10.391: INFO: Pod "pod-projected-secrets-36c85172-5b69-40ad-9a98-adf65769aab8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0070387s Oct 30 01:16:12.394: INFO: Pod "pod-projected-secrets-36c85172-5b69-40ad-9a98-adf65769aab8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01065728s STEP: Saw pod success Oct 30 01:16:12.394: INFO: Pod "pod-projected-secrets-36c85172-5b69-40ad-9a98-adf65769aab8" satisfied condition "Succeeded or Failed" Oct 30 01:16:12.397: INFO: Trying to get logs from node node2 pod pod-projected-secrets-36c85172-5b69-40ad-9a98-adf65769aab8 container projected-secret-volume-test: STEP: delete the pod Oct 30 01:16:12.409: INFO: Waiting for pod pod-projected-secrets-36c85172-5b69-40ad-9a98-adf65769aab8 to disappear Oct 30 01:16:12.411: INFO: Pod pod-projected-secrets-36c85172-5b69-40ad-9a98-adf65769aab8 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:16:12.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5379" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":247,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:16:12.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting the auto-created API token Oct 30 01:16:12.763: INFO: created pod pod-service-account-defaultsa Oct 30 01:16:12.764: INFO: pod pod-service-account-defaultsa service account token volume mount: true Oct 30 01:16:12.772: INFO: created pod pod-service-account-mountsa Oct 30 01:16:12.772: INFO: pod pod-service-account-mountsa service account token volume mount: true Oct 30 01:16:12.780: INFO: created pod pod-service-account-nomountsa Oct 30 01:16:12.780: INFO: pod pod-service-account-nomountsa service account token volume mount: false Oct 30 01:16:12.789: INFO: created pod pod-service-account-defaultsa-mountspec Oct 30 01:16:12.789: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Oct 30 01:16:12.797: INFO: created pod pod-service-account-mountsa-mountspec Oct 30 01:16:12.797: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Oct 30 01:16:12.806: INFO: created pod pod-service-account-nomountsa-mountspec Oct 30 01:16:12.806: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Oct 30 01:16:12.813: INFO: created pod pod-service-account-defaultsa-nomountspec Oct 30 01:16:12.814: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Oct 30 01:16:12.821: INFO: created pod pod-service-account-mountsa-nomountspec Oct 30 01:16:12.821: INFO: pod pod-service-account-mountsa-nomountspec service account token 
volume mount: false Oct 30 01:16:12.830: INFO: created pod pod-service-account-nomountsa-nomountspec Oct 30 01:16:12.830: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:16:12.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7558" for this suite. • ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":-1,"completed":15,"skipped":267,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:16:08.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir volume type on node default medium Oct 30 01:16:08.781: INFO: Waiting up to 5m0s for pod "pod-561d1992-73a7-4778-a9b5-37e8eccebe21" in namespace "emptydir-3517" to be "Succeeded or Failed" Oct 30 01:16:08.783: INFO: Pod "pod-561d1992-73a7-4778-a9b5-37e8eccebe21": Phase="Pending", Reason="", readiness=false. Elapsed: 1.936186ms Oct 30 01:16:10.789: INFO: Pod "pod-561d1992-73a7-4778-a9b5-37e8eccebe21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007623842s Oct 30 01:16:12.794: INFO: Pod "pod-561d1992-73a7-4778-a9b5-37e8eccebe21": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012167865s Oct 30 01:16:14.799: INFO: Pod "pod-561d1992-73a7-4778-a9b5-37e8eccebe21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017787473s STEP: Saw pod success Oct 30 01:16:14.799: INFO: Pod "pod-561d1992-73a7-4778-a9b5-37e8eccebe21" satisfied condition "Succeeded or Failed" Oct 30 01:16:14.802: INFO: Trying to get logs from node node1 pod pod-561d1992-73a7-4778-a9b5-37e8eccebe21 container test-container: STEP: delete the pod Oct 30 01:16:15.255: INFO: Waiting for pod pod-561d1992-73a7-4778-a9b5-37e8eccebe21 to disappear Oct 30 01:16:15.258: INFO: Pod pod-561d1992-73a7-4778-a9b5-37e8eccebe21 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:16:15.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3517" for this suite. 
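------------------------------
The ServiceAccounts spec earlier in this block creates nine pod variants to show that the pod-spec automountServiceAccountToken field, when set, overrides the ServiceAccount's setting, and that the default is to mount. A sketch of the two objects involved; names and image are assumptions:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    f := false
    // Opting out on the ServiceAccount applies to every pod that uses it
    // and does not set the field itself.
    sa := corev1.ServiceAccount{
        ObjectMeta:                   metav1.ObjectMeta{Name: "nomount-sa"},
        AutomountServiceAccountToken: &f,
    }
    // The pod-spec field, when non-nil, always wins over the ServiceAccount,
    // which is why "mountsa-nomountspec" above reports mount: false while
    // "nomountsa-mountspec" reports mount: true.
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-service-account-nomountspec"},
        Spec: corev1.PodSpec{
            ServiceAccountName:           "nomount-sa",
            AutomountServiceAccountToken: &f,
            Containers:                   []corev1.Container{{Name: "c", Image: "busybox"}}, // image is an assumption
        },
    }
    for _, obj := range []interface{}{sa, pod} {
        out, _ := json.MarshalIndent(obj, "", "  ")
        fmt.Println(string(out))
    }
}
------------------------------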
• [SLOW TEST:6.515 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":392,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:16:15.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check is all data is printed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:16:15.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5144 version' Oct 30 01:16:15.445: INFO: stderr: "" Oct 30 01:16:15.446: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"21\", GitVersion:\"v1.21.5\", GitCommit:\"aea7bbadd2fc0cd689de94a54e5b7b758869d691\", GitTreeState:\"clean\", BuildDate:\"2021-09-15T21:10:45Z\", GoVersion:\"go1.16.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"21\", GitVersion:\"v1.21.1\", GitCommit:\"5e58841cce77d4bc13713ad2b91fa0d961e69192\", GitTreeState:\"clean\", BuildDate:\"2021-05-12T14:12:29Z\", GoVersion:\"go1.16.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:16:15.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5144" for this suite. 
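The kubectl version check is just a subprocess call plus a string assertion on the two version stanzas. A sketch of the same call from Go, reusing the binary and kubeconfig paths printed in the log line above:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Shell out the same way the framework's log line shows:
	// /usr/local/bin/kubectl --kubeconfig=/root/.kube/config version
	out, err := exec.Command("/usr/local/bin/kubectl",
		"--kubeconfig=/root/.kube/config", "version").CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl version failed: %v\n%s", err, out)
	}
	// The assertion amounts to: both the Client Version and Server Version
	// stanzas are present in the captured stdout.
	fmt.Printf("%s", out)
}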
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":-1,"completed":22,"skipped":410,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:16:15.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 30 01:16:15.738: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 30 01:16:17.746: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153375, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153375, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153375, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153375, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:16:19.751: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153375, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153375, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153375, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153375, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:16:21.751: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153375, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153375, loc:(*time.Location)(0x9e12f00)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153375, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153375, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 30 01:16:24.757: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:16:24.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9503" for this suite. STEP: Destroying namespace "webhook-9503-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.336 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":23,"skipped":421,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:16:12.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-1740c23a-de5b-4ed0-b0a6-e3d9c3638aac STEP: Creating a pod to test consume configMaps Oct 30 01:16:12.879: INFO: Waiting up to 5m0s for pod "pod-configmaps-0866f5d6-04ff-4057-9d53-681f1d181edd" in namespace "configmap-833" to be "Succeeded or Failed" Oct 30 01:16:12.882: INFO: Pod "pod-configmaps-0866f5d6-04ff-4057-9d53-681f1d181edd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.295617ms Oct 30 01:16:14.885: INFO: Pod "pod-configmaps-0866f5d6-04ff-4057-9d53-681f1d181edd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00570269s Oct 30 01:16:16.892: INFO: Pod "pod-configmaps-0866f5d6-04ff-4057-9d53-681f1d181edd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.012310271s Oct 30 01:16:18.896: INFO: Pod "pod-configmaps-0866f5d6-04ff-4057-9d53-681f1d181edd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016530195s Oct 30 01:16:20.899: INFO: Pod "pod-configmaps-0866f5d6-04ff-4057-9d53-681f1d181edd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019622419s Oct 30 01:16:22.904: INFO: Pod "pod-configmaps-0866f5d6-04ff-4057-9d53-681f1d181edd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.02503636s Oct 30 01:16:24.910: INFO: Pod "pod-configmaps-0866f5d6-04ff-4057-9d53-681f1d181edd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.030586901s STEP: Saw pod success Oct 30 01:16:24.910: INFO: Pod "pod-configmaps-0866f5d6-04ff-4057-9d53-681f1d181edd" satisfied condition "Succeeded or Failed" Oct 30 01:16:24.912: INFO: Trying to get logs from node node2 pod pod-configmaps-0866f5d6-04ff-4057-9d53-681f1d181edd container agnhost-container: STEP: delete the pod Oct 30 01:16:24.935: INFO: Waiting for pod pod-configmaps-0866f5d6-04ff-4057-9d53-681f1d181edd to disappear Oct 30 01:16:24.937: INFO: Pod pod-configmaps-0866f5d6-04ff-4057-9d53-681f1d181edd no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:16:24.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-833" for this suite. • [SLOW TEST:12.098 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":270,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:16:12.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6471.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-6471.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6471.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6471.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6471.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-6471.svc.cluster.local;check="$$(dig +tcp 
+noall +answer +search dns-test-service-2.dns-6471.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-6471.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6471.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6471.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-6471.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6471.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-6471.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6471.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-6471.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6471.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-6471.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6471.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 30 01:16:20.511: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6471.svc.cluster.local from pod dns-6471/dns-test-f3d97411-c975-48a7-9b90-7fd65c91ad42: the server could not find the requested resource (get pods dns-test-f3d97411-c975-48a7-9b90-7fd65c91ad42) Oct 30 01:16:20.514: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6471.svc.cluster.local from pod dns-6471/dns-test-f3d97411-c975-48a7-9b90-7fd65c91ad42: the server could not find the requested resource (get pods dns-test-f3d97411-c975-48a7-9b90-7fd65c91ad42) Oct 30 01:16:20.517: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6471.svc.cluster.local from pod dns-6471/dns-test-f3d97411-c975-48a7-9b90-7fd65c91ad42: the server could not find the requested resource (get pods dns-test-f3d97411-c975-48a7-9b90-7fd65c91ad42) Oct 30 01:16:20.519: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6471.svc.cluster.local from pod dns-6471/dns-test-f3d97411-c975-48a7-9b90-7fd65c91ad42: the server could not find the requested resource (get pods dns-test-f3d97411-c975-48a7-9b90-7fd65c91ad42) Oct 30 01:16:20.526: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6471.svc.cluster.local from pod dns-6471/dns-test-f3d97411-c975-48a7-9b90-7fd65c91ad42: the server could not find the requested resource (get pods dns-test-f3d97411-c975-48a7-9b90-7fd65c91ad42) Oct 30 01:16:20.529: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6471.svc.cluster.local from pod dns-6471/dns-test-f3d97411-c975-48a7-9b90-7fd65c91ad42: the server could not find the requested 
resource (get pods dns-test-f3d97411-c975-48a7-9b90-7fd65c91ad42) Oct 30 01:16:20.531: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6471.svc.cluster.local from pod dns-6471/dns-test-f3d97411-c975-48a7-9b90-7fd65c91ad42: the server could not find the requested resource (get pods dns-test-f3d97411-c975-48a7-9b90-7fd65c91ad42) Oct 30 01:16:20.534: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6471.svc.cluster.local from pod dns-6471/dns-test-f3d97411-c975-48a7-9b90-7fd65c91ad42: the server could not find the requested resource (get pods dns-test-f3d97411-c975-48a7-9b90-7fd65c91ad42) Oct 30 01:16:20.539: INFO: Lookups using dns-6471/dns-test-f3d97411-c975-48a7-9b90-7fd65c91ad42 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6471.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6471.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6471.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6471.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6471.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6471.svc.cluster.local jessie_udp@dns-test-service-2.dns-6471.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6471.svc.cluster.local] Oct 30 01:16:25.578: INFO: DNS probes using dns-6471/dns-test-f3d97411-c975-48a7-9b90-7fd65c91ad42 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:16:25.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6471" for this suite. • [SLOW TEST:13.147 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":12,"skipped":264,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:16:12.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ReplicationController STEP: waiting for RC to be added STEP: waiting for available Replicas STEP: patching ReplicationController STEP: waiting for RC to be modified STEP: patching ReplicationController status STEP: waiting for RC to be modified STEP: waiting for available Replicas STEP: fetching ReplicationController status STEP: patching ReplicationController scale STEP: waiting for RC to be modified STEP: waiting for ReplicationController's scale to be the max amount STEP: fetching ReplicationController; ensuring that it's patched STEP: updating 
ReplicationController status STEP: waiting for RC to be modified STEP: listing all ReplicationControllers STEP: checking that ReplicationController has expected values STEP: deleting ReplicationControllers by collection STEP: waiting for ReplicationController to have a DELETED watchEvent [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:16:26.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2987" for this suite. • [SLOW TEST:13.829 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":22,"skipped":314,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:16:24.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:16:24.914: INFO: The status of Pod pod-secrets-9b2093c6-e3c3-4bb8-a44a-b1f3090b7e93 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:16:26.918: INFO: The status of Pod pod-secrets-9b2093c6-e3c3-4bb8-a44a-b1f3090b7e93 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:16:28.919: INFO: The status of Pod pod-secrets-9b2093c6-e3c3-4bb8-a44a-b1f3090b7e93 is Running (Ready = true) STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:16:28.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7883" for this suite. 
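The wrapper-volume spec mounts a Secret volume and a ConfigMap volume side by side in one pod and checks that the two atomic-writer mounts do not conflict. A sketch of that pod spec as corev1 types; the object names are illustrative, not the suite's generated ones:

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Two volume types backed by the same atomic-writer machinery,
	// mounted at different paths in one container.
	spec := corev1.PodSpec{
		Containers: []corev1.Container{{
			Name:  "pod-secrets",
			Image: "busybox:1.34",
			VolumeMounts: []corev1.VolumeMount{
				{Name: "secret-vol", MountPath: "/etc/secret"},
				{Name: "cm-vol", MountPath: "/etc/config"},
			},
		}},
		Volumes: []corev1.Volume{
			{Name: "secret-vol", VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "wrapper-secret"}}},
			{Name: "cm-vol", VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "wrapper-configmap"}}}},
		},
	}
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	if err := enc.Encode(spec); err != nil {
		panic(err)
	}
}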
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":24,"skipped":450,"failed":0} S ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:16:24.969: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service endpoint-test2 in namespace services-7263 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7263 to expose endpoints map[] Oct 30 01:16:25.000: INFO: successfully validated that service endpoint-test2 in namespace services-7263 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-7263 Oct 30 01:16:25.014: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:16:27.018: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:16:29.019: INFO: The status of Pod pod1 is Running (Ready = true) STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7263 to expose endpoints map[pod1:[80]] Oct 30 01:16:29.029: INFO: successfully validated that service endpoint-test2 in namespace services-7263 exposes endpoints map[pod1:[80]] STEP: Creating pod pod2 in namespace services-7263 Oct 30 01:16:29.041: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:16:31.043: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:16:33.044: INFO: The status of Pod pod2 is Running (Ready = true) STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7263 to expose endpoints map[pod1:[80] pod2:[80]] Oct 30 01:16:33.059: INFO: successfully validated that service endpoint-test2 in namespace services-7263 exposes endpoints map[pod1:[80] pod2:[80]] STEP: Deleting pod pod1 in namespace services-7263 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7263 to expose endpoints map[pod2:[80]] Oct 30 01:16:33.074: INFO: successfully validated that service endpoint-test2 in namespace services-7263 exposes endpoints map[pod2:[80]] STEP: Deleting pod pod2 in namespace services-7263 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7263 to expose endpoints map[] Oct 30 01:16:33.083: INFO: successfully validated that service endpoint-test2 in namespace services-7263 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:16:33.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7263" for this suite. 
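Behind each "waiting up to 3m0s for service endpoint-test2 ... to expose endpoints" record is a poll of the service's Endpoints object. A simplified sketch, assuming the namespace and service name from the log; the suite matches pod names to ports, whereas this version only counts ready addresses (1 after pod1, 2 after pod2, back to 0 after both deletions):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	want := 1
	err = wait.PollImmediate(time.Second, 3*time.Minute, func() (bool, error) {
		ep, err := cs.CoreV1().Endpoints("services-7263").Get(context.TODO(),
			"endpoint-test2", metav1.GetOptions{})
		if err != nil {
			return false, nil // tolerate not-found while the service settles
		}
		got := 0
		for _, ss := range ep.Subsets {
			got += len(ss.Addresses)
		}
		return got == want, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("endpoints match")
}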
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:8.130 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":-1,"completed":17,"skipped":280,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:15:54.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-7007 [It] should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating statefulset ss in namespace statefulset-7007 Oct 30 01:15:54.554: INFO: Found 0 stateful pods, waiting for 1 Oct 30 01:16:04.559: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified STEP: Patch a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Oct 30 01:16:04.579: INFO: Deleting all statefulset in ns statefulset-7007 Oct 30 01:16:04.580: INFO: Scaling statefulset ss to 0 Oct 30 01:16:34.594: INFO: Waiting for statefulset status.replicas updated to 0 Oct 30 01:16:34.595: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:16:34.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7007" for this suite. 
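The scale subresource exercised above is a separate autoscaling/v1 Scale object served under the StatefulSet; the spec reads it, updates it, then patches it. A sketch of the read-modify-update half via client-go, using the names from this run:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	sts := cs.AppsV1().StatefulSets("statefulset-7007")
	// GET the scale subresource: an autoscaling/v1 Scale, not the StatefulSet.
	scale, err := sts.GetScale(ctx, "ss", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Writing replicas through the subresource updates spec.replicas on
	// the parent object, which is what the spec then verifies.
	scale.Spec.Replicas = 2
	if _, err := sts.UpdateScale(ctx, "ss", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("scaled ss to", scale.Spec.Replicas)
}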
• [SLOW TEST:40.086 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":17,"skipped":258,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:16:28.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:16:28.968: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:16:37.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2947" for this suite. 
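CRD defaulting happens in two places, and the spec name calls out both: the API server fills the default into requests that omit the field, and again when it decodes older objects from storage. A sketch of a structural schema carrying such a default, as apiextensions/v1 Go types; the field and value are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

func main() {
	schema := apiextensionsv1.JSONSchemaProps{
		Type: "object",
		Properties: map[string]apiextensionsv1.JSONSchemaProps{
			"spec": {
				Type: "object",
				Properties: map[string]apiextensionsv1.JSONSchemaProps{
					"color": {
						Type: "string",
						// Applied on create/update when omitted, and on
						// read from storage for older stored objects.
						Default: &apiextensionsv1.JSON{Raw: []byte(`"red"`)},
					},
				},
			},
		},
	}
	out, err := json.MarshalIndent(schema, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}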
• [SLOW TEST:8.138 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":-1,"completed":25,"skipped":451,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:16:33.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-projected-all-test-volume-4f28d12f-6c37-4d37-8263-a599638a8928 STEP: Creating secret with name secret-projected-all-test-volume-19483ecd-c94f-47b2-884d-eb7ccf6d95fe STEP: Creating a pod to test Check all projections for projected volume plugin Oct 30 01:16:33.203: INFO: Waiting up to 5m0s for pod "projected-volume-cab0e8c4-ffd6-4aa6-9dbb-d935429f98c2" in namespace "projected-9290" to be "Succeeded or Failed" Oct 30 01:16:33.208: INFO: Pod "projected-volume-cab0e8c4-ffd6-4aa6-9dbb-d935429f98c2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.333672ms Oct 30 01:16:35.211: INFO: Pod "projected-volume-cab0e8c4-ffd6-4aa6-9dbb-d935429f98c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008141702s Oct 30 01:16:37.214: INFO: Pod "projected-volume-cab0e8c4-ffd6-4aa6-9dbb-d935429f98c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010900473s STEP: Saw pod success Oct 30 01:16:37.214: INFO: Pod "projected-volume-cab0e8c4-ffd6-4aa6-9dbb-d935429f98c2" satisfied condition "Succeeded or Failed" Oct 30 01:16:37.216: INFO: Trying to get logs from node node2 pod projected-volume-cab0e8c4-ffd6-4aa6-9dbb-d935429f98c2 container projected-all-volume-test: STEP: delete the pod Oct 30 01:16:37.227: INFO: Waiting for pod projected-volume-cab0e8c4-ffd6-4aa6-9dbb-d935429f98c2 to disappear Oct 30 01:16:37.229: INFO: Pod projected-volume-cab0e8c4-ffd6-4aa6-9dbb-d935429f98c2 no longer exists [AfterEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:16:37.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9290" for this suite. 
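A projected volume folds several sources into one mount, which is what "all components that make up the projection API" refers to. A sketch of such a volume with ConfigMap, Secret, and downward-API projections; the object names echo the log but are otherwise illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "projected-all",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-projected-all-test-volume"}}},
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "secret-projected-all-test-volume"}}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							// Expose the pod's own name as a file in the mount.
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					}},
				},
			},
		},
	}
	out, err := json.MarshalIndent(vol, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}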
• ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":313,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:16:37.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Oct 30 01:16:40.196: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:16:40.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6311" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":488,"failed":0} SS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:16:40.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Oct 30 01:16:40.243: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:16:45.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8915" for this suite. 
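Init containers run sequentially, each to completion, before the app containers start; with RestartPolicy Never a failing init container fails the whole pod rather than being retried. A sketch of the pod shape this spec builds around, with illustrative names and image:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	spec := corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		// init1 must exit 0 before init2 starts; both before run1 starts.
		InitContainers: []corev1.Container{
			{Name: "init1", Image: "busybox:1.34", Command: []string{"true"}},
			{Name: "init2", Image: "busybox:1.34", Command: []string{"true"}},
		},
		Containers: []corev1.Container{
			{Name: "run1", Image: "busybox:1.34", Command: []string{"true"}},
		},
	}
	out, err := json.MarshalIndent(spec, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}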
• [SLOW TEST:5.435 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":27,"skipped":490,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:16:34.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Oct 30 01:16:35.185: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Oct 30 01:16:37.193: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153395, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153395, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153395, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153395, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 30 01:16:40.204: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:16:40.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:16:48.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-9750" for this suite. 
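The conversion webhook is wired into the CRD itself through spec.conversion, so that listing a non-homogeneous set of CRs in either version forces round-trips through the webhook. A sketch of that stanza as apiextensions/v1 Go types; the service name and namespace echo the log, while the path and port are assumptions:

package main

import (
	"encoding/json"
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

func main() {
	path := "/crdconvert" // assumed path; the suite wires its own handler
	port := int32(9443)   // assumed port
	conv := apiextensionsv1.CustomResourceConversion{
		Strategy: apiextensionsv1.WebhookConverter,
		Webhook: &apiextensionsv1.WebhookConversion{
			ClientConfig: &apiextensionsv1.WebhookClientConfig{
				Service: &apiextensionsv1.ServiceReference{
					Namespace: "crd-webhook-9750",
					Name:      "e2e-test-crd-conversion-webhook",
					Path:      &path,
					Port:      &port,
				},
				// CABundle: the server cert set up in the BeforeEach goes here.
			},
			ConversionReviewVersions: []string{"v1", "v1beta1"},
		},
	}
	out, err := json.MarshalIndent(conv, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}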
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:13.683 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:15:39.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W1030 01:15:49.145933 32 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 01:16:51.161: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. Oct 30 01:16:51.161: INFO: Deleting pod "simpletest-rc-to-be-deleted-5ttr5" in namespace "gc-7666" Oct 30 01:16:51.168: INFO: Deleting pod "simpletest-rc-to-be-deleted-8gtfm" in namespace "gc-7666" Oct 30 01:16:51.173: INFO: Deleting pod "simpletest-rc-to-be-deleted-bkpdm" in namespace "gc-7666" Oct 30 01:16:51.180: INFO: Deleting pod "simpletest-rc-to-be-deleted-cg4ss" in namespace "gc-7666" Oct 30 01:16:51.187: INFO: Deleting pod "simpletest-rc-to-be-deleted-dkfwg" in namespace "gc-7666" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:16:51.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7666" for this suite. 
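The garbage-collector spec hinges on two things: a second ownerReference patched onto half the pods, and a foreground delete of the first RC, which is the "owner that's waiting for dependents" in the spec name. A sketch of the foreground delete, assuming the namespace and RC name from the log; with a second, live owner still referenced, those pods survive collection:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Foreground propagation: the RC gets a deletion timestamp plus the
	// foregroundDeletion finalizer and is only removed once its dependents
	// are gone. Pods that also list the surviving RC as an owner are not
	// garbage collected.
	fg := metav1.DeletePropagationForeground
	err = cs.CoreV1().ReplicationControllers("gc-7666").Delete(context.TODO(),
		"simpletest-rc-to-be-deleted", metav1.DeleteOptions{PropagationPolicy: &fg})
	if err != nil {
		panic(err)
	}
	fmt.Println("foreground delete issued")
}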
• [SLOW TEST:72.144 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":18,"skipped":254,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:16:45.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4049.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-4049.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4049.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4049.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-4049.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4049.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 30 01:16:51.731: INFO: DNS probes using dns-4049/dns-test-e89d062b-adee-4a1c-8b02-d4fade09ae7f succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:16:51.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4049" for this suite. 
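The probe scripts above derive each pod's DNS A record from its IP with an awk pipeline (dots replaced by dashes, then namespace and pod suffix appended). The same transformation in Go, with an illustrative IP:

package main

import (
	"fmt"
	"strings"
)

// podARecord mirrors the awk pipeline in the probe script: a pod IP such
// as 10.244.3.27 in namespace dns-4049 is addressable as
// 10-244-3-27.dns-4049.pod.cluster.local.
func podARecord(ip, namespace string) string {
	return strings.ReplaceAll(ip, ".", "-") + "." + namespace + ".pod.cluster.local"
}

func main() {
	fmt.Println(podARecord("10.244.3.27", "dns-4049"))
}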
• [SLOW TEST:6.086 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":28,"skipped":491,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:16:26.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-configmap-h6ch STEP: Creating a pod to test atomic-volume-subpath Oct 30 01:16:26.081: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-h6ch" in namespace "subpath-6836" to be "Succeeded or Failed" Oct 30 01:16:26.083: INFO: Pod "pod-subpath-test-configmap-h6ch": Phase="Pending", Reason="", readiness=false. Elapsed: 2.535014ms Oct 30 01:16:28.087: INFO: Pod "pod-subpath-test-configmap-h6ch": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005779187s Oct 30 01:16:30.091: INFO: Pod "pod-subpath-test-configmap-h6ch": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00992123s Oct 30 01:16:32.094: INFO: Pod "pod-subpath-test-configmap-h6ch": Phase="Running", Reason="", readiness=true. Elapsed: 6.012752594s Oct 30 01:16:34.096: INFO: Pod "pod-subpath-test-configmap-h6ch": Phase="Running", Reason="", readiness=true. Elapsed: 8.015375236s Oct 30 01:16:36.100: INFO: Pod "pod-subpath-test-configmap-h6ch": Phase="Running", Reason="", readiness=true. Elapsed: 10.019252695s Oct 30 01:16:38.104: INFO: Pod "pod-subpath-test-configmap-h6ch": Phase="Running", Reason="", readiness=true. Elapsed: 12.022685196s Oct 30 01:16:40.109: INFO: Pod "pod-subpath-test-configmap-h6ch": Phase="Running", Reason="", readiness=true. Elapsed: 14.028034636s Oct 30 01:16:42.112: INFO: Pod "pod-subpath-test-configmap-h6ch": Phase="Running", Reason="", readiness=true. Elapsed: 16.031075584s Oct 30 01:16:44.116: INFO: Pod "pod-subpath-test-configmap-h6ch": Phase="Running", Reason="", readiness=true. Elapsed: 18.03477573s Oct 30 01:16:46.118: INFO: Pod "pod-subpath-test-configmap-h6ch": Phase="Running", Reason="", readiness=true. Elapsed: 20.037553876s Oct 30 01:16:48.123: INFO: Pod "pod-subpath-test-configmap-h6ch": Phase="Running", Reason="", readiness=true. Elapsed: 22.042565056s Oct 30 01:16:50.127: INFO: Pod "pod-subpath-test-configmap-h6ch": Phase="Running", Reason="", readiness=true. Elapsed: 24.046450558s Oct 30 01:16:52.131: INFO: Pod "pod-subpath-test-configmap-h6ch": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.050242315s STEP: Saw pod success Oct 30 01:16:52.131: INFO: Pod "pod-subpath-test-configmap-h6ch" satisfied condition "Succeeded or Failed" Oct 30 01:16:52.134: INFO: Trying to get logs from node node2 pod pod-subpath-test-configmap-h6ch container test-container-subpath-configmap-h6ch: STEP: delete the pod Oct 30 01:16:52.144: INFO: Waiting for pod pod-subpath-test-configmap-h6ch to disappear Oct 30 01:16:52.147: INFO: Pod pod-subpath-test-configmap-h6ch no longer exists STEP: Deleting pod pod-subpath-test-configmap-h6ch Oct 30 01:16:52.147: INFO: Deleting pod "pod-subpath-test-configmap-h6ch" in namespace "subpath-6836" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:16:52.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6836" for this suite. • [SLOW TEST:26.113 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":23,"skipped":333,"failed":0} SSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:16:51.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 30 01:16:51.609: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 30 01:16:53.618: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153411, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153411, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153411, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153411, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, 
CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 30 01:16:56.626: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:16:56.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1953" for this suite. STEP: Destroying namespace "webhook-1953-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.486 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":19,"skipped":258,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:16:51.813: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in container's command Oct 30 01:16:51.849: INFO: Waiting up to 5m0s for pod "var-expansion-257fde3b-72d8-4134-8621-f6324eb371f2" in namespace "var-expansion-5261" to be "Succeeded or Failed" Oct 30 01:16:51.851: INFO: Pod "var-expansion-257fde3b-72d8-4134-8621-f6324eb371f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.54001ms Oct 30 01:16:53.855: INFO: Pod "var-expansion-257fde3b-72d8-4134-8621-f6324eb371f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006649496s Oct 30 01:16:55.859: INFO: Pod "var-expansion-257fde3b-72d8-4134-8621-f6324eb371f2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.010264883s Oct 30 01:16:57.863: INFO: Pod "var-expansion-257fde3b-72d8-4134-8621-f6324eb371f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014431558s STEP: Saw pod success Oct 30 01:16:57.863: INFO: Pod "var-expansion-257fde3b-72d8-4134-8621-f6324eb371f2" satisfied condition "Succeeded or Failed" Oct 30 01:16:57.866: INFO: Trying to get logs from node node2 pod var-expansion-257fde3b-72d8-4134-8621-f6324eb371f2 container dapi-container: STEP: delete the pod Oct 30 01:16:58.031: INFO: Waiting for pod var-expansion-257fde3b-72d8-4134-8621-f6324eb371f2 to disappear Oct 30 01:16:58.034: INFO: Pod var-expansion-257fde3b-72d8-4134-8621-f6324eb371f2 no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:16:58.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5261" for this suite. • [SLOW TEST:6.228 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":526,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:16:56.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 30 01:16:57.281: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 30 01:16:59.290: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153417, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153417, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153417, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153417, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:17:01.294: INFO: deployment 
status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153417, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153417, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153417, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153417, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 30 01:17:04.301: INFO: Waiting for number of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Oct 30 01:17:04.316: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:17:04.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2635" for this suite. STEP: Destroying namespace "webhook-2635-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.651 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":20,"skipped":263,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:16:52.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota (validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:17:05.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8254" for this suite. • [SLOW TEST:13.096 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":-1,"completed":24,"skipped":336,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:14:52.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service nodeport-test with type=NodePort in namespace services-6896 STEP: creating replication controller nodeport-test in namespace services-6896 I1030 01:14:52.352392 27 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-6896, replica count: 2 I1030 01:14:55.404608 27 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1030 01:14:58.407441 27 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 30 01:14:58.407: INFO: Creating new exec pod Oct 30 01:15:03.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' Oct 30 01:15:03.910: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" Oct 30 01:15:03.910: INFO: stdout: "nodeport-test-2tzcl" Oct 30 01:15:03.910: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.2.248 80' Oct 30 01:15:05.461: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.2.248 80\nConnection to 10.233.2.248 80 port [tcp/http] succeeded!\n" Oct 30 01:15:05.461: INFO: stdout: "nodeport-test-2tzcl" Oct 30 01:15:05.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:15:05.737: INFO: rc: 1 Oct 30 01:15:05.737: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:15:06.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:15:06.982: INFO: rc: 1 Oct 30 01:15:06.982: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:15:07.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:15:08.275: INFO: rc: 1 Oct 30 01:15:08.275: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:15:08.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:15:09.402: INFO: rc: 1 Oct 30 01:15:09.402: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
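The probe that keeps failing above is nothing more than an exec into the client pod followed by a short netcat dial, so it can be reproduced by hand while the loop is running. A minimal equivalent, reusing the namespace, exec pod, node IP, and nodePort from this log (all specific to this run), would be:

  # Re-run the e2e reachability check manually. Exit status 0 plus a backend
  # hostname on stdout means the nodePort is serving; "Connection refused"
  # means nothing is answering on that port on the node yet.
  kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 \
    exec execpodtvtst -- /bin/sh -x -c 'echo hostName | nc -v -t -w 2 10.10.190.207 31616'

A refused connection on a freshly allocated NodePort usually means kube-proxy on that node has not yet programmed the forwarding rules for the new Service, which is why the framework retries once per second instead of failing outright.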
[Retries elided: the identical kubectl exec / nc probe against 10.10.190.207 port 31616 was rerun roughly once per second from Oct 30 01:15:09.738 through Oct 30 01:16:22.023, and every attempt failed the same way: rc: 1, stderr "nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused", "command terminated with exit code 1", error: exit status 1, Retrying...]
Oct 30 01:16:22.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:16:23.001: INFO: rc: 1 Oct 30 01:16:23.001: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:16:23.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:16:23.983: INFO: rc: 1 Oct 30 01:16:23.983: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:16:24.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:16:24.998: INFO: rc: 1 Oct 30 01:16:24.998: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:16:25.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:16:25.980: INFO: rc: 1 Oct 30 01:16:25.980: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:16:26.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:16:26.992: INFO: rc: 1 Oct 30 01:16:26.992: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:16:27.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:16:27.980: INFO: rc: 1 Oct 30 01:16:27.980: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:16:28.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:16:29.014: INFO: rc: 1 Oct 30 01:16:29.014: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:16:29.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:16:30.071: INFO: rc: 1 Oct 30 01:16:30.071: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:16:30.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:16:30.997: INFO: rc: 1 Oct 30 01:16:30.997: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:16:31.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:16:31.986: INFO: rc: 1 Oct 30 01:16:31.986: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:16:32.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:16:32.989: INFO: rc: 1 Oct 30 01:16:32.989: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:16:33.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:16:33.993: INFO: rc: 1 Oct 30 01:16:33.993: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:16:34.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:16:34.961: INFO: rc: 1 Oct 30 01:16:34.961: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:16:35.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:16:35.982: INFO: rc: 1 Oct 30 01:16:35.982: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:16:36.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:16:36.990: INFO: rc: 1 Oct 30 01:16:36.990: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:16:37.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:16:37.997: INFO: rc: 1 Oct 30 01:16:37.997: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31616 + echo hostName nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:16:38.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:16:38.999: INFO: rc: 1 Oct 30 01:16:38.999: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:16:39.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:16:39.983: INFO: rc: 1 Oct 30 01:16:39.983: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:16:40.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:16:41.094: INFO: rc: 1 Oct 30 01:16:41.094: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:16:41.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:16:42.691: INFO: rc: 1 Oct 30 01:16:42.691: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:16:42.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:16:43.328: INFO: rc: 1 Oct 30 01:16:43.328: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:16:43.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:16:43.981: INFO: rc: 1 Oct 30 01:16:43.981: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:16:44.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:16:44.995: INFO: rc: 1 Oct 30 01:16:44.995: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:16:45.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:16:46.067: INFO: rc: 1 Oct 30 01:16:46.067: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:16:46.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:16:46.978: INFO: rc: 1 Oct 30 01:16:46.978: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:16:47.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:16:47.986: INFO: rc: 1 Oct 30 01:16:47.986: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:16:48.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:16:49.105: INFO: rc: 1 Oct 30 01:16:49.105: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:16:49.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:16:49.990: INFO: rc: 1 Oct 30 01:16:49.990: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:16:50.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:16:50.998: INFO: rc: 1 Oct 30 01:16:50.998: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:16:51.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:16:52.040: INFO: rc: 1 Oct 30 01:16:52.040: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:16:52.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:16:53.057: INFO: rc: 1 Oct 30 01:16:53.057: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:16:53.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:16:54.328: INFO: rc: 1 Oct 30 01:16:54.328: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:16:54.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:16:55.017: INFO: rc: 1 Oct 30 01:16:55.017: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:16:55.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:16:55.976: INFO: rc: 1 Oct 30 01:16:55.976: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:16:56.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:16:56.963: INFO: rc: 1 Oct 30 01:16:56.963: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:16:57.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:16:57.997: INFO: rc: 1 Oct 30 01:16:57.997: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:16:58.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:16:58.982: INFO: rc: 1 Oct 30 01:16:58.982: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:16:59.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:16:59.983: INFO: rc: 1 Oct 30 01:16:59.983: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:17:00.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:17:00.961: INFO: rc: 1 Oct 30 01:17:00.961: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:17:01.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:17:01.972: INFO: rc: 1 Oct 30 01:17:01.972: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:17:02.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:17:02.976: INFO: rc: 1 Oct 30 01:17:02.976: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:17:03.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:17:03.989: INFO: rc: 1 Oct 30 01:17:03.989: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:17:04.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:17:05.120: INFO: rc: 1 Oct 30 01:17:05.120: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:17:05.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:17:06.051: INFO: rc: 1 Oct 30 01:17:06.051: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:17:06.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616' Oct 30 01:17:06.269: INFO: rc: 1 Oct 30 01:17:06.269: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6896 exec execpodtvtst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31616: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31616 nc: connect to 10.10.190.207 port 31616 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
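The loop above is a reachability poll: the suite re-runs the same kubectl/nc probe about once per second until the NodePort endpoint answers or an overall two-minute deadline expires. A minimal Go sketch of that pattern follows; it is illustrative only, not the framework's actual implementation, and the function name and constants are assumptions taken from what the log shows.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForTCP dials addr repeatedly until a connection succeeds or the
// overall timeout elapses, mirroring the "Retrying..." loop in the log.
func waitForTCP(addr string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Per-attempt timeout of 2s, like `nc -w 2` in the probe above.
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		fmt.Printf("dial %s failed: %v; retrying...\n", addr, err)
		time.Sleep(interval)
	}
	return fmt.Errorf("service is not reachable within %v timeout on endpoint %s over TCP protocol", timeout, addr)
}

func main() {
	if err := waitForTCP("10.10.190.207:31616", time.Second, 2*time.Minute); err != nil {
		fmt.Println("FAIL:", err)
	}
}
```

Against this cluster the sketch would print the same "Connection refused" dials for two minutes and then the timeout error, which is exactly the failure reported next.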
Oct 30 01:17:06.269: FAIL: Unexpected error:
    <*errors.errorString | 0xc004423380>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31616 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31616 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.11()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169 +0x265
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000269800)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc000269800)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc000269800, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-6896".
STEP: Found 17 events.
Oct 30 01:17:06.275: INFO: At 2021-10-30 01:14:52 +0000 UTC - event for nodeport-test: {replication-controller } SuccessfulCreate: Created pod: nodeport-test-2tzcl
Oct 30 01:17:06.275: INFO: At 2021-10-30 01:14:52 +0000 UTC - event for nodeport-test: {replication-controller } SuccessfulCreate: Created pod: nodeport-test-fsrr5
Oct 30 01:17:06.275: INFO: At 2021-10-30 01:14:52 +0000 UTC - event for nodeport-test-2tzcl: {default-scheduler } Scheduled: Successfully assigned services-6896/nodeport-test-2tzcl to node1
Oct 30 01:17:06.275: INFO: At 2021-10-30 01:14:52 +0000 UTC - event for nodeport-test-fsrr5: {default-scheduler } Scheduled: Successfully assigned services-6896/nodeport-test-fsrr5 to node2
Oct 30 01:17:06.275: INFO: At 2021-10-30 01:14:53 +0000 UTC - event for nodeport-test-fsrr5: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 01:17:06.275: INFO: At 2021-10-30 01:14:54 +0000 UTC - event for nodeport-test-2tzcl: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 01:17:06.275: INFO: At 2021-10-30 01:14:55 +0000 UTC - event for nodeport-test-fsrr5: {kubelet node2} Created: Created container nodeport-test
Oct 30 01:17:06.275: INFO: At 2021-10-30 01:14:55 +0000 UTC - event for nodeport-test-fsrr5: {kubelet node2} Started: Started container nodeport-test
Oct 30 01:17:06.275: INFO: At 2021-10-30 01:14:55 +0000 UTC - event for nodeport-test-fsrr5: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 1.451958667s
Oct 30 01:17:06.275: INFO: At 2021-10-30 01:14:56 +0000 UTC - event for nodeport-test-2tzcl: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 1.828282271s
Oct 30 01:17:06.275: INFO: At 2021-10-30 01:14:56 +0000 UTC - event for nodeport-test-2tzcl: {kubelet node1} Started: Started container nodeport-test
Oct 30 01:17:06.275: INFO: At 2021-10-30 01:14:56 +0000 UTC - event for nodeport-test-2tzcl: {kubelet node1} Created: Created container nodeport-test
Oct 30 01:17:06.275: INFO: At 2021-10-30 01:14:58 +0000 UTC - event for execpodtvtst: {default-scheduler } Scheduled: Successfully assigned services-6896/execpodtvtst to node1
Oct 30 01:17:06.275: INFO: At 2021-10-30 01:15:00 +0000 UTC - event for execpodtvtst: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 01:17:06.275: INFO: At 2021-10-30 01:15:01 +0000 UTC - event for execpodtvtst: {kubelet node1} Started: Started container agnhost-container
Oct 30 01:17:06.275: INFO: At 2021-10-30 01:15:01 +0000 UTC - event for execpodtvtst: {kubelet node1} Created: Created container agnhost-container
Oct 30 01:17:06.275: INFO: At 2021-10-30 01:15:01 +0000 UTC - event for execpodtvtst: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 497.021395ms
Oct 30 01:17:06.278: INFO: POD                  NODE   PHASE    GRACE  CONDITIONS
Oct 30 01:17:06.278: INFO: execpodtvtst         node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:14:58 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:15:01 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:15:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:14:58 +0000 UTC }]
Oct 30 01:17:06.278: INFO: nodeport-test-2tzcl  node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:14:52 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:14:56 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:14:56 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:14:52 +0000 UTC }]
Oct 30 01:17:06.278: INFO: nodeport-test-fsrr5  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:14:52 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:14:56 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:14:56 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:14:52 +0000 UTC }]
Oct 30 01:17:06.278: INFO: 
Oct 30 01:17:06.282: INFO: Logging node info for node master1
Oct 30 01:17:06.284: INFO: Node Info: &Node{ObjectMeta:{master1 b47c04d5-47a7-4a95-8e97-481e6e60af54 89297 0 2021-10-29 21:05:34 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-29 21:05:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-10-29 21:05:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-10-29 21:08:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2021-10-29 21:13:13 +0000 UTC
FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:27 +0000 UTC,LastTransitionTime:2021-10-29 21:11:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 01:17:02 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 01:17:02 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 01:17:02 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 01:17:02 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5d3ed60c561e427db72df14bd9006ed0,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:01b9d6bc-4126-4864-a1df-901a1bee4906,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:b7d2fa8154ac5d9cff45866e4d3d210a7d390f8576611c301a2eed2b57273227 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 01:17:06.285: INFO: Logging kubelet events for node master1 Oct 30 01:17:06.287: INFO: Logging pods the kubelet 
thinks is on node master1
Oct 30 01:17:06.313: INFO: kube-controller-manager-master1 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:17:06.313: INFO: Container kube-controller-manager ready: true, restart count 2
Oct 30 01:17:06.313: INFO: kube-flannel-d4pmt started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 01:17:06.313: INFO: Init container install-cni ready: true, restart count 0
Oct 30 01:17:06.313: INFO: Container kube-flannel ready: true, restart count 2
Oct 30 01:17:06.313: INFO: kube-multus-ds-amd64-wgkfq started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:17:06.313: INFO: Container kube-multus ready: true, restart count 1
Oct 30 01:17:06.313: INFO: kube-apiserver-master1 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:17:06.313: INFO: Container kube-apiserver ready: true, restart count 0
Oct 30 01:17:06.313: INFO: kube-proxy-z5k8p started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:17:06.313: INFO: Container kube-proxy ready: true, restart count 1
Oct 30 01:17:06.313: INFO: coredns-8474476ff8-lczbr started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:17:06.313: INFO: Container coredns ready: true, restart count 1
Oct 30 01:17:06.313: INFO: container-registry-65d7c44b96-zzkfl started at 2021-10-29 21:12:56 +0000 UTC (0+2 container statuses recorded)
Oct 30 01:17:06.313: INFO: Container docker-registry ready: true, restart count 0
Oct 30 01:17:06.313: INFO: Container nginx ready: true, restart count 0
Oct 30 01:17:06.313: INFO: node-exporter-fv84w started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 01:17:06.313: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 30 01:17:06.313: INFO: Container node-exporter ready: true, restart count 0
Oct 30 01:17:06.313: INFO: kube-scheduler-master1 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:17:06.313: INFO: Container kube-scheduler ready: true, restart count 0
W1030 01:17:06.329800 27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
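The per-node pod listing above can be reproduced against the live cluster with client-go. The sketch below is a hedged illustration, not the framework's own diagnostics code; the kubeconfig path and node name are taken from the log, and a field selector on spec.nodeName is one way (an assumption here) to get the same set of pods.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// List pods bound to master1 across all namespaces, the same set the
	// "Logging pods the kubelet thinks is on node master1" step reports.
	pods, err := clientset.CoreV1().Pods("").List(context.TODO(),
		metav1.ListOptions{FieldSelector: "spec.nodeName=master1"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}
```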
Oct 30 01:17:06.405: INFO: Latency metrics for node master1 Oct 30 01:17:06.405: INFO: Logging node info for node master2 Oct 30 01:17:06.408: INFO: Node Info: &Node{ObjectMeta:{master2 208792d3-d365-4ddb-83d4-10e6e818079c 89249 0 2021-10-29 21:06:06 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-29 21:06:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-29 21:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-10-29 21:18:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:19 +0000 
UTC,LastTransitionTime:2021-10-29 21:11:19 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 01:16:59 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 01:16:59 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 01:16:59 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 01:16:59 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:12290c1916d84ddda20431c28083da6a,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:314e82b8-9747-4131-b883-220496309995,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 01:17:06.408: INFO: Logging kubelet events for node master2 Oct 30 01:17:06.411: INFO: Logging pods the kubelet thinks is on node master2 Oct 30 01:17:06.423: INFO: kube-apiserver-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:06.423: INFO: Container kube-apiserver ready: true, restart count 0 Oct 30 01:17:06.423: INFO: kube-controller-manager-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:06.423: INFO: Container kube-controller-manager ready: true, restart count 3 Oct 30 01:17:06.423: INFO: kube-scheduler-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:06.423: INFO: Container kube-scheduler ready: true, restart count 2 Oct 30 01:17:06.423: INFO: kube-proxy-5gz4v started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:06.423: INFO: Container kube-proxy ready: true, restart count 2 Oct 30 01:17:06.423: INFO: kube-flannel-qvqll started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 01:17:06.423: INFO: Init container install-cni ready: true, restart count 2 Oct 30 01:17:06.423: INFO: Container kube-flannel ready: true, restart count 1 Oct 30 01:17:06.423: INFO: kube-multus-ds-amd64-brkpk started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:06.423: INFO: Container kube-multus ready: true, restart count 1 Oct 30 01:17:06.423: INFO: node-exporter-lc9kk started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 01:17:06.423: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:17:06.423: INFO: Container node-exporter ready: true, restart count 0 W1030 01:17:06.438453 27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Oct 30 01:17:06.494: INFO: Latency metrics for node master2 Oct 30 01:17:06.494: INFO: Logging node info for node master3 Oct 30 01:17:06.497: INFO: Node Info: &Node{ObjectMeta:{master3 168f1589-e029-47ae-b194-10215fc22d6a 89210 0 2021-10-29 21:06:17 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-29 21:06:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-29 21:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-10-29 21:16:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-10-29 21:16:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 
0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:36 +0000 UTC,LastTransitionTime:2021-10-29 21:11:36 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 01:16:58 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 01:16:58 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 01:16:58 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 01:16:58 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:de18dcb6cb4c493e9f4d987da2c8b3fd,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:89235c4b-b1f5-4716-bbd7-18b41c0bde74,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 01:17:06.497: INFO: Logging kubelet events for node master3 Oct 30 01:17:06.500: INFO: Logging pods the kubelet thinks is on node master3 Oct 30 01:17:06.515: INFO: kube-apiserver-master3 started at 2021-10-29 21:11:10 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:06.515: INFO: Container kube-apiserver ready: true, restart count 0 Oct 30 01:17:06.515: INFO: kube-scheduler-master3 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:06.515: INFO: Container kube-scheduler ready: true, restart count 2 Oct 30 01:17:06.515: INFO: dns-autoscaler-7df78bfcfb-phsdx started at 2021-10-29 21:09:02 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:06.515: INFO: Container autoscaler ready: true, restart count 1 Oct 30 01:17:06.515: INFO: node-feature-discovery-controller-cff799f9f-qq7g4 started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:06.515: INFO: Container nfd-controller ready: true, restart count 0 Oct 30 01:17:06.515: INFO: coredns-8474476ff8-wrwwv started at 2021-10-29 21:09:00 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:06.515: INFO: Container coredns ready: true, restart count 1 Oct 30 01:17:06.515: INFO: prometheus-operator-585ccfb458-czbr2 started at 2021-10-29 21:21:06 +0000 UTC (0+2 container statuses recorded) Oct 30 01:17:06.515: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:17:06.515: INFO: Container prometheus-operator ready: true, restart count 0 Oct 30 01:17:06.515: INFO: node-exporter-bv946 started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 01:17:06.515: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:17:06.515: INFO: Container node-exporter ready: true, restart 
count 0 Oct 30 01:17:06.515: INFO: kube-controller-manager-master3 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:06.515: INFO: Container kube-controller-manager ready: true, restart count 1 Oct 30 01:17:06.515: INFO: kube-proxy-r6fpx started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:06.515: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 01:17:06.515: INFO: kube-flannel-rbdlt started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 01:17:06.515: INFO: Init container install-cni ready: true, restart count 2 Oct 30 01:17:06.515: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 01:17:06.515: INFO: kube-multus-ds-amd64-bdwh9 started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:06.515: INFO: Container kube-multus ready: true, restart count 1 W1030 01:17:06.529666 27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 01:17:06.617: INFO: Latency metrics for node master3 Oct 30 01:17:06.617: INFO: Logging node info for node node1 Oct 30 01:17:06.620: INFO: Node Info: &Node{ObjectMeta:{node1 ddef9269-94c5-4165-81fb-a3b0c4ac5c75 89200 0 2021-10-29 21:07:27 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 
feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-29 21:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-29 21:19:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-29 21:19:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:38 +0000 UTC,LastTransitionTime:2021-10-29 21:11:38 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 01:16:57 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 01:16:57 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 01:16:57 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 01:16:57 +0000 UTC,LastTransitionTime:2021-10-29 21:08:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3bf4179125e4495c89c046ed0ae7baf7,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:ce868148-dc5e-4c7c-a555-42ee929547f7,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432289,},ContainerImage{Names:[localhost:30500/cmk@sha256:430843a71fa03faf488543c9f5b50d3efbef49988d6784f9f48b8077cc806f60 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2f1ff7ac170c0ac8079e232ea4ee89d23b7906d1b824d901927acb4e399c52c9 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 01:17:06.621: INFO: Logging kubelet events for node node1 Oct 30 01:17:06.623: INFO: Logging pods the kubelet thinks is on node node1 Oct 30 01:17:06.636: INFO: cmk-init-discover-node1-n4mcc started at 2021-10-29 21:19:28 +0000 UTC (0+3 container statuses recorded) Oct 30 01:17:06.636: INFO: Container discover ready: false, restart count 0 Oct 30 01:17:06.636: INFO: Container init ready: false, restart count 0 Oct 30 01:17:06.636: INFO: Container install ready: false, restart count 0 Oct 30 01:17:06.636: INFO: cmk-89lqq started at 2021-10-29 21:20:10 +0000 UTC (0+2 container 
statuses recorded) Oct 30 01:17:06.636: INFO: Container nodereport ready: true, restart count 0 Oct 30 01:17:06.636: INFO: Container reconcile ready: true, restart count 0 Oct 30 01:17:06.636: INFO: execpodtvtst started at 2021-10-30 01:14:58 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:06.636: INFO: Container agnhost-container ready: true, restart count 0 Oct 30 01:17:06.636: INFO: ss2-2 started at 2021-10-30 01:16:52 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:06.636: INFO: Container webserver ready: true, restart count 0 Oct 30 01:17:06.636: INFO: nginx-proxy-node1 started at 2021-10-29 21:07:28 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:06.636: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 01:17:06.636: INFO: test-rollover-controller-c4qln started at 2021-10-30 01:16:48 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:06.636: INFO: Container httpd ready: true, restart count 0 Oct 30 01:17:06.636: INFO: pod-6e51ae46-f158-4a8b-82cc-12e3f045672d started at 2021-10-30 01:17:04 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:06.636: INFO: Container test-container ready: false, restart count 0 Oct 30 01:17:06.636: INFO: kube-multus-ds-amd64-68wrz started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:06.636: INFO: Container kube-multus ready: true, restart count 1 Oct 30 01:17:06.636: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-t789r started at 2021-10-29 21:17:10 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:06.636: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 01:17:06.636: INFO: node-exporter-256wm started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 01:17:06.636: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:17:06.636: INFO: Container node-exporter ready: true, restart count 0 Oct 30 01:17:06.636: INFO: prometheus-k8s-0 started at 2021-10-29 21:21:17 +0000 UTC (0+4 container statuses recorded) Oct 30 01:17:06.636: INFO: Container config-reloader ready: true, restart count 0 Oct 30 01:17:06.636: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 30 01:17:06.636: INFO: Container grafana ready: true, restart count 0 Oct 30 01:17:06.636: INFO: Container prometheus ready: true, restart count 1 Oct 30 01:17:06.636: INFO: liveness-0dabe50d-844e-45e6-afe9-2a19099ecbd0 started at 2021-10-30 01:13:15 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:06.636: INFO: Container agnhost-container ready: true, restart count 0 Oct 30 01:17:06.636: INFO: kubernetes-metrics-scraper-5558854cb-5rmjw started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:06.636: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 30 01:17:06.636: INFO: node-feature-discovery-worker-w5vdb started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:06.636: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 01:17:06.636: INFO: nodeport-test-2tzcl started at 2021-10-30 01:14:52 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:06.636: INFO: Container nodeport-test ready: true, restart count 0 Oct 30 01:17:06.636: INFO: kube-flannel-phg88 started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 01:17:06.636: INFO: Init container install-cni ready: true, restart count 2 Oct 30 01:17:06.636: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 01:17:06.636: INFO: 
collectd-d45rv started at 2021-10-29 21:25:13 +0000 UTC (0+3 container statuses recorded) Oct 30 01:17:06.636: INFO: Container collectd ready: true, restart count 0 Oct 30 01:17:06.636: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 01:17:06.636: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 01:17:06.636: INFO: ss2-0 started at 2021-10-30 01:16:10 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:06.636: INFO: Container webserver ready: true, restart count 0 Oct 30 01:17:06.636: INFO: kube-proxy-z5hqt started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:06.636: INFO: Container kube-proxy ready: true, restart count 1 W1030 01:17:06.649929 27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 01:17:06.883: INFO: Latency metrics for node node1 Oct 30 01:17:06.883: INFO: Logging node info for node node2 Oct 30 01:17:06.886: INFO: Node Info: &Node{ObjectMeta:{node2 3b49ad19-ba56-4f4a-b1fa-eef102063de9 89266 0 2021-10-29 21:07:28 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw 
flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-29 21:08:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-29 21:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-29 21:19:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-29 21:19:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:34 +0000 UTC,LastTransitionTime:2021-10-29 21:11:34 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 01:17:01 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 01:17:01 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 01:17:01 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 01:17:01 +0000 UTC,LastTransitionTime:2021-10-29 21:08:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7283436dd9e34722a6e4df817add95ed,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:c219e7bd-582b-4d6c-b379-1161acc70676,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:430843a71fa03faf488543c9f5b50d3efbef49988d6784f9f48b8077cc806f60 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 
k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2f1ff7ac170c0ac8079e232ea4ee89d23b7906d1b824d901927acb4e399c52c9 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[localhost:30500/tasextender@sha256:b7d2fa8154ac5d9cff45866e4d3d210a7d390f8576611c301a2eed2b57273227 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 01:17:06.887: INFO: Logging kubelet events for node node2 Oct 30 01:17:06.889: INFO: Logging pods the kubelet thinks is on node node2 Oct 30 01:17:06.904: INFO: cmk-8bpbf started at 2021-10-29 21:20:11 +0000 UTC (0+2 container statuses recorded) Oct 30 01:17:06.904: INFO: Container nodereport ready: true, restart count 0 Oct 30 01:17:06.904: INFO: Container reconcile ready: true, restart count 0 Oct 30 01:17:06.904: INFO: node-exporter-r77s4 started at 
2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 01:17:06.904: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:17:06.904: INFO: Container node-exporter ready: true, restart count 0 Oct 30 01:17:06.904: INFO: tas-telemetry-aware-scheduling-84ff454dfb-989mh started at 2021-10-29 21:24:23 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:06.904: INFO: Container tas-extender ready: true, restart count 0 Oct 30 01:17:06.904: INFO: collectd-flvhl started at 2021-10-29 21:25:13 +0000 UTC (0+3 container statuses recorded) Oct 30 01:17:06.904: INFO: Container collectd ready: true, restart count 0 Oct 30 01:17:06.904: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 01:17:06.904: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 01:17:06.904: INFO: ss2-1 started at 2021-10-30 01:16:05 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:06.904: INFO: Container webserver ready: false, restart count 0 Oct 30 01:17:06.904: INFO: replace-27259277-nr4dl started at 2021-10-30 01:17:00 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:06.904: INFO: Container c ready: true, restart count 0 Oct 30 01:17:06.904: INFO: ss-0 started at 2021-10-30 01:16:37 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:06.904: INFO: Container webserver ready: false, restart count 0 Oct 30 01:17:06.904: INFO: kube-proxy-76285 started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:06.904: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 01:17:06.904: INFO: test-pod started at 2021-10-30 01:12:21 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:06.904: INFO: Container webserver ready: true, restart count 0 Oct 30 01:17:06.904: INFO: node-feature-discovery-worker-h6lcp started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:06.904: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 01:17:06.904: INFO: nodeport-test-fsrr5 started at 2021-10-30 01:14:52 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:06.904: INFO: Container nodeport-test ready: true, restart count 0 Oct 30 01:17:06.904: INFO: test-rollover-deployment-98c5f4599-v69sm started at 2021-10-30 01:16:57 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:06.904: INFO: Container agnhost ready: true, restart count 0 Oct 30 01:17:06.904: INFO: pod-service-account-defaultsa-nomountspec started at 2021-10-30 01:16:12 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:06.904: INFO: Container token-test ready: true, restart count 0 Oct 30 01:17:06.904: INFO: kube-flannel-f6s5v started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 01:17:06.904: INFO: Init container install-cni ready: true, restart count 2 Oct 30 01:17:06.904: INFO: Container kube-flannel ready: true, restart count 3 Oct 30 01:17:06.904: INFO: kube-multus-ds-amd64-7tvbl started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:06.904: INFO: Container kube-multus ready: true, restart count 1 Oct 30 01:17:06.904: INFO: cmk-webhook-6c9d5f8578-ffk66 started at 2021-10-29 21:20:11 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:06.904: INFO: Container cmk-webhook ready: true, restart count 0 Oct 30 01:17:06.904: INFO: externalname-service-jrfd9 started at 2021-10-30 01:16:58 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:06.904: INFO: Container externalname-service ready: true, restart count 0 Oct 30 01:17:06.904: INFO: 
frontend-685fc574d5-d28w2 started at 2021-10-30 01:17:06 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:06.904: INFO: Container guestbook-frontend ready: false, restart count 0 Oct 30 01:17:06.904: INFO: nginx-proxy-node2 started at 2021-10-29 21:07:28 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:06.904: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 01:17:06.905: INFO: kubernetes-dashboard-785dcbb76d-pbjjt started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:06.905: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 30 01:17:06.905: INFO: cmk-init-discover-node2-2fmmt started at 2021-10-29 21:19:48 +0000 UTC (0+3 container statuses recorded) Oct 30 01:17:06.905: INFO: Container discover ready: false, restart count 0 Oct 30 01:17:06.905: INFO: Container init ready: false, restart count 0 Oct 30 01:17:06.905: INFO: Container install ready: false, restart count 0 Oct 30 01:17:06.905: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg started at 2021-10-29 21:17:10 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:06.905: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 01:17:06.905: INFO: pod-service-account-mountsa-nomountspec started at 2021-10-30 01:16:12 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:06.905: INFO: Container token-test ready: true, restart count 0 Oct 30 01:17:06.905: INFO: externalname-service-hrqms started at 2021-10-30 01:16:58 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:06.905: INFO: Container externalname-service ready: true, restart count 0 W1030 01:17:06.918898 27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 01:17:07.241: INFO: Latency metrics for node node2 Oct 30 01:17:07.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6896" for this suite. 
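------------------------------
The dump above is the framework's on-failure diagnostics for node2: the node's cached container images, kubelet events, and every pod the kubelet reports on the node with per-container readiness and restart counts. The following is a minimal client-go sketch that reproduces that kind of pod listing; it is not the framework's own dump code, and the kubeconfig path and node name are simply the ones appearing in this log.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumptions from the log: kubeconfig at /root/.kube/config, node "node2".
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// List pods in all namespaces scheduled to node2, the same set the
	// "Logging pods the kubelet thinks is on node node2" lines walk through.
	pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=node2",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s\n", p.Namespace, p.Name)
		for _, c := range p.Status.InitContainerStatuses {
			fmt.Printf("  Init container %s ready: %v, restart count %d\n", c.Name, c.Ready, c.RestartCount)
		}
		for _, c := range p.Status.ContainerStatuses {
			fmt.Printf("  Container %s ready: %v, restart count %d\n", c.Name, c.Ready, c.RestartCount)
		}
	}
}
------------------------------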
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [134.929 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to create a functioning NodePort service [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:17:06.269: Unexpected error: <*errors.errorString | 0xc004423380>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31616 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31616 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169 ------------------------------ {"msg":"FAILED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":10,"skipped":105,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:17:07.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should test the lifecycle of an Endpoint [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating an Endpoint STEP: waiting for available Endpoint STEP: listing all Endpoints STEP: updating the Endpoint STEP: fetching the Endpoint STEP: patching the Endpoint STEP: fetching the Endpoint STEP: deleting the Endpoint by Collection STEP: waiting for Endpoint deletion STEP: fetching the Endpoint [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:17:07.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4718" for this suite. 
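------------------------------
The one failure in this stretch of the run is recorded above: the NodePort service never answered on 10.10.190.207:31616 inside the 2-minute window. At its core, the reachability check behind that message is a timed TCP dial against nodeIP:nodePort; the sketch below is a simplified version of such a probe (the real framework layers HTTP requests and retries on top of this), using the address from this specific failure.

package main

import (
	"fmt"
	"net"
	"time"
)

// probeNodePort dials addr repeatedly until it connects or the deadline
// passes -- the same shape of check that produced the "service is not
// reachable within 2m0s timeout" error above.
func probeNodePort(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("service is not reachable within %v timeout on endpoint %s over TCP protocol", timeout, addr)
}

func main() {
	if err := probeNodePort("10.10.190.207:31616", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
------------------------------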
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":11,"skipped":105,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:17:07.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with one valid and two invalid sysctls [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:17:07.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-6141" for this suite. 
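------------------------------
The sysctl test above asks the API server to admit a pod whose security context names one valid and two invalid sysctls, and expects the create call to fail validation before any kubelet is involved -- which is why the whole test completes in milliseconds. A sketch of the shape of that pod follows; the sysctl names here are illustrative stand-ins, not the exact ones the test generates.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "sysctl-reject"}, // hypothetical name
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				Sysctls: []corev1.Sysctl{
					{Name: "kernel.shm_rmid_forced", Value: "0"}, // a valid sysctl name
					{Name: "foo-", Value: "bar"},                 // invalid: malformed name
					{Name: ";;;", Value: "100"},                  // invalid: malformed name
				},
			},
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox:1.28",
			}},
		},
	}

	// The API server's validation should refuse this pod outright; the
	// returned error names the offending sysctls.
	_, err = cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	fmt.Println("create error (expected):", err)
}
------------------------------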
• ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":12,"skipped":136,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:17:07.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of pod templates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of pod templates Oct 30 01:17:07.455: INFO: created test-podtemplate-1 Oct 30 01:17:07.458: INFO: created test-podtemplate-2 Oct 30 01:17:07.461: INFO: created test-podtemplate-3 STEP: get a list of pod templates with a label in the current namespace STEP: delete collection of pod templates Oct 30 01:17:07.464: INFO: requesting DeleteCollection of pod templates STEP: check that the list of pod templates matches the requested quantity Oct 30 01:17:07.474: INFO: requesting list of pod templates to confirm quantity [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:17:07.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-7934" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":13,"skipped":138,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:17:07.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W1030 01:17:07.536326 27 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should support CronJob API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a cronjob STEP: creating STEP: getting STEP: listing STEP: watching Oct 30 01:17:07.544: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Oct 30 01:17:07.547: INFO: starting watch STEP: patching STEP: updating Oct 30 01:17:07.557: INFO: waiting for watch events with expected annotations Oct 30 01:17:07.557: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 
01:17:07.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-9378" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":14,"skipped":154,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:17:04.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on tmpfs Oct 30 01:17:04.428: INFO: Waiting up to 5m0s for pod "pod-6e51ae46-f158-4a8b-82cc-12e3f045672d" in namespace "emptydir-3968" to be "Succeeded or Failed" Oct 30 01:17:04.430: INFO: Pod "pod-6e51ae46-f158-4a8b-82cc-12e3f045672d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081455ms Oct 30 01:17:06.433: INFO: Pod "pod-6e51ae46-f158-4a8b-82cc-12e3f045672d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005746105s Oct 30 01:17:08.439: INFO: Pod "pod-6e51ae46-f158-4a8b-82cc-12e3f045672d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011732904s Oct 30 01:17:10.442: INFO: Pod "pod-6e51ae46-f158-4a8b-82cc-12e3f045672d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014831721s STEP: Saw pod success Oct 30 01:17:10.442: INFO: Pod "pod-6e51ae46-f158-4a8b-82cc-12e3f045672d" satisfied condition "Succeeded or Failed" Oct 30 01:17:10.445: INFO: Trying to get logs from node node1 pod pod-6e51ae46-f158-4a8b-82cc-12e3f045672d container test-container: STEP: delete the pod Oct 30 01:17:10.620: INFO: Waiting for pod pod-6e51ae46-f158-4a8b-82cc-12e3f045672d to disappear Oct 30 01:17:10.622: INFO: Pod pod-6e51ae46-f158-4a8b-82cc-12e3f045672d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:17:10.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3968" for this suite. 
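------------------------------
The EmptyDir case above injects a pod that mounts a tmpfs-backed emptyDir, writes a file as root with mode 0666, and must reach phase Succeeded within the 5-minute wait. Below is a sketch of that pod shape; the busybox image and shell command are stand-ins for the mount-test container the suite actually generates.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0666-tmpfs"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" is what makes the emptyDir a tmpfs mount.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox:1.28", // stand-in image
				Command: []string{"sh", "-c",
					"echo data > /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
		},
	}

	// The pod runs once and should end Succeeded, matching the
	// "Succeeded or Failed" wait logged above.
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------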
• [SLOW TEST:6.239 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":278,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSS ------------------------------ [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:17:10.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingressclass STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:149 [It] should support creating IngressClass API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Oct 30 01:17:10.672: INFO: starting watch STEP: patching STEP: updating Oct 30 01:17:10.678: INFO: waiting for watch events with expected annotations Oct 30 01:17:10.678: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:17:10.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingressclass-4294" for this suite.
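------------------------------
The IngressClass test above first walks the discovery chain (/apis, then /apis/networking.k8s.io, then /apis/networking.k8s.io/v1) and then exercises create, get, list, watch, patch, and delete against the cluster-scoped IngressClass resource. The following client-go sketch covers the mutation half; the class name, controller string, and label are invented for illustration.

package main

import (
	"context"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	classes := cs.NetworkingV1().IngressClasses() // cluster-scoped: no namespace

	// creating
	ic := &networkingv1.IngressClass{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "demo-class", // hypothetical name
			Labels: map[string]string{"ingressclass": "e2e-demo"},
		},
		Spec: networkingv1.IngressClassSpec{
			Controller: "example.com/demo-controller", // hypothetical controller
		},
	}
	if _, err := classes.Create(ctx, ic, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// patching: add an annotation, as the test's "patching" step does
	patch := []byte(`{"metadata":{"annotations":{"patched":"true"}}}`)
	if _, err := classes.Patch(ctx, "demo-class", types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// deleting a collection by label selector
	if err := classes.DeleteCollection(ctx, metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: "ingressclass=e2e-demo"}); err != nil {
		panic(err)
	}
}
------------------------------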
• ------------------------------ {"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":-1,"completed":22,"skipped":281,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":18,"skipped":282,"failed":0} [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:16:48.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:16:48.376: INFO: Pod name rollover-pod: Found 0 pods out of 1 Oct 30 01:16:53.380: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Oct 30 01:16:53.380: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Oct 30 01:16:55.382: INFO: Creating deployment "test-rollover-deployment" Oct 30 01:16:55.388: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Oct 30 01:16:57.395: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Oct 30 01:16:57.399: INFO: Ensure that both replica sets have 1 created replica Oct 30 01:16:57.405: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Oct 30 01:16:57.412: INFO: Updating deployment test-rollover-deployment Oct 30 01:16:57.412: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Oct 30 01:16:59.419: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Oct 30 01:16:59.424: INFO: Make sure deployment "test-rollover-deployment" is complete Oct 30 01:16:59.429: INFO: all replica sets need to contain the pod-template-hash label Oct 30 01:16:59.429: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153415, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153415, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153417, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153415, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:17:01.438: INFO: all replica sets need to contain the 
pod-template-hash label Oct 30 01:17:01.438: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153415, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153415, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153417, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153415, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:17:03.436: INFO: all replica sets need to contain the pod-template-hash label Oct 30 01:17:03.436: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153415, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153415, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153417, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153415, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:17:05.439: INFO: all replica sets need to contain the pod-template-hash label Oct 30 01:17:05.439: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153415, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153415, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153423, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153415, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:17:07.434: INFO: all replica sets need to contain the pod-template-hash label Oct 30 01:17:07.434: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153415, loc:(*time.Location)(0x9e12f00)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153415, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153423, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153415, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:17:09.436: INFO: all replica sets need to contain the pod-template-hash label Oct 30 01:17:09.436: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153415, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153415, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153423, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153415, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:17:11.461: INFO: all replica sets need to contain the pod-template-hash label Oct 30 01:17:11.461: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153415, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153415, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153423, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153415, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:17:13.442: INFO: all replica sets need to contain the pod-template-hash label Oct 30 01:17:13.442: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153415, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153415, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153423, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63771153415, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:17:15.435: INFO: Oct 30 01:17:15.435: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Oct 30 01:17:15.442: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-2663 0de401da-a0f9-4dad-8737-4aaa88c82846 89769 2 2021-10-30 01:16:55 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-10-30 01:16:57 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-10-30 01:17:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00549d4b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-10-30 01:16:55 +0000 UTC,LastTransitionTime:2021-10-30 01:16:55 +0000 
UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-98c5f4599" has successfully progressed.,LastUpdateTime:2021-10-30 01:17:13 +0000 UTC,LastTransitionTime:2021-10-30 01:16:55 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Oct 30 01:17:15.446: INFO: New ReplicaSet "test-rollover-deployment-98c5f4599" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-98c5f4599 deployment-2663 796ef071-a2e9-4ccf-a617-514fc29404d4 89759 2 2021-10-30 01:16:57 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 0de401da-a0f9-4dad-8737-4aaa88c82846 0xc00549da30 0xc00549da31}] [] [{kube-controller-manager Update apps/v1 2021-10-30 01:17:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0de401da-a0f9-4dad-8737-4aaa88c82846\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 98c5f4599,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00549daa8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Oct 30 01:17:15.446: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Oct 30 01:17:15.446: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-2663 0a3e9b53-8454-4167-bc6d-bc72fbc12936 89767 2 2021-10-30 01:16:48 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 
Deployment test-rollover-deployment 0de401da-a0f9-4dad-8737-4aaa88c82846 0xc00549d827 0xc00549d828}] [] [{e2e.test Update apps/v1 2021-10-30 01:16:48 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-10-30 01:17:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0de401da-a0f9-4dad-8737-4aaa88c82846\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00549d8c8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 30 01:17:15.446: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-2663 c7a762c7-b0cb-457d-9a70-81136f154596 89190 2 2021-10-30 01:16:55 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 0de401da-a0f9-4dad-8737-4aaa88c82846 0xc00549d937 0xc00549d938}] [] [{kube-controller-manager Update apps/v1 2021-10-30 01:16:57 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0de401da-a0f9-4dad-8737-4aaa88c82846\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00549d9c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 30 01:17:15.451: INFO: Pod "test-rollover-deployment-98c5f4599-v69sm" is available: &Pod{ObjectMeta:{test-rollover-deployment-98c5f4599-v69sm test-rollover-deployment-98c5f4599- deployment-2663 d2211b80-8afa-4ae7-bbd0-b7e5382d7370 89335 0 2021-10-30 01:16:57 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.231" ], "mac": "c6:6f:be:b3:bd:60", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.231" ], "mac": "c6:6f:be:b3:bd:60", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-rollover-deployment-98c5f4599 796ef071-a2e9-4ccf-a617-514fc29404d4 0xc00549df9f 0xc00549dfb0}] [] [{kube-controller-manager Update v1 2021-10-30 01:16:57 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"796ef071-a2e9-4ccf-a617-514fc29404d4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-30 01:17:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-30 01:17:03 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.231\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gd4fh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gd4fh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Vo
lumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:16:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:17:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:17:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:16:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.231,StartTime:2021-10-30 01:16:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-30 01:17:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://fe9bcd2ac8c1d4ead8d4931df0cb94945ca16a94a175b0155482b91514599b48,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.231,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:17:15.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2663" for this suite. 
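------------------------------
The repeated "all replica sets need to contain the pod-template-hash label" lines above are the rollover test polling for completion: the deployment is created with minReadySeconds: 10 and maxUnavailable: 0, so after the image update at 01:16:57 the new pod has to stay Ready for ten seconds before the controller counts it available and scales the old replica sets ("test-rollover-controller" and revision 1) to zero, which finally happens at 01:17:15. The sketch below shows one way to write that polling loop with client-go's wait helper; the deployment name and namespace are the ones from this log, and the completion condition is a simplification of the framework's checks.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll until the rollover is complete: the controller has observed the
	// latest generation and the new template owns every replica, with all
	// of them available (i.e. minReadySeconds satisfied).
	err = wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		d, err := cs.AppsV1().Deployments("deployment-2663").
			Get(context.TODO(), "test-rollover-deployment", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		want := *d.Spec.Replicas
		done := d.Status.ObservedGeneration >= d.Generation &&
			d.Status.UpdatedReplicas == want &&
			d.Status.AvailableReplicas == want &&
			d.Status.Replicas == want
		return done, nil
	})
	fmt.Println("rollover complete:", err == nil)
}
------------------------------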
• [SLOW TEST:27.119 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":19,"skipped":282,"failed":0}
SS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:17:05.302: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should create and stop a working application [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating all guestbook components
Oct 30 01:17:05.325: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-replica
  labels:
    app: agnhost
    role: replica
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: replica
    tier: backend

Oct 30 01:17:05.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-550 create -f -'
Oct 30 01:17:05.707: INFO: stderr: ""
Oct 30 01:17:05.707: INFO: stdout: "service/agnhost-replica created\n"
Oct 30 01:17:05.707: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-primary
  labels:
    app: agnhost
    role: primary
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: primary
    tier: backend

Oct 30 01:17:05.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-550 create -f -'
Oct 30 01:17:06.029: INFO: stderr: ""
Oct 30 01:17:06.029: INFO: stdout: "service/agnhost-primary created\n"
Oct 30 01:17:06.029: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Oct 30 01:17:06.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-550 create -f -'
Oct 30 01:17:06.354: INFO: stderr: ""
Oct 30 01:17:06.354: INFO: stdout: "service/frontend created\n"
Oct 30 01:17:06.354: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Oct 30 01:17:06.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-550 create -f -'
Oct 30 01:17:06.654: INFO: stderr: ""
Oct 30 01:17:06.654: INFO: stdout: "deployment.apps/frontend created\n"
Oct 30 01:17:06.654: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-primary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: primary
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: primary
        tier: backend
    spec:
      containers:
      - name: primary
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Oct 30 01:17:06.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-550 create -f -'
Oct 30 01:17:06.961: INFO: stderr: ""
Oct 30 01:17:06.962: INFO: stdout: "deployment.apps/agnhost-primary created\n"
Oct 30 01:17:06.962: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-replica
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: replica
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: replica
        tier: backend
    spec:
      containers:
      - name: replica
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
        args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Oct 30 01:17:06.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-550 create -f -'
Oct 30 01:17:07.230: INFO: stderr: ""
Oct 30 01:17:07.230: INFO: stdout: "deployment.apps/agnhost-replica created\n"
STEP: validating guestbook app
Oct 30 01:17:07.230: INFO: Waiting for all frontend pods to be Running.
Oct 30 01:17:17.282: INFO: Waiting for frontend to serve content.
Oct 30 01:17:17.288: INFO: Trying to add a new entry to the guestbook.
Oct 30 01:17:17.295: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Oct 30 01:17:17.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-550 delete --grace-period=0 --force -f -'
Oct 30 01:17:17.435: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Oct 30 01:17:17.436: INFO: stdout: "service \"agnhost-replica\" force deleted\n"
STEP: using delete to clean up resources
Oct 30 01:17:17.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-550 delete --grace-period=0 --force -f -'
Oct 30 01:17:17.567: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" Oct 30 01:17:17.567: INFO: stdout: "service \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Oct 30 01:17:17.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-550 delete --grace-period=0 --force -f -' Oct 30 01:17:17.698: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Oct 30 01:17:17.698: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Oct 30 01:17:17.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-550 delete --grace-period=0 --force -f -' Oct 30 01:17:17.836: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Oct 30 01:17:17.836: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Oct 30 01:17:17.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-550 delete --grace-period=0 --force -f -' Oct 30 01:17:17.960: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Oct 30 01:17:17.960: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Oct 30 01:17:17.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-550 delete --grace-period=0 --force -f -' Oct 30 01:17:18.093: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Oct 30 01:17:18.093: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:17:18.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-550" for this suite. 
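------------------------------
Each "Running '/usr/local/bin/kubectl ... create -f -'" line in the guestbook test above is the suite shelling out to the kubectl binary and piping a manifest through stdin. A minimal Go sketch of that invocation pattern follows; the kubectl path, kubeconfig, and namespace are the values from this log, and the error handling is simplified relative to the framework's own kubectl helpers.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runKubectlCreate pipes a manifest to `kubectl create -f -`, mirroring the
// invocations logged above.
func runKubectlCreate(manifest, namespace string) (string, error) {
	cmd := exec.Command("/usr/local/bin/kubectl",
		"--kubeconfig=/root/.kube/config",
		"--namespace="+namespace,
		"create", "-f", "-")
	cmd.Stdin = strings.NewReader(manifest)
	out, err := cmd.CombinedOutput() // stdout and stderr interleaved
	return string(out), err
}

func main() {
	// Hypothetical single-resource manifest, for illustration only.
	manifest := `apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
`
	out, err := runKubectlCreate(manifest, "kubectl-550")
	fmt.Println(out, err)
}
------------------------------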
• [SLOW TEST:12.800 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:336 should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":-1,"completed":25,"skipped":359,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:17:10.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 30 01:17:11.055: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 30 01:17:13.064: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153431, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153431, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153431, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153431, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:17:15.069: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153431, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153431, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153431, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153431, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: 
Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 30 01:17:18.074: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:17:18.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5616" for this suite. STEP: Destroying namespace "webhook-5616-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.448 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":23,"skipped":314,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:17:18.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should run through a ConfigMap lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ConfigMap STEP: fetching the ConfigMap STEP: patching the ConfigMap STEP: listing all ConfigMaps in all namespaces with a label selector STEP: deleting the ConfigMap by collection with a label selector STEP: listing all ConfigMaps in test namespace [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:17:18.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3905" for this suite. 
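------------------------------
The ConfigMap lifecycle above is pure API choreography with no pods involved: create, fetch, patch, list by label selector across all namespaces, then delete the whole collection by the same selector. A client-go sketch of those calls follows; the object name, label, and data key are invented for illustration.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	ns := "default" // the test runs in a generated namespace like configmap-3905

	// creating a ConfigMap
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "demo-config", // hypothetical name
			Labels: map[string]string{"test-configmap": "lifecycle"},
		},
		Data: map[string]string{"key": "value"},
	}
	if _, err := cs.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// patching the ConfigMap's data in place
	patch := []byte(`{"data":{"key":"patched-value"}}`)
	if _, err := cs.CoreV1().ConfigMaps(ns).Patch(ctx, "demo-config",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// listing in all namespaces with the label selector
	list, err := cs.CoreV1().ConfigMaps("").List(ctx, metav1.ListOptions{
		LabelSelector: "test-configmap=lifecycle",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("matching ConfigMaps:", len(list.Items))

	// deleting the ConfigMap by collection with the label selector
	if err := cs.CoreV1().ConfigMaps(ns).DeleteCollection(ctx, metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: "test-configmap=lifecycle"}); err != nil {
		panic(err)
	}
}
------------------------------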
• ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":24,"skipped":328,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:16:58.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service externalname-service with the type=ExternalName in namespace services-6238 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-6238 I1030 01:16:58.132842 30 runners.go:190] Created replication controller with name: externalname-service, namespace: services-6238, replica count: 2 I1030 01:17:01.184244 30 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1030 01:17:04.185893 30 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1030 01:17:07.186968 30 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 30 01:17:07.187: INFO: Creating new exec pod Oct 30 01:17:16.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6238 exec execpodm5dj8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Oct 30 01:17:16.519: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Oct 30 01:17:16.519: INFO: stdout: "" Oct 30 01:17:17.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6238 exec execpodm5dj8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Oct 30 01:17:17.859: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Oct 30 01:17:17.859: INFO: stdout: "" Oct 30 01:17:18.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6238 exec execpodm5dj8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Oct 30 01:17:19.506: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Oct 30 01:17:19.506: INFO: stdout: "" Oct 30 01:17:19.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6238 exec execpodm5dj8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Oct 30 01:17:19.907: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo 
hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Oct 30 01:17:19.907: INFO: stdout: "externalname-service-jrfd9" Oct 30 01:17:19.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6238 exec execpodm5dj8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.5.68 80' Oct 30 01:17:20.154: INFO: stderr: "+ nc -v -t -w 2 10.233.5.68 80\n+ echo hostName\nConnection to 10.233.5.68 80 port [tcp/http] succeeded!\n" Oct 30 01:17:20.154: INFO: stdout: "externalname-service-hrqms" Oct 30 01:17:20.154: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:17:20.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6238" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:22.076 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":30,"skipped":553,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:13:15.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-0dabe50d-844e-45e6-afe9-2a19099ecbd0 in namespace container-probe-7438 Oct 30 01:13:19.722: INFO: Started pod liveness-0dabe50d-844e-45e6-afe9-2a19099ecbd0 in namespace container-probe-7438 STEP: checking the pod's current state and verifying that restartCount is present Oct 30 01:13:19.726: INFO: Initial restart count of pod liveness-0dabe50d-844e-45e6-afe9-2a19099ecbd0 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:17:20.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7438" for this suite. 
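------------------------------
The container-probe spec above is the inverse of the usual liveness check: it creates a pod with a tcp:8080 liveness probe, records the initial restartCount of 0, and then fails if the count ever rises during the roughly four-minute observation window (hence the ~244-second runtime). A minimal sketch of such a pod follows; the image, arguments, and probe timings are illustrative, not the framework's own values, and the embedded probe field shown as Handler (its name in the v1.21-era API used here) is renamed ProbeHandler from v1.23 on.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// livenessPod returns a pod whose container listens on 8080 and carries a
// tcp:8080 liveness probe, so the kubelet checks the socket rather than
// running an exec or HTTP probe. As long as the socket accepts connections,
// the container is never restarted.
func livenessPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-tcp-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
				Args:  []string{"serve-hostname", "--port", "8080"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{ // ProbeHandler in v1.23+
						TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(8080)},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       10,
					FailureThreshold:    3,
				},
			}},
		},
	}
}

func main() { fmt.Println(livenessPod().Name) }
------------------------------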
• [SLOW TEST:244.540 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":137,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:17:20.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:17:20.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5181" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":31,"skipped":588,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:17:20.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:17:20.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1048" for this suite. 
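------------------------------
The [sig-storage] ConfigMap spec in this block checks immutability semantics: with the `immutable` field set, the API server rejects any later change to the object's data, so consumers can rely on it never changing underneath them (only deletion and re-creation are possible). A minimal sketch, with an illustrative name and payload:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// immutableConfigMap returns a ConfigMap marked immutable. After creation the
// API server rejects updates to Data/BinaryData and rejects clearing the
// Immutable flag; the object can only be deleted and re-created.
func immutableConfigMap() *corev1.ConfigMap {
	immutable := true
	return &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "immutable-demo"},
		Immutable:  &immutable,
		Data:       map[string]string{"key": "value"},
	}
}

func main() { fmt.Println(immutableConfigMap().Name) }
------------------------------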
• ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":605,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:17:18.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Oct 30 01:17:18.140: INFO: Waiting up to 5m0s for pod "downward-api-73549d8b-843e-409a-9ab2-345c923a80bc" in namespace "downward-api-7394" to be "Succeeded or Failed" Oct 30 01:17:18.143: INFO: Pod "downward-api-73549d8b-843e-409a-9ab2-345c923a80bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.312536ms Oct 30 01:17:20.147: INFO: Pod "downward-api-73549d8b-843e-409a-9ab2-345c923a80bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006401963s Oct 30 01:17:22.150: INFO: Pod "downward-api-73549d8b-843e-409a-9ab2-345c923a80bc": Phase="Running", Reason="", readiness=true. Elapsed: 4.009752331s Oct 30 01:17:24.154: INFO: Pod "downward-api-73549d8b-843e-409a-9ab2-345c923a80bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013307944s STEP: Saw pod success Oct 30 01:17:24.154: INFO: Pod "downward-api-73549d8b-843e-409a-9ab2-345c923a80bc" satisfied condition "Succeeded or Failed" Oct 30 01:17:24.156: INFO: Trying to get logs from node node2 pod downward-api-73549d8b-843e-409a-9ab2-345c923a80bc container dapi-container: STEP: delete the pod Oct 30 01:17:24.169: INFO: Waiting for pod downward-api-73549d8b-843e-409a-9ab2-345c923a80bc to disappear Oct 30 01:17:24.171: INFO: Pod downward-api-73549d8b-843e-409a-9ab2-345c923a80bc no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:17:24.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7394" for this suite. 
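------------------------------
The Downward API spec above works by injecting the node's address into the container environment and then asserting on the pod's output once it reaches Succeeded. The wiring is a fieldRef on status.hostIP, sketched below; the image and command are illustrative stand-ins for the framework's test container.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIPod returns a pod whose container sees the host's IP as an env
// var, resolved by the kubelet from the pod's status.hostIP field at start.
func downwardAPIPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // run once, end in Succeeded
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox:1.34",
				Command: []string{"sh", "-c", "echo HOST_IP=$HOST_IP"},
				Env: []corev1.EnvVar{{
					Name: "HOST_IP",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
					},
				}},
			}},
		},
	}
}

func main() { fmt.Println(downwardAPIPod().Name) }
------------------------------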
• [SLOW TEST:6.072 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide host IP as an env var [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":360,"failed":0} SS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:12:21.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-4408 [It] Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-4408 STEP: Creating statefulset with conflicting port in namespace statefulset-4408 STEP: Waiting until pod test-pod will start running in namespace statefulset-4408 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-4408 Oct 30 01:17:25.200: FAIL: Pod ss-0 expected to be re-created at least once Full Stack Trace k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000183c80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc000183c80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc000183c80, 0x70e7b58) /usr/local/go/src/testing/testing.go:1193 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1238 +0x2b3 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Oct 30 01:17:25.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4408 describe po test-pod' Oct 30 01:17:25.396: INFO: stderr: "" Oct 30 01:17:25.396: INFO: stdout: "Name: test-pod\nNamespace: statefulset-4408\nPriority: 0\nNode: node2/10.10.190.208\nStart Time: Sat, 30 Oct 2021 01:12:21 +0000\nLabels: \nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.148\"\n ],\n \"mac\": \"5e:af:93:bf:02:ab\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.148\"\n ],\n \"mac\": \"5e:af:93:bf:02:ab\",\n \"default\": true,\n \"dns\": {}\n }]\n kubernetes.io/psp: privileged\nStatus: Running\nIP: 10.244.4.148\nIPs:\n IP: 10.244.4.148\nContainers:\n webserver:\n Container ID: 
docker://dd55e3045197b33786837cdd5c62bc3973b4f7c40a70ef739c9abcdf9587f14d\n Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\n Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\n Port: 21017/TCP\n Host Port: 21017/TCP\n State: Running\n Started: Sat, 30 Oct 2021 01:12:23 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bvhrz (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-bvhrz:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Pulling 5m2s kubelet Pulling image \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\"\n Normal Pulled 5m2s kubelet Successfully pulled image \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\" in 260.751885ms\n Normal Created 5m2s kubelet Created container webserver\n Normal Started 5m2s kubelet Started container webserver\n" Oct 30 01:17:25.396: INFO: Output of kubectl describe test-pod: Name: test-pod Namespace: statefulset-4408 Priority: 0 Node: node2/10.10.190.208 Start Time: Sat, 30 Oct 2021 01:12:21 +0000 Labels: Annotations: k8s.v1.cni.cncf.io/network-status: [{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.148" ], "mac": "5e:af:93:bf:02:ab", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: [{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.148" ], "mac": "5e:af:93:bf:02:ab", "default": true, "dns": {} }] kubernetes.io/psp: privileged Status: Running IP: 10.244.4.148 IPs: IP: 10.244.4.148 Containers: webserver: Container ID: docker://dd55e3045197b33786837cdd5c62bc3973b4f7c40a70ef739c9abcdf9587f14d Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 Port: 21017/TCP Host Port: 21017/TCP State: Running Started: Sat, 30 Oct 2021 01:12:23 +0000 Ready: True Restart Count: 0 Environment: Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bvhrz (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-bvhrz: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: DownwardAPI: true QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Pulling 5m2s kubelet Pulling image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" Normal Pulled 5m2s kubelet Successfully pulled image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" in 260.751885ms Normal Created 5m2s kubelet Created container webserver Normal Started 5m2s kubelet Started container webserver Oct 30 01:17:25.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4408 logs test-pod --tail=100' Oct 
30 01:17:25.674: INFO: stderr: "" Oct 30 01:17:25.674: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.4.148. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.4.148. Set the 'ServerName' directive globally to suppress this message\n[Sat Oct 30 01:12:23.928791 2021] [mpm_event:notice] [pid 1:tid 140056951417704] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Sat Oct 30 01:12:23.928822 2021] [core:notice] [pid 1:tid 140056951417704] AH00094: Command line: 'httpd -D FOREGROUND'\n" Oct 30 01:17:25.674: INFO: Last 100 log lines of test-pod: AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.4.148. Set the 'ServerName' directive globally to suppress this message AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.4.148. Set the 'ServerName' directive globally to suppress this message [Sat Oct 30 01:12:23.928791 2021] [mpm_event:notice] [pid 1:tid 140056951417704] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations [Sat Oct 30 01:12:23.928822 2021] [core:notice] [pid 1:tid 140056951417704] AH00094: Command line: 'httpd -D FOREGROUND' Oct 30 01:17:25.674: INFO: Deleting all statefulset in ns statefulset-4408 Oct 30 01:17:25.676: INFO: Scaling statefulset ss to 0 Oct 30 01:17:25.684: INFO: Waiting for statefulset status.replicas updated to 0 Oct 30 01:17:25.686: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "statefulset-4408". STEP: Found 6 events. Oct 30 01:17:25.696: INFO: At 2021-10-30 01:12:21 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: pods "ss-0" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9103-9104] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9100] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: []] Oct 30 01:17:25.696: INFO: At 2021-10-30 01:12:21 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: pods "ss-0" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9100] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. 
Allowed ports: [9103-9104]] Oct 30 01:17:25.696: INFO: At 2021-10-30 01:12:23 +0000 UTC - event for test-pod: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" Oct 30 01:17:25.696: INFO: At 2021-10-30 01:12:23 +0000 UTC - event for test-pod: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" in 260.751885ms Oct 30 01:17:25.696: INFO: At 2021-10-30 01:12:23 +0000 UTC - event for test-pod: {kubelet node2} Created: Created container webserver Oct 30 01:17:25.696: INFO: At 2021-10-30 01:12:23 +0000 UTC - event for test-pod: {kubelet node2} Started: Started container webserver Oct 30 01:17:25.698: INFO: POD NODE PHASE GRACE CONDITIONS Oct 30 01:17:25.698: INFO: test-pod node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:12:21 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:12:24 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:12:24 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:12:21 +0000 UTC }] Oct 30 01:17:25.698: INFO: Oct 30 01:17:25.701: INFO: Logging node info for node master1 Oct 30 01:17:25.704: INFO: Node Info: &Node{ObjectMeta:{master1 b47c04d5-47a7-4a95-8e97-481e6e60af54 90101 0 2021-10-29 21:05:34 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-29 21:05:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-10-29 21:05:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-10-29 21:08:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2021-10-29 21:13:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:27 +0000 UTC,LastTransitionTime:2021-10-29 21:11:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 01:17:22 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 01:17:22 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 01:17:22 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 01:17:22 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5d3ed60c561e427db72df14bd9006ed0,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:01b9d6bc-4126-4864-a1df-901a1bee4906,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:b7d2fa8154ac5d9cff45866e4d3d210a7d390f8576611c301a2eed2b57273227 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 01:17:25.704: INFO: Logging kubelet events for node master1 Oct 30 01:17:25.706: INFO: Logging pods the kubelet 
thinks is on node master1 Oct 30 01:17:25.728: INFO: kube-apiserver-master1 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:25.728: INFO: Container kube-apiserver ready: true, restart count 0 Oct 30 01:17:25.728: INFO: kube-controller-manager-master1 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:25.728: INFO: Container kube-controller-manager ready: true, restart count 2 Oct 30 01:17:25.728: INFO: kube-flannel-d4pmt started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 01:17:25.728: INFO: Init container install-cni ready: true, restart count 0 Oct 30 01:17:25.728: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 01:17:25.728: INFO: kube-multus-ds-amd64-wgkfq started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:25.728: INFO: Container kube-multus ready: true, restart count 1 Oct 30 01:17:25.728: INFO: kube-scheduler-master1 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:25.728: INFO: Container kube-scheduler ready: true, restart count 0 Oct 30 01:17:25.728: INFO: kube-proxy-z5k8p started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:25.728: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 01:17:25.728: INFO: coredns-8474476ff8-lczbr started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:25.728: INFO: Container coredns ready: true, restart count 1 Oct 30 01:17:25.728: INFO: container-registry-65d7c44b96-zzkfl started at 2021-10-29 21:12:56 +0000 UTC (0+2 container statuses recorded) Oct 30 01:17:25.728: INFO: Container docker-registry ready: true, restart count 0 Oct 30 01:17:25.728: INFO: Container nginx ready: true, restart count 0 Oct 30 01:17:25.728: INFO: node-exporter-fv84w started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 01:17:25.729: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:17:25.729: INFO: Container node-exporter ready: true, restart count 0 W1030 01:17:25.742539 23 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
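------------------------------
Before the remaining node dumps: the FailedCreate events above are the whole story of this failure. Every PodSecurityPolicy the StatefulSet's service account can use allowlists hostPort to [9103-9104], [9100], or nothing at all, so admission rejects ss-0's hostPort 21017 and the controller can never create the pod; test-pod only ran because it was admitted under the privileged policy (see its kubernetes.io/psp: privileged annotation). After about five minutes waiting for ss-0 to appear, the spec gives up. A hedged sketch of the kind of hostPorts stanza involved, with an illustrative policy name (note that PodSecurityPolicy was removed in v1.25):

package main

import (
	"fmt"

	policyv1beta1 "k8s.io/api/policy/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostPortPSP returns a PodSecurityPolicy that only admits pods whose
// hostPort falls inside 9103-9104; a pod requesting hostPort 21017, like
// ss-0 above, fails admission against it.
func hostPortPSP() *policyv1beta1.PodSecurityPolicy {
	return &policyv1beta1.PodSecurityPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "allow-9103-9104"},
		Spec: policyv1beta1.PodSecurityPolicySpec{
			HostPorts: []policyv1beta1.HostPortRange{{Min: 9103, Max: 9104}},
			// Minimal strategy fields required for a valid PSP spec:
			SELinux:            policyv1beta1.SELinuxStrategyOptions{Rule: policyv1beta1.SELinuxStrategyRunAsAny},
			RunAsUser:          policyv1beta1.RunAsUserStrategyOptions{Rule: policyv1beta1.RunAsUserStrategyRunAsAny},
			SupplementalGroups: policyv1beta1.SupplementalGroupsStrategyOptions{Rule: policyv1beta1.SupplementalGroupsStrategyRunAsAny},
			FSGroup:            policyv1beta1.FSGroupStrategyOptions{Rule: policyv1beta1.FSGroupStrategyRunAsAny},
		},
	}
}

func main() { fmt.Println(hostPortPSP().Name) }
------------------------------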
Oct 30 01:17:25.818: INFO: Latency metrics for node master1 Oct 30 01:17:25.818: INFO: Logging node info for node master2 Oct 30 01:17:25.820: INFO: Node Info: &Node{ObjectMeta:{master2 208792d3-d365-4ddb-83d4-10e6e818079c 89986 0 2021-10-29 21:06:06 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-29 21:06:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-29 21:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-10-29 21:18:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:19 +0000 
UTC,LastTransitionTime:2021-10-29 21:11:19 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 01:17:19 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 01:17:19 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 01:17:19 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 01:17:19 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:12290c1916d84ddda20431c28083da6a,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:314e82b8-9747-4131-b883-220496309995,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 01:17:25.820: INFO: Logging kubelet events for node master2 Oct 30 01:17:25.822: INFO: Logging pods the kubelet thinks is on node master2 Oct 30 01:17:25.828: INFO: kube-controller-manager-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:25.828: INFO: Container kube-controller-manager ready: true, restart count 3 Oct 30 01:17:25.828: INFO: kube-scheduler-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:25.828: INFO: Container kube-scheduler ready: true, restart count 2 Oct 30 01:17:25.828: INFO: kube-proxy-5gz4v started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:25.828: INFO: Container kube-proxy ready: true, restart count 2 Oct 30 01:17:25.828: INFO: kube-flannel-qvqll started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 01:17:25.828: INFO: Init container install-cni ready: true, restart count 2 Oct 30 01:17:25.828: INFO: Container kube-flannel ready: true, restart count 1 Oct 30 01:17:25.828: INFO: kube-multus-ds-amd64-brkpk started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:25.828: INFO: Container kube-multus ready: true, restart count 1 Oct 30 01:17:25.828: INFO: node-exporter-lc9kk started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 01:17:25.828: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:17:25.828: INFO: Container node-exporter ready: true, restart count 0 Oct 30 01:17:25.828: INFO: kube-apiserver-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:25.828: INFO: Container kube-apiserver ready: true, restart count 0 W1030 01:17:25.845870 23 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Oct 30 01:17:25.917: INFO: Latency metrics for node master2 Oct 30 01:17:25.917: INFO: Logging node info for node master3 Oct 30 01:17:25.920: INFO: Node Info: &Node{ObjectMeta:{master3 168f1589-e029-47ae-b194-10215fc22d6a 89933 0 2021-10-29 21:06:17 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-29 21:06:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-29 21:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-10-29 21:16:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-10-29 21:16:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 
0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:36 +0000 UTC,LastTransitionTime:2021-10-29 21:11:36 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 01:17:18 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 01:17:18 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 01:17:18 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 01:17:18 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:de18dcb6cb4c493e9f4d987da2c8b3fd,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:89235c4b-b1f5-4716-bbd7-18b41c0bde74,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 01:17:25.920: INFO: Logging kubelet events for node master3 Oct 30 01:17:25.922: INFO: Logging pods the kubelet thinks is on node master3 Oct 30 01:17:25.932: INFO: kube-controller-manager-master3 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:25.932: INFO: Container kube-controller-manager ready: true, restart count 1 Oct 30 01:17:25.932: INFO: kube-proxy-r6fpx started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:25.932: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 01:17:25.932: INFO: kube-flannel-rbdlt started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 01:17:25.933: INFO: Init container install-cni ready: true, restart count 2 Oct 30 01:17:25.933: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 01:17:25.933: INFO: kube-multus-ds-amd64-bdwh9 started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:25.933: INFO: Container kube-multus ready: true, restart count 1 Oct 30 01:17:25.933: INFO: coredns-8474476ff8-wrwwv started at 2021-10-29 21:09:00 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:25.933: INFO: Container coredns ready: true, restart count 1 Oct 30 01:17:25.933: INFO: prometheus-operator-585ccfb458-czbr2 started at 2021-10-29 21:21:06 +0000 UTC (0+2 container statuses recorded) Oct 30 01:17:25.933: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:17:25.933: INFO: Container prometheus-operator ready: true, restart count 0 Oct 30 01:17:25.933: INFO: node-exporter-bv946 started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 01:17:25.933: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 
01:17:25.933: INFO: Container node-exporter ready: true, restart count 0 Oct 30 01:17:25.933: INFO: kube-apiserver-master3 started at 2021-10-29 21:11:10 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:25.933: INFO: Container kube-apiserver ready: true, restart count 0 Oct 30 01:17:25.933: INFO: kube-scheduler-master3 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:25.933: INFO: Container kube-scheduler ready: true, restart count 2 Oct 30 01:17:25.933: INFO: dns-autoscaler-7df78bfcfb-phsdx started at 2021-10-29 21:09:02 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:25.933: INFO: Container autoscaler ready: true, restart count 1 Oct 30 01:17:25.933: INFO: node-feature-discovery-controller-cff799f9f-qq7g4 started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:25.933: INFO: Container nfd-controller ready: true, restart count 0 W1030 01:17:25.948509 23 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 01:17:26.031: INFO: Latency metrics for node master3 Oct 30 01:17:26.031: INFO: Logging node info for node node1 Oct 30 01:17:26.033: INFO: Node Info: &Node{ObjectMeta:{node1 ddef9269-94c5-4165-81fb-a3b0c4ac5c75 89877 0 2021-10-29 21:07:27 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 
feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-29 21:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-29 21:19:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-29 21:19:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:38 +0000 UTC,LastTransitionTime:2021-10-29 21:11:38 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 01:17:17 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 01:17:17 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 01:17:17 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 01:17:17 +0000 UTC,LastTransitionTime:2021-10-29 21:08:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3bf4179125e4495c89c046ed0ae7baf7,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:ce868148-dc5e-4c7c-a555-42ee929547f7,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432289,},ContainerImage{Names:[localhost:30500/cmk@sha256:430843a71fa03faf488543c9f5b50d3efbef49988d6784f9f48b8077cc806f60 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2f1ff7ac170c0ac8079e232ea4ee89d23b7906d1b824d901927acb4e399c52c9 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 01:17:26.034: INFO: Logging kubelet events for node node1 Oct 30 01:17:26.036: INFO: Logging pods the kubelet thinks is on node node1 Oct 30 01:17:26.577: INFO: node-exporter-256wm started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 01:17:26.577: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:17:26.577: INFO: Container node-exporter ready: true, restart count 0 Oct 30 01:17:26.577: INFO: prometheus-k8s-0 started at 2021-10-29 21:21:17 +0000 UTC (0+4 container statuses recorded) Oct 30 01:17:26.577: INFO: Container 
config-reloader ready: true, restart count 0 Oct 30 01:17:26.577: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 30 01:17:26.577: INFO: Container grafana ready: true, restart count 0 Oct 30 01:17:26.577: INFO: Container prometheus ready: true, restart count 1 Oct 30 01:17:26.577: INFO: ss-1 started at 2021-10-30 01:17:08 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:26.577: INFO: Container webserver ready: true, restart count 0 Oct 30 01:17:26.577: INFO: kubernetes-metrics-scraper-5558854cb-5rmjw started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:26.577: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 30 01:17:26.577: INFO: execpodm5dj8 started at 2021-10-30 01:17:07 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:26.577: INFO: Container agnhost-container ready: true, restart count 0 Oct 30 01:17:26.577: INFO: node-feature-discovery-worker-w5vdb started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:26.577: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 01:17:26.577: INFO: frontend-685fc574d5-zslvl started at 2021-10-30 01:17:06 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:26.577: INFO: Container guestbook-frontend ready: false, restart count 0 Oct 30 01:17:26.577: INFO: kube-flannel-phg88 started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 01:17:26.577: INFO: Init container install-cni ready: true, restart count 2 Oct 30 01:17:26.577: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 01:17:26.577: INFO: collectd-d45rv started at 2021-10-29 21:25:13 +0000 UTC (0+3 container statuses recorded) Oct 30 01:17:26.577: INFO: Container collectd ready: true, restart count 0 Oct 30 01:17:26.577: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 01:17:26.577: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 01:17:26.577: INFO: ss2-0 started at 2021-10-30 01:16:10 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:26.577: INFO: Container webserver ready: false, restart count 0 Oct 30 01:17:26.577: INFO: kube-proxy-z5hqt started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:26.577: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 01:17:26.577: INFO: cmk-init-discover-node1-n4mcc started at 2021-10-29 21:19:28 +0000 UTC (0+3 container statuses recorded) Oct 30 01:17:26.577: INFO: Container discover ready: false, restart count 0 Oct 30 01:17:26.577: INFO: Container init ready: false, restart count 0 Oct 30 01:17:26.577: INFO: Container install ready: false, restart count 0 Oct 30 01:17:26.577: INFO: cmk-89lqq started at 2021-10-29 21:20:10 +0000 UTC (0+2 container statuses recorded) Oct 30 01:17:26.577: INFO: Container nodereport ready: true, restart count 0 Oct 30 01:17:26.577: INFO: Container reconcile ready: true, restart count 0 Oct 30 01:17:26.577: INFO: ss2-2 started at 2021-10-30 01:16:52 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:26.577: INFO: Container webserver ready: true, restart count 0 Oct 30 01:17:26.577: INFO: nginx-proxy-node1 started at 2021-10-29 21:07:28 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:26.577: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 01:17:26.577: INFO: ss-2 started at 2021-10-30 01:17:15 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:26.577: INFO: Container webserver ready: true, restart count 0 Oct 30 
01:17:26.577: INFO: pod1 started at 2021-10-30 01:17:18 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:26.577: INFO: Container agnhost-container ready: false, restart count 0 Oct 30 01:17:26.577: INFO: var-expansion-afdb7726-47d4-41ee-80ea-1040e733f8da started at 2021-10-30 01:17:20 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:26.577: INFO: Container dapi-container ready: false, restart count 0 Oct 30 01:17:26.577: INFO: kube-multus-ds-amd64-68wrz started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:26.577: INFO: Container kube-multus ready: true, restart count 1 Oct 30 01:17:26.577: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-t789r started at 2021-10-29 21:17:10 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:26.577: INFO: Container kube-sriovdp ready: true, restart count 0 W1030 01:17:26.592240 23 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 01:17:27.035: INFO: Latency metrics for node node1 Oct 30 01:17:27.035: INFO: Logging node info for node node2 Oct 30 01:17:27.038: INFO: Node Info: &Node{ObjectMeta:{node2 3b49ad19-ba56-4f4a-b1fa-eef102063de9 90098 0 2021-10-29 21:07:28 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 
kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-29 21:08:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-29 21:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-29 21:19:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-29 21:19:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:34 +0000 UTC,LastTransitionTime:2021-10-29 21:11:34 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 01:17:22 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 01:17:22 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 01:17:22 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 01:17:22 +0000 UTC,LastTransitionTime:2021-10-29 21:08:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7283436dd9e34722a6e4df817add95ed,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:c219e7bd-582b-4d6c-b379-1161acc70676,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:430843a71fa03faf488543c9f5b50d3efbef49988d6784f9f48b8077cc806f60 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 
k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2f1ff7ac170c0ac8079e232ea4ee89d23b7906d1b824d901927acb4e399c52c9 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[localhost:30500/tasextender@sha256:b7d2fa8154ac5d9cff45866e4d3d210a7d390f8576611c301a2eed2b57273227 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 01:17:27.038: INFO: Logging kubelet events for node node2 Oct 30 01:17:27.040: INFO: Logging pods the kubelet thinks is on node node2 Oct 30 01:17:27.055: INFO: collectd-flvhl started at 2021-10-29 21:25:13 +0000 UTC (0+3 container statuses recorded) Oct 30 01:17:27.055: INFO: Container collectd ready: true, restart count 0 Oct 30 01:17:27.055: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 01:17:27.055: INFO: Container rbac-proxy 
ready: true, restart count 0 Oct 30 01:17:27.055: INFO: replace-27259277-nr4dl started at 2021-10-30 01:17:00 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:27.055: INFO: Container c ready: true, restart count 0 Oct 30 01:17:27.055: INFO: ss-0 started at 2021-10-30 01:16:37 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:27.055: INFO: Container webserver ready: true, restart count 0 Oct 30 01:17:27.055: INFO: kube-proxy-76285 started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:27.055: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 01:17:27.055: INFO: test-pod started at 2021-10-30 01:12:21 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:27.055: INFO: Container webserver ready: true, restart count 0 Oct 30 01:17:27.055: INFO: logs-generator started at 2021-10-30 01:17:24 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:27.055: INFO: Container logs-generator ready: true, restart count 0 Oct 30 01:17:27.055: INFO: busybox-3c5eb356-7c9f-422a-a988-72ef8715868d started at 2021-10-30 01:17:15 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:27.055: INFO: Container busybox ready: true, restart count 0 Oct 30 01:17:27.055: INFO: node-feature-discovery-worker-h6lcp started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:27.055: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 01:17:27.055: INFO: pod-service-account-defaultsa-nomountspec started at 2021-10-30 01:16:12 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:27.055: INFO: Container token-test ready: false, restart count 0 Oct 30 01:17:27.055: INFO: kube-flannel-f6s5v started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 01:17:27.055: INFO: Init container install-cni ready: true, restart count 2 Oct 30 01:17:27.055: INFO: Container kube-flannel ready: true, restart count 3 Oct 30 01:17:27.055: INFO: kube-multus-ds-amd64-7tvbl started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:27.055: INFO: Container kube-multus ready: true, restart count 1 Oct 30 01:17:27.055: INFO: cmk-webhook-6c9d5f8578-ffk66 started at 2021-10-29 21:20:11 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:27.055: INFO: Container cmk-webhook ready: true, restart count 0 Oct 30 01:17:27.055: INFO: nginx-proxy-node2 started at 2021-10-29 21:07:28 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:27.055: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 01:17:27.055: INFO: kubernetes-dashboard-785dcbb76d-pbjjt started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:27.055: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 30 01:17:27.055: INFO: cmk-init-discover-node2-2fmmt started at 2021-10-29 21:19:48 +0000 UTC (0+3 container statuses recorded) Oct 30 01:17:27.055: INFO: Container discover ready: false, restart count 0 Oct 30 01:17:27.055: INFO: Container init ready: false, restart count 0 Oct 30 01:17:27.055: INFO: Container install ready: false, restart count 0 Oct 30 01:17:27.055: INFO: agnhost-replica-6bcf79b489-xrqk9 started at 2021-10-30 01:17:07 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:27.055: INFO: Container replica ready: false, restart count 0 Oct 30 01:17:27.055: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg started at 2021-10-29 21:17:10 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:27.055: INFO: Container kube-sriovdp ready: true, 
restart count 0 Oct 30 01:17:27.055: INFO: pod-service-account-mountsa-nomountspec started at 2021-10-30 01:16:12 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:27.055: INFO: Container token-test ready: true, restart count 0 Oct 30 01:17:27.055: INFO: cmk-8bpbf started at 2021-10-29 21:20:11 +0000 UTC (0+2 container statuses recorded) Oct 30 01:17:27.055: INFO: Container nodereport ready: true, restart count 0 Oct 30 01:17:27.055: INFO: Container reconcile ready: true, restart count 0 Oct 30 01:17:27.055: INFO: node-exporter-r77s4 started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 01:17:27.055: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:17:27.055: INFO: Container node-exporter ready: true, restart count 0 Oct 30 01:17:27.055: INFO: tas-telemetry-aware-scheduling-84ff454dfb-989mh started at 2021-10-29 21:24:23 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:27.055: INFO: Container tas-extender ready: true, restart count 0 Oct 30 01:17:27.055: INFO: agnhost-primary-5db8ddd565-85qn7 started at 2021-10-30 01:17:06 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:27.055: INFO: Container primary ready: false, restart count 0 Oct 30 01:17:27.055: INFO: ss2-1 started at 2021-10-30 01:17:12 +0000 UTC (0+1 container statuses recorded) Oct 30 01:17:27.055: INFO: Container webserver ready: true, restart count 0 W1030 01:17:27.074828 23 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 01:17:27.405: INFO: Latency metrics for node node2 Oct 30 01:17:27.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4408" for this suite. • Failure [306.287 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 Should recreate evicted statefulset [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:17:25.200: Pod ss-0 expected to be re-created at least once /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 ------------------------------ {"msg":"FAILED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":11,"skipped":177,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:17:27.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:17:27.476: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Oct 30 01:17:28.499: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:17:29.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-131" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":12,"skipped":199,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:17:29.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Oct 30 01:17:29.560: INFO: Waiting up to 5m0s for pod "security-context-377fdf03-6d42-4489-8d54-f5b0d2b4be04" in namespace "security-context-7701" to be "Succeeded or Failed" Oct 30 01:17:29.567: INFO: Pod "security-context-377fdf03-6d42-4489-8d54-f5b0d2b4be04": Phase="Pending", Reason="", readiness=false. Elapsed: 7.12512ms Oct 30 01:17:31.571: INFO: Pod "security-context-377fdf03-6d42-4489-8d54-f5b0d2b4be04": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010453235s Oct 30 01:17:33.575: INFO: Pod "security-context-377fdf03-6d42-4489-8d54-f5b0d2b4be04": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014810765s STEP: Saw pod success Oct 30 01:17:33.575: INFO: Pod "security-context-377fdf03-6d42-4489-8d54-f5b0d2b4be04" satisfied condition "Succeeded or Failed" Oct 30 01:17:33.577: INFO: Trying to get logs from node node2 pod security-context-377fdf03-6d42-4489-8d54-f5b0d2b4be04 container test-container: STEP: delete the pod Oct 30 01:17:33.589: INFO: Waiting for pod security-context-377fdf03-6d42-4489-8d54-f5b0d2b4be04 to disappear Oct 30 01:17:33.591: INFO: Pod security-context-377fdf03-6d42-4489-8d54-f5b0d2b4be04 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:17:33.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-7701" for this suite. 
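------------------------------ [Editor's note] The Security Context spec above verifies that a container started with explicit container.SecurityContext.RunAsUser and RunAsGroup values actually runs under that UID/GID: the pod runs to Succeeded and its output is checked. As a minimal sketch of the kind of object under test — not the e2e suite's own code; the pod name, image choice, and the 1001/2002 IDs are illustrative — such a pod can be built with the client-go types:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// int64Ptr is a small local helper for optional numeric API fields.
func int64Ptr(i int64) *int64 { return &i }

// securityContextPod builds a pod whose single container runs as an
// explicit UID/GID; the concrete values here are illustrative only.
func securityContextPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "security-context-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
				Command: []string{"sh", "-c", "id -u; id -g"},
				// Container-level SecurityContext takes precedence over
				// any pod-level defaults in pod.Spec.SecurityContext.
				SecurityContext: &corev1.SecurityContext{
					RunAsUser:  int64Ptr(1001),
					RunAsGroup: int64Ptr(2002),
				},
			}},
		},
	}
}

func main() {
	pod := securityContextPod()
	fmt.Printf("%s runs as %d:%d\n", pod.Name,
		*pod.Spec.Containers[0].SecurityContext.RunAsUser,
		*pod.Spec.Containers[0].SecurityContext.RunAsGroup)
}

When both pod-level and container-level security contexts are set, the container-level values win, which is why the assertion targets the container's effective UID/GID. ------------------------------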
• ------------------------------ {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":13,"skipped":206,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:17:18.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service multi-endpoint-test in namespace services-8047 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8047 to expose endpoints map[] Oct 30 01:17:18.361: INFO: successfully validated that service multi-endpoint-test in namespace services-8047 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-8047 Oct 30 01:17:18.375: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:17:20.381: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:17:22.379: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:17:24.379: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:17:26.378: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:17:28.379: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:17:30.378: INFO: The status of Pod pod1 is Running (Ready = true) STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8047 to expose endpoints map[pod1:[100]] Oct 30 01:17:30.387: INFO: successfully validated that service multi-endpoint-test in namespace services-8047 exposes endpoints map[pod1:[100]] STEP: Creating pod pod2 in namespace services-8047 Oct 30 01:17:30.398: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:17:32.402: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:17:34.403: INFO: The status of Pod pod2 is Running (Ready = true) STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8047 to expose endpoints map[pod1:[100] pod2:[101]] Oct 30 01:17:34.415: INFO: successfully validated that service multi-endpoint-test in namespace services-8047 exposes endpoints map[pod1:[100] pod2:[101]] STEP: Deleting pod pod1 in namespace services-8047 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8047 to expose endpoints map[pod2:[101]] Oct 30 01:17:34.431: INFO: successfully validated that service multi-endpoint-test in namespace services-8047 exposes endpoints map[pod2:[101]] STEP: Deleting pod pod2 in namespace services-8047 STEP: waiting up to 3m0s for service 
multi-endpoint-test in namespace services-8047 to expose endpoints map[] Oct 30 01:17:34.443: INFO: successfully validated that service multi-endpoint-test in namespace services-8047 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:17:34.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8047" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:16.128 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":-1,"completed":25,"skipped":334,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:17:07.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Oct 30 01:17:07.631: INFO: >>> kubeConfig: /root/.kube/config Oct 30 01:17:16.143: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:17:35.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5070" for this suite. 
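------------------------------ [Editor's note] The CustomResourcePublishOpenAPI spec above creates two CRDs that share a group and version but declare different kinds, then confirms both kinds appear in the server's published OpenAPI document. A minimal sketch of such a pair using the apiextensions.k8s.io/v1 Go types — the group name, plurals, and kinds below are placeholders, not the test's fixtures:

package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

// newCRD returns a namespaced CRD in a shared placeholder group; only
// the plural/kind differ between the two objects built in main.
func newCRD(plural, kind string) *apiextensionsv1.CustomResourceDefinition {
	group := "crd-publish-openapi-test.example.com" // placeholder group
	return &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: plural + "." + group},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: group,
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: plural,
				Kind:   kind,
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name:    "v1",
				Served:  true,
				Storage: true,
				// v1 CRDs require a structural schema; this one is
				// deliberately permissive for the sketch.
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
						Type:                   "object",
						XPreserveUnknownFields: boolPtr(true),
					},
				},
			}},
		},
	}
}

func main() {
	for _, crd := range []*apiextensionsv1.CustomResourceDefinition{
		newCRD("foos", "Foo"),
		newCRD("bars", "Bar"),
	} {
		fmt.Println(crd.Name, "->", crd.Spec.Names.Kind)
	}
}

Once both CRDs reach the Established condition, both kinds should be discoverable under the same group/version in the aggregated OpenAPI schema (served at /openapi/v2 on a v1.21 cluster like this one). ------------------------------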
• [SLOW TEST:28.356 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":15,"skipped":164,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:17:36.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 30 01:17:36.059: INFO: Waiting up to 5m0s for pod "downwardapi-volume-62908a23-3d2d-463f-9ab5-1b0774a9ee65" in namespace "downward-api-4706" to be "Succeeded or Failed" Oct 30 01:17:36.061: INFO: Pod "downwardapi-volume-62908a23-3d2d-463f-9ab5-1b0774a9ee65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.146773ms Oct 30 01:17:38.064: INFO: Pod "downwardapi-volume-62908a23-3d2d-463f-9ab5-1b0774a9ee65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004560603s Oct 30 01:17:40.067: INFO: Pod "downwardapi-volume-62908a23-3d2d-463f-9ab5-1b0774a9ee65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007806516s STEP: Saw pod success Oct 30 01:17:40.067: INFO: Pod "downwardapi-volume-62908a23-3d2d-463f-9ab5-1b0774a9ee65" satisfied condition "Succeeded or Failed" Oct 30 01:17:40.069: INFO: Trying to get logs from node node2 pod downwardapi-volume-62908a23-3d2d-463f-9ab5-1b0774a9ee65 container client-container: STEP: delete the pod Oct 30 01:17:40.083: INFO: Waiting for pod downwardapi-volume-62908a23-3d2d-463f-9ab5-1b0774a9ee65 to disappear Oct 30 01:17:40.085: INFO: Pod downwardapi-volume-62908a23-3d2d-463f-9ab5-1b0774a9ee65 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:17:40.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4706" for this suite. 
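------------------------------ [Editor's note] The Downward API spec above exercises a documented fallback: a downwardAPI volume file backed by resourceFieldRef limits.memory is populated with the node's allocatable memory when the container declares no memory limit of its own. A minimal sketch of such a pod — the pod name and image are illustrative, not the test's fixture:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIPod projects the container's effective memory limit into a
// file; since no limit is declared, the kubelet substitutes the node's
// allocatable memory as the default.
func downwardAPIPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				// Deliberately no resources.limits.memory here.
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				},
			}},
		},
	}
}

func main() { fmt.Println(downwardAPIPod().ObjectMeta.Name) }

Reading /etc/podinfo/memory_limit inside the container then yields the node-allocatable figure, which is what the test checks the pod's output against. ------------------------------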
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":201,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:17:24.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1386 STEP: creating an pod Oct 30 01:17:24.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2466 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.32 --restart=Never -- logs-generator --log-lines-total 100 --run-duration 20s' Oct 30 01:17:24.360: INFO: stderr: "" Oct 30 01:17:24.360: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for log generator to start. Oct 30 01:17:24.361: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Oct 30 01:17:24.361: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-2466" to be "running and ready, or succeeded" Oct 30 01:17:24.363: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.319871ms Oct 30 01:17:26.366: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005649745s Oct 30 01:17:28.374: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.013096834s Oct 30 01:17:28.374: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Oct 30 01:17:28.374: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for a matching strings Oct 30 01:17:28.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2466 logs logs-generator logs-generator' Oct 30 01:17:28.542: INFO: stderr: "" Oct 30 01:17:28.543: INFO: stdout: "I1030 01:17:26.276886 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/f9w 575\nI1030 01:17:26.477906 1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/mnwt 222\nI1030 01:17:26.677238 1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/gtq7 438\nI1030 01:17:26.877620 1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/4cbd 519\nI1030 01:17:27.076948 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/qqxp 434\nI1030 01:17:27.277207 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/7555 275\nI1030 01:17:27.477479 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/9ll8 592\nI1030 01:17:27.677823 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/kkw 470\nI1030 01:17:27.876935 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/tk9z 332\nI1030 01:17:28.077245 1 logs_generator.go:76] 9 GET /api/v1/namespaces/kube-system/pods/zfk 577\nI1030 01:17:28.277655 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/v56b 324\nI1030 01:17:28.476934 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/kube-system/pods/j2lp 365\n" STEP: limiting log lines Oct 30 01:17:28.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2466 logs logs-generator logs-generator --tail=1' Oct 30 01:17:28.714: INFO: stderr: "" Oct 30 01:17:28.714: INFO: stdout: "I1030 01:17:28.677219 1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/7z9j 241\n" Oct 30 01:17:28.714: INFO: got output "I1030 01:17:28.677219 1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/7z9j 241\n" STEP: limiting log bytes Oct 30 01:17:28.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2466 logs logs-generator logs-generator --limit-bytes=1' Oct 30 01:17:28.879: INFO: stderr: "" Oct 30 01:17:28.879: INFO: stdout: "I" Oct 30 01:17:28.879: INFO: got output "I" STEP: exposing timestamps Oct 30 01:17:28.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2466 logs logs-generator logs-generator --tail=1 --timestamps' Oct 30 01:17:29.036: INFO: stderr: "" Oct 30 01:17:29.036: INFO: stdout: "2021-10-30T01:17:28.877687144Z I1030 01:17:28.877606 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/rmz9 254\n" Oct 30 01:17:29.036: INFO: got output "2021-10-30T01:17:28.877687144Z I1030 01:17:28.877606 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/rmz9 254\n" STEP: restricting to a time range Oct 30 01:17:31.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2466 logs logs-generator logs-generator --since=1s' Oct 30 01:17:31.702: INFO: stderr: "" Oct 30 01:17:31.702: INFO: stdout: "I1030 01:17:30.877192 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/kube-system/pods/c9z 367\nI1030 01:17:31.077590 1 logs_generator.go:76] 24 GET /api/v1/namespaces/default/pods/zkxp 389\nI1030 01:17:31.277035 1 logs_generator.go:76] 25 PUT /api/v1/namespaces/default/pods/czb 405\nI1030 01:17:31.477363 1 logs_generator.go:76] 26 PUT /api/v1/namespaces/ns/pods/9t9z 471\nI1030 01:17:31.677803 1 logs_generator.go:76] 27 POST 
/api/v1/namespaces/kube-system/pods/wb7d 509\n" Oct 30 01:17:31.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2466 logs logs-generator logs-generator --since=24h' Oct 30 01:17:31.891: INFO: stderr: "" Oct 30 01:17:31.891: INFO: stdout: "I1030 01:17:26.276886 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/f9w 575\nI1030 01:17:26.477906 1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/mnwt 222\nI1030 01:17:26.677238 1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/gtq7 438\nI1030 01:17:26.877620 1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/4cbd 519\nI1030 01:17:27.076948 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/qqxp 434\nI1030 01:17:27.277207 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/7555 275\nI1030 01:17:27.477479 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/9ll8 592\nI1030 01:17:27.677823 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/kkw 470\nI1030 01:17:27.876935 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/tk9z 332\nI1030 01:17:28.077245 1 logs_generator.go:76] 9 GET /api/v1/namespaces/kube-system/pods/zfk 577\nI1030 01:17:28.277655 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/v56b 324\nI1030 01:17:28.476934 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/kube-system/pods/j2lp 365\nI1030 01:17:28.677219 1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/7z9j 241\nI1030 01:17:28.877606 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/rmz9 254\nI1030 01:17:29.076938 1 logs_generator.go:76] 14 POST /api/v1/namespaces/default/pods/64f 471\nI1030 01:17:29.277243 1 logs_generator.go:76] 15 POST /api/v1/namespaces/default/pods/nsg 546\nI1030 01:17:29.477626 1 logs_generator.go:76] 16 POST /api/v1/namespaces/kube-system/pods/922 339\nI1030 01:17:29.676963 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/kube-system/pods/hfsq 291\nI1030 01:17:29.877238 1 logs_generator.go:76] 18 GET /api/v1/namespaces/ns/pods/8vlw 509\nI1030 01:17:30.093370 1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/tn2 232\nI1030 01:17:30.277688 1 logs_generator.go:76] 20 POST /api/v1/namespaces/kube-system/pods/mjw5 451\nI1030 01:17:30.477741 1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/zb8h 215\nI1030 01:17:30.677010 1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/w2pf 325\nI1030 01:17:30.877192 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/kube-system/pods/c9z 367\nI1030 01:17:31.077590 1 logs_generator.go:76] 24 GET /api/v1/namespaces/default/pods/zkxp 389\nI1030 01:17:31.277035 1 logs_generator.go:76] 25 PUT /api/v1/namespaces/default/pods/czb 405\nI1030 01:17:31.477363 1 logs_generator.go:76] 26 PUT /api/v1/namespaces/ns/pods/9t9z 471\nI1030 01:17:31.677803 1 logs_generator.go:76] 27 POST /api/v1/namespaces/kube-system/pods/wb7d 509\nI1030 01:17:31.877074 1 logs_generator.go:76] 28 PUT /api/v1/namespaces/kube-system/pods/prn 205\n" [AfterEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1391 Oct 30 01:17:31.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2466 delete pod logs-generator' Oct 30 01:17:42.925: INFO: stderr: "" Oct 30 01:17:42.925: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:17:42.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2466" for this suite. • [SLOW TEST:18.750 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1383 should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":-1,"completed":27,"skipped":362,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:17:20.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:17:28.275: INFO: Deleting pod "var-expansion-afdb7726-47d4-41ee-80ea-1040e733f8da" in namespace "var-expansion-5864" Oct 30 01:17:28.279: INFO: Wait up to 5m0s for pod "var-expansion-afdb7726-47d4-41ee-80ea-1040e733f8da" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:17:44.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5864" for this suite. 
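The kubectl flags exercised in the Kubectl logs case above (--tail, --limit-bytes, --timestamps, --since) map one-to-one onto fields of client-go's PodLogOptions, so the same filtering can be done programmatically. A sketch against the pod and namespace named in that log, assuming client-go v0.21.x and that the pod is still running:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	tail := int64(1)  // --tail=1
	limit := int64(1) // --limit-bytes=1
	since := int64(1) // --since=1s
	opts := &corev1.PodLogOptions{
		Container:    "logs-generator",
		TailLines:    &tail,
		LimitBytes:   &limit,
		SinceSeconds: &since,
		Timestamps:   true, // --timestamps
	}
	// GetLogs builds the same /log subresource request that kubectl issues.
	raw, err := cs.CoreV1().Pods("kubectl-2466").GetLogs("logs-generator", opts).Do(context.TODO()).Raw()
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", raw)
}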
• [SLOW TEST:24.063 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:17:40.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-8e65d160-d9fe-480e-92ff-fd2f1f3d7e5e STEP: Creating a pod to test consume configMaps Oct 30 01:17:40.185: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b13b27d6-085c-4d02-8c3f-3a0954410a10" in namespace "projected-314" to be "Succeeded or Failed" Oct 30 01:17:40.187: INFO: Pod "pod-projected-configmaps-b13b27d6-085c-4d02-8c3f-3a0954410a10": Phase="Pending", Reason="", readiness=false. Elapsed: 1.924642ms Oct 30 01:17:42.190: INFO: Pod "pod-projected-configmaps-b13b27d6-085c-4d02-8c3f-3a0954410a10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005416609s Oct 30 01:17:44.193: INFO: Pod "pod-projected-configmaps-b13b27d6-085c-4d02-8c3f-3a0954410a10": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008216067s Oct 30 01:17:46.196: INFO: Pod "pod-projected-configmaps-b13b27d6-085c-4d02-8c3f-3a0954410a10": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.011602011s STEP: Saw pod success Oct 30 01:17:46.197: INFO: Pod "pod-projected-configmaps-b13b27d6-085c-4d02-8c3f-3a0954410a10" satisfied condition "Succeeded or Failed" Oct 30 01:17:46.199: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-b13b27d6-085c-4d02-8c3f-3a0954410a10 container agnhost-container: STEP: delete the pod Oct 30 01:17:46.211: INFO: Waiting for pod pod-projected-configmaps-b13b27d6-085c-4d02-8c3f-3a0954410a10 to disappear Oct 30 01:17:46.213: INFO: Pod pod-projected-configmaps-b13b27d6-085c-4d02-8c3f-3a0954410a10 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:17:46.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-314" for this suite. 
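The Projected configMap case above mounts a ConfigMap through a projected volume and asserts that the files carry the requested defaultMode. A sketch of the equivalent objects, assuming client-go v0.21.x; the names, the 0440 mode, and the data key are illustrative:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-configmap-demo"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	if _, err := cs.CoreV1().ConfigMaps("default").Create(ctx, cm, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	mode := int32(0440) // every projected file gets this mode
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "ls -l /etc/projected && cat /etc/projected/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "projected-volume", MountPath: "/etc/projected",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
							},
						}},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}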
• [SLOW TEST:6.070 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":232,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:17:42.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:17:46.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-2254" for this suite. 
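The Sysctls case above sets kernel.shm_rmid_forced through the pod-level security context and then reads the value back inside the container. A sketch of such a pod spec, assuming client-go v0.21.x; whether a given sysctl additionally needs the kubelet's --allowed-unsafe-sysctls list depends on the cluster configuration:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "sysctl-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// Sysctls are applied per pod, not per container.
			SecurityContext: &corev1.PodSecurityContext{
				Sysctls: []corev1.Sysctl{{Name: "kernel.shm_rmid_forced", Value: "1"}},
			},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox:1.29",
				// The suite checks the applied value via the pod's logs.
				Command: []string{"sh", "-c", "cat /proc/sys/kernel/shm_rmid_forced"},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}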
• ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":28,"skipped":365,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:15:25.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-717 [It] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a new StatefulSet Oct 30 01:15:25.162: INFO: Found 0 stateful pods, waiting for 3 Oct 30 01:15:35.169: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Oct 30 01:15:35.169: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Oct 30 01:15:35.169: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Oct 30 01:15:35.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-717 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 30 01:15:35.403: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Oct 30 01:15:35.403: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 30 01:15:35.403: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1 Oct 30 01:15:45.431: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Oct 30 01:15:55.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-717 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:15:55.687: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Oct 30 01:15:55.687: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 30 01:15:55.687: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 30 01:16:15.704: INFO: Waiting for StatefulSet statefulset-717/ss2 to complete update STEP: Rolling back to a previous revision Oct 30 01:16:25.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-717 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 30 01:16:26.314: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Oct 30 01:16:26.314: INFO: 
stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 30 01:16:26.314: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 30 01:16:36.347: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Oct 30 01:16:46.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-717 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:16:46.809: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Oct 30 01:16:46.810: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 30 01:16:46.810: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 30 01:16:56.826: INFO: Waiting for StatefulSet statefulset-717/ss2 to complete update Oct 30 01:16:56.826: INFO: Waiting for Pod statefulset-717/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Oct 30 01:16:56.826: INFO: Waiting for Pod statefulset-717/ss2-1 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Oct 30 01:17:06.834: INFO: Waiting for StatefulSet statefulset-717/ss2 to complete update Oct 30 01:17:06.834: INFO: Waiting for Pod statefulset-717/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Oct 30 01:17:06.834: INFO: Waiting for Pod statefulset-717/ss2-1 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Oct 30 01:17:16.833: INFO: Waiting for StatefulSet statefulset-717/ss2 to complete update Oct 30 01:17:16.833: INFO: Waiting for Pod statefulset-717/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Oct 30 01:17:26.832: INFO: Waiting for StatefulSet statefulset-717/ss2 to complete update [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Oct 30 01:17:36.833: INFO: Deleting all statefulset in ns statefulset-717 Oct 30 01:17:36.835: INFO: Scaling statefulset ss2 to 0 Oct 30 01:17:56.848: INFO: Waiting for statefulset status.replicas updated to 0 Oct 30 01:17:56.851: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:17:56.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-717" for this suite. 
• [SLOW TEST:151.737 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":28,"skipped":661,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":-1,"completed":12,"skipped":144,"failed":0} [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:17:44.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 30 01:17:44.779: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Oct 30 01:17:46.787: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153464, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153464, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153464, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153464, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 30 01:17:49.798: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:17:49.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3134-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version 
STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:17:57.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2617" for this suite. STEP: Destroying namespace "webhook-2617-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.669 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":13,"skipped":144,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:17:34.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: set up a multi version CRD Oct 30 01:17:34.507: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:17:58.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4348" for this suite. 
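The CustomResourcePublishOpenAPI cases above and below validate the aggregated swagger document the apiserver serves at /openapi/v2; renaming a served CRD version must show up there while other versions stay untouched. A sketch of fetching that document with client-go's discovery REST client, assuming client-go v0.21.x; the substring being searched for is a placeholder, not the suite's actual definition name:

package main

import (
	"bytes"
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// The aggregated OpenAPI v2 document includes one definition per
	// published CRD version; this is what the test re-reads after renaming.
	raw, err := cs.Discovery().RESTClient().Get().AbsPath("/openapi/v2").Do(context.TODO()).Raw()
	if err != nil {
		panic(err)
	}
	fmt.Println("spec bytes:", len(raw),
		"mentions CRD:", bytes.Contains(raw, []byte("e2e-test-crd-publish-openapi")))
}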
• [SLOW TEST:23.735 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":26,"skipped":349,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:17:56.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Oct 30 01:17:56.983: INFO: Waiting up to 5m0s for pod "security-context-e0a5757f-ed12-4903-98c6-2da48c73e7dd" in namespace "security-context-7409" to be "Succeeded or Failed" Oct 30 01:17:56.985: INFO: Pod "security-context-e0a5757f-ed12-4903-98c6-2da48c73e7dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026147ms Oct 30 01:17:58.989: INFO: Pod "security-context-e0a5757f-ed12-4903-98c6-2da48c73e7dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005452995s Oct 30 01:18:00.993: INFO: Pod "security-context-e0a5757f-ed12-4903-98c6-2da48c73e7dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01024713s STEP: Saw pod success Oct 30 01:18:00.993: INFO: Pod "security-context-e0a5757f-ed12-4903-98c6-2da48c73e7dd" satisfied condition "Succeeded or Failed" Oct 30 01:18:00.996: INFO: Trying to get logs from node node1 pod security-context-e0a5757f-ed12-4903-98c6-2da48c73e7dd container test-container: STEP: delete the pod Oct 30 01:18:01.011: INFO: Waiting for pod security-context-e0a5757f-ed12-4903-98c6-2da48c73e7dd to disappear Oct 30 01:18:01.013: INFO: Pod security-context-e0a5757f-ed12-4903-98c6-2da48c73e7dd no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:18:01.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-7409" for this suite. 
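The Security Context case above needs only two pod-level fields, runAsUser and runAsGroup, which the container can verify with id. A sketch, assuming client-go v0.21.x; the 1001/2002 IDs and all names are illustrative:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	uid, gid := int64(1001), int64(2002)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "security-context-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser:  &uid, // pod.Spec.SecurityContext.RunAsUser
				RunAsGroup: &gid, // pod.Spec.SecurityContext.RunAsGroup
			},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "id -u; id -g"}, // should print 1001 then 2002
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}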
• ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":29,"skipped":702,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:16:25.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W1030 01:16:25.667512 35 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should replace jobs when ReplaceConcurrent [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ReplaceConcurrent cronjob STEP: Ensuring a job is scheduled STEP: Ensuring exactly one is scheduled STEP: Ensuring exactly one running job exists by listing jobs explicitly STEP: Ensuring the job is replaced with a new one STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:18:01.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-6474" for this suite. • [SLOW TEST:96.046 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should replace jobs when ReplaceConcurrent [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":-1,"completed":13,"skipped":290,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:17:57.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:17:58.038: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"2f9f6b46-ef68-429f-abb9-340b32544bdf", Controller:(*bool)(0xc005ec3b92), BlockOwnerDeletion:(*bool)(0xc005ec3b93)}} Oct 30 01:17:58.043: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"6708aec3-47af-4ffe-9847-d739cb04c869", Controller:(*bool)(0xc003b59eca), BlockOwnerDeletion:(*bool)(0xc003b59ecb)}} Oct 30 01:17:58.046: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"49d225e8-a222-4c05-9b29-a704c1bc6349", Controller:(*bool)(0xc005cd8a12), 
BlockOwnerDeletion:(*bool)(0xc005cd8a13)}} [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:18:03.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4988" for this suite. • [SLOW TEST:5.075 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":14,"skipped":154,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:18:01.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 30 01:18:01.134: INFO: Waiting up to 5m0s for pod "downwardapi-volume-80e9d628-66a7-487c-b65e-3e484f3e8d44" in namespace "downward-api-9147" to be "Succeeded or Failed" Oct 30 01:18:01.138: INFO: Pod "downwardapi-volume-80e9d628-66a7-487c-b65e-3e484f3e8d44": Phase="Pending", Reason="", readiness=false. Elapsed: 3.70405ms Oct 30 01:18:03.143: INFO: Pod "downwardapi-volume-80e9d628-66a7-487c-b65e-3e484f3e8d44": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008165148s Oct 30 01:18:05.146: INFO: Pod "downwardapi-volume-80e9d628-66a7-487c-b65e-3e484f3e8d44": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011152057s STEP: Saw pod success Oct 30 01:18:05.146: INFO: Pod "downwardapi-volume-80e9d628-66a7-487c-b65e-3e484f3e8d44" satisfied condition "Succeeded or Failed" Oct 30 01:18:05.148: INFO: Trying to get logs from node node2 pod downwardapi-volume-80e9d628-66a7-487c-b65e-3e484f3e8d44 container client-container: STEP: delete the pod Oct 30 01:18:05.160: INFO: Waiting for pod downwardapi-volume-80e9d628-66a7-487c-b65e-3e484f3e8d44 to disappear Oct 30 01:18:05.163: INFO: Pod downwardapi-volume-80e9d628-66a7-487c-b65e-3e484f3e8d44 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:18:05.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9147" for this suite. 
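The Garbage collector case above builds its dependency circle purely out of metadata.ownerReferences, as the dumped OwnerReference structs show: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2, and deletion must still not deadlock. A sketch of attaching one such reference, assuming client-go v0.21.x; names, namespace, and the pause image are illustrative (the suite patches the references in after creation, since the owner UID is only known then):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	mk := func(name string) *corev1.Pod {
		return &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{Name: name},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{Name: "c", Image: "k8s.gcr.io/pause:3.4.1"}},
			},
		}
	}

	owner, err := cs.CoreV1().Pods("default").Create(ctx, mk("pod-owner"), metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	truth := true
	dep := mk("pod-dependent")
	// The UID must come from the live owner object; these are the same
	// fields visible in the OwnerReference dumps above.
	dep.OwnerReferences = []metav1.OwnerReference{{
		APIVersion:         "v1",
		Kind:               "Pod",
		Name:               owner.Name,
		UID:                owner.UID,
		Controller:         &truth,
		BlockOwnerDeletion: &truth,
	}}
	if _, err := cs.CoreV1().Pods("default").Create(ctx, dep, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// Deleting pod-owner now lets the garbage collector remove pod-dependent.
}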
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":750,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:17:15.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod busybox-3c5eb356-7c9f-422a-a988-72ef8715868d in namespace container-probe-5383 Oct 30 01:17:19.505: INFO: Started pod busybox-3c5eb356-7c9f-422a-a988-72ef8715868d in namespace container-probe-5383 STEP: checking the pod's current state and verifying that restartCount is present Oct 30 01:17:19.507: INFO: Initial restart count of pod busybox-3c5eb356-7c9f-422a-a988-72ef8715868d is 0 Oct 30 01:18:09.689: INFO: Restart count of pod container-probe-5383/busybox-3c5eb356-7c9f-422a-a988-72ef8715868d is now 1 (50.182378516s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:18:09.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5383" for this suite. 
• [SLOW TEST:54.239 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":284,"failed":0} SSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:17:46.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-5122 STEP: creating a selector STEP: Creating the service pods in kubernetes Oct 30 01:17:46.260: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Oct 30 01:17:46.289: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:17:48.292: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:17:50.292: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 01:17:52.291: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 01:17:54.293: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 01:17:56.293: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 01:17:58.292: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 01:18:00.294: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 01:18:02.292: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 01:18:04.298: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 01:18:06.293: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 01:18:08.292: INFO: The status of Pod netserver-0 is Running (Ready = true) Oct 30 01:18:08.297: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Oct 30 01:18:14.320: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Oct 30 01:18:14.320: INFO: Breadth first check of 10.244.3.132 on host 10.10.190.207... Oct 30 01:18:14.322: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.250:9080/dial?request=hostname&protocol=http&host=10.244.3.132&port=8080&tries=1'] Namespace:pod-network-test-5122 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 01:18:14.322: INFO: >>> kubeConfig: /root/.kube/config Oct 30 01:18:14.447: INFO: Waiting for responses: map[] Oct 30 01:18:14.447: INFO: reached 10.244.3.132 after 0/1 tries Oct 30 01:18:14.448: INFO: Breadth first check of 10.244.4.245 on host 10.10.190.208... 
Oct 30 01:18:14.452: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.250:9080/dial?request=hostname&protocol=http&host=10.244.4.245&port=8080&tries=1'] Namespace:pod-network-test-5122 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 01:18:14.452: INFO: >>> kubeConfig: /root/.kube/config Oct 30 01:18:14.534: INFO: Waiting for responses: map[] Oct 30 01:18:14.534: INFO: reached 10.244.4.245 after 0/1 tries Oct 30 01:18:14.534: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:18:14.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5122" for this suite. • [SLOW TEST:28.302 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":242,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:18:09.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating Pod STEP: Reading file content from the nginx-container Oct 30 01:18:15.752: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-6273 PodName:pod-sharedvolume-d83de3f8-d379-4f1b-abbb-2798a32a560e ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 01:18:15.752: INFO: >>> kubeConfig: /root/.kube/config Oct 30 01:18:15.836: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:18:15.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6273" for this suite. 
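The EmptyDir shared-volume case above relies on the fact that one emptyDir volume mounted into two containers of the same pod is a single backing directory. A sketch of such a pod, assuming client-go v0.21.x; container names, mount path, and the busybox image are illustrative and differ from the suite's own fixture:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	mount := []corev1.VolumeMount{{Name: "shared-data", MountPath: "/usr/share/volumeshare"}}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-sharedvolume-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{
					Name:  "writer",
					Image: "busybox:1.29",
					Command: []string{"sh", "-c",
						"echo 'Hello from the writer' > /usr/share/volumeshare/shareddata.txt && sleep 3600"},
					VolumeMounts: mount,
				},
				{
					Name:  "reader",
					Image: "busybox:1.29",
					// Both mounts resolve to the same backing directory, so
					// the reader sees the writer's file.
					Command: []string{"sh", "-c",
						"sleep 5 && cat /usr/share/volumeshare/shareddata.txt && sleep 3600"},
					VolumeMounts: mount,
				},
			},
			Volumes: []corev1.Volume{{
				Name:         "shared-data",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}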
• [SLOW TEST:6.132 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":21,"skipped":287,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:18:15.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on node default medium Oct 30 01:18:15.889: INFO: Waiting up to 5m0s for pod "pod-0a3bc47f-ec76-4b45-afd1-d094c878703b" in namespace "emptydir-5697" to be "Succeeded or Failed" Oct 30 01:18:15.891: INFO: Pod "pod-0a3bc47f-ec76-4b45-afd1-d094c878703b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.569508ms Oct 30 01:18:17.896: INFO: Pod "pod-0a3bc47f-ec76-4b45-afd1-d094c878703b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007491507s Oct 30 01:18:19.900: INFO: Pod "pod-0a3bc47f-ec76-4b45-afd1-d094c878703b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011605241s STEP: Saw pod success Oct 30 01:18:19.900: INFO: Pod "pod-0a3bc47f-ec76-4b45-afd1-d094c878703b" satisfied condition "Succeeded or Failed" Oct 30 01:18:19.904: INFO: Trying to get logs from node node2 pod pod-0a3bc47f-ec76-4b45-afd1-d094c878703b container test-container: STEP: delete the pod Oct 30 01:18:19.934: INFO: Waiting for pod pod-0a3bc47f-ec76-4b45-afd1-d094c878703b to disappear Oct 30 01:18:19.937: INFO: Pod pod-0a3bc47f-ec76-4b45-afd1-d094c878703b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:18:19.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5697" for this suite. 
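The (root,0777,default) triple in the EmptyDir case above encodes the user the test container runs as, the file mode it exercises, and the emptyDir medium; "default" means node-local storage rather than the tmpfs that medium Memory would provide. A sketch showing where the medium is declared, assuming client-go v0.21.x, with an illustrative permission check standing in for the suite's own test binary:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-medium-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox:1.29",
				// Write a file with mode 0777 and show the resulting modes.
				Command: []string{"sh", "-c",
					"touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{
						// "" (StorageMediumDefault) = node disk; use
						// corev1.StorageMediumMemory for a tmpfs-backed volume.
						Medium: corev1.StorageMediumDefault,
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}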
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":292,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:18:14.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1514 [It] should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Oct 30 01:18:14.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4622 run e2e-test-httpd-pod --restart=Never --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1' Oct 30 01:18:14.739: INFO: stderr: "" Oct 30 01:18:14.739: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1518 Oct 30 01:18:14.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4622 delete pods e2e-test-httpd-pod' Oct 30 01:18:22.822: INFO: stderr: "" Oct 30 01:18:22.822: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:18:22.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4622" for this suite. 
• [SLOW TEST:8.283 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1511 should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":-1,"completed":19,"skipped":244,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:17:58.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-5672 STEP: creating a selector STEP: Creating the service pods in kubernetes Oct 30 01:17:58.315: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Oct 30 01:17:58.344: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:18:00.346: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:18:02.347: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 01:18:04.348: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 01:18:06.349: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 01:18:08.350: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 01:18:10.353: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 01:18:12.347: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 01:18:14.350: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 01:18:16.347: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 01:18:18.350: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 01:18:20.348: INFO: The status of Pod netserver-0 is Running (Ready = true) Oct 30 01:18:20.354: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Oct 30 01:18:24.379: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Oct 30 01:18:24.379: INFO: Breadth first check of 10.244.3.134 on host 10.10.190.207... 
Oct 30 01:18:24.382: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.140:9080/dial?request=hostname&protocol=udp&host=10.244.3.134&port=8081&tries=1'] Namespace:pod-network-test-5672 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 01:18:24.382: INFO: >>> kubeConfig: /root/.kube/config Oct 30 01:18:24.470: INFO: Waiting for responses: map[] Oct 30 01:18:24.470: INFO: reached 10.244.3.134 after 0/1 tries Oct 30 01:18:24.471: INFO: Breadth first check of 10.244.4.246 on host 10.10.190.208... Oct 30 01:18:24.474: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.140:9080/dial?request=hostname&protocol=udp&host=10.244.4.246&port=8081&tries=1'] Namespace:pod-network-test-5672 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 01:18:24.474: INFO: >>> kubeConfig: /root/.kube/config Oct 30 01:18:24.555: INFO: Waiting for responses: map[] Oct 30 01:18:24.555: INFO: reached 10.244.4.246 after 0/1 tries Oct 30 01:18:24.555: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:18:24.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5672" for this suite. • [SLOW TEST:26.269 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":390,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:18:20.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] Replicaset should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating replica set "test-rs" that asks for more than the allowed pod quota Oct 30 01:18:20.041: INFO: Pod name sample-pod: Found 0 pods out of 1 Oct 30 01:18:25.051: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the replicaset Spec.Replicas was modified STEP: Patch a scale subresource [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 
01:18:25.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-4026" for this suite. • [SLOW TEST:5.059 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Replicaset should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":-1,"completed":23,"skipped":333,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:17:20.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:17:20.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Oct 30 01:17:28.043: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-30T01:17:28Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-30T01:17:28Z]] name:name1 resourceVersion:90330 uid:2ae0f2e4-5520-4af1-958e-3f7a3462fba1] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Oct 30 01:17:38.048: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-30T01:17:38Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-30T01:17:38Z]] name:name2 resourceVersion:90579 uid:98c90cb6-8192-4012-8007-6bd65f1f0ae7] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Oct 30 01:17:48.053: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-30T01:17:28Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-30T01:17:48Z]] name:name1 resourceVersion:90836 uid:2ae0f2e4-5520-4af1-958e-3f7a3462fba1] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Oct 30 01:17:58.058: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-30T01:17:38Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] 
f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-30T01:17:58Z]] name:name2 resourceVersion:91006 uid:98c90cb6-8192-4012-8007-6bd65f1f0ae7] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Oct 30 01:18:08.064: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-30T01:17:28Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-30T01:17:48Z]] name:name1 resourceVersion:91327 uid:2ae0f2e4-5520-4af1-958e-3f7a3462fba1] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Oct 30 01:18:18.070: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-30T01:17:38Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-30T01:17:58Z]] name:name2 resourceVersion:91467 uid:98c90cb6-8192-4012-8007-6bd65f1f0ae7] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:18:28.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-4502" for this suite. • [SLOW TEST:68.125 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":33,"skipped":648,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:18:28.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-node] Secrets 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:18:28.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6554" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":-1,"completed":34,"skipped":715,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:18:22.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-35.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-35.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 30 01:18:28.923: INFO: DNS probes using dns-35/dns-test-3af38218-5869-4055-ac31-ef2a98a3fa95 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:18:28.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-35" for this suite. 
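------------------------------
Note: the wheezy/jessie probe pods above both run the same loop: query the cluster DNS name and the pod's own A record over UDP (+notcp) and TCP (+tcp) with dig, and write an OK marker file per name for the test to collect. A minimal hand-run equivalent of the cluster-DNS half, assuming kubectl is pointed at any working cluster (pod name is illustrative; busybox:1.28 is used because its nslookup is well-behaved for this check):

  kubectl run dns-probe --image=busybox:1.28 --restart=Never --rm -it -- \
    nslookup kubernetes.default.svc.cluster.local

A reply listing the kubernetes Service ClusterIP corresponds to the wheezy_udp@kubernetes.default.svc.cluster.local probe above succeeding.
------------------------------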
• [SLOW TEST:6.084 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":-1,"completed":20,"skipped":254,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} S ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:18:05.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-configmap-7kxx STEP: Creating a pod to test atomic-volume-subpath Oct 30 01:18:05.228: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-7kxx" in namespace "subpath-164" to be "Succeeded or Failed" Oct 30 01:18:05.231: INFO: Pod "pod-subpath-test-configmap-7kxx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09265ms Oct 30 01:18:07.234: INFO: Pod "pod-subpath-test-configmap-7kxx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005532956s Oct 30 01:18:09.239: INFO: Pod "pod-subpath-test-configmap-7kxx": Phase="Running", Reason="", readiness=true. Elapsed: 4.010187367s Oct 30 01:18:11.242: INFO: Pod "pod-subpath-test-configmap-7kxx": Phase="Running", Reason="", readiness=true. Elapsed: 6.013811687s Oct 30 01:18:13.246: INFO: Pod "pod-subpath-test-configmap-7kxx": Phase="Running", Reason="", readiness=true. Elapsed: 8.017912811s Oct 30 01:18:15.251: INFO: Pod "pod-subpath-test-configmap-7kxx": Phase="Running", Reason="", readiness=true. Elapsed: 10.022967973s Oct 30 01:18:17.255: INFO: Pod "pod-subpath-test-configmap-7kxx": Phase="Running", Reason="", readiness=true. Elapsed: 12.026369013s Oct 30 01:18:19.263: INFO: Pod "pod-subpath-test-configmap-7kxx": Phase="Running", Reason="", readiness=true. Elapsed: 14.034938005s Oct 30 01:18:21.268: INFO: Pod "pod-subpath-test-configmap-7kxx": Phase="Running", Reason="", readiness=true. Elapsed: 16.039485665s Oct 30 01:18:23.273: INFO: Pod "pod-subpath-test-configmap-7kxx": Phase="Running", Reason="", readiness=true. Elapsed: 18.044786062s Oct 30 01:18:25.277: INFO: Pod "pod-subpath-test-configmap-7kxx": Phase="Running", Reason="", readiness=true. Elapsed: 20.04842757s Oct 30 01:18:27.281: INFO: Pod "pod-subpath-test-configmap-7kxx": Phase="Running", Reason="", readiness=true. Elapsed: 22.052436817s Oct 30 01:18:29.285: INFO: Pod "pod-subpath-test-configmap-7kxx": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.056360606s STEP: Saw pod success Oct 30 01:18:29.285: INFO: Pod "pod-subpath-test-configmap-7kxx" satisfied condition "Succeeded or Failed" Oct 30 01:18:29.287: INFO: Trying to get logs from node node2 pod pod-subpath-test-configmap-7kxx container test-container-subpath-configmap-7kxx: STEP: delete the pod Oct 30 01:18:29.304: INFO: Waiting for pod pod-subpath-test-configmap-7kxx to disappear Oct 30 01:18:29.306: INFO: Pod pod-subpath-test-configmap-7kxx no longer exists STEP: Deleting pod pod-subpath-test-configmap-7kxx Oct 30 01:18:29.306: INFO: Deleting pod "pod-subpath-test-configmap-7kxx" in namespace "subpath-164" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:18:29.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-164" for this suite. • [SLOW TEST:24.131 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":-1,"completed":31,"skipped":756,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:16:37.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-9605 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-9605 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9605 Oct 30 01:16:37.317: INFO: Found 0 stateful pods, waiting for 1 Oct 30 01:16:47.321: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Oct 30 01:16:47.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9605 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 30 01:16:47.605: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Oct 30 01:16:47.605: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 30 01:16:47.605: INFO: stdout of mv -v 
/usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 30 01:16:47.608: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Oct 30 01:16:57.611: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Oct 30 01:16:57.611: INFO: Waiting for statefulset status.replicas updated to 0 Oct 30 01:16:57.621: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999439s Oct 30 01:16:58.624: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.997559153s Oct 30 01:16:59.628: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.994070022s Oct 30 01:17:00.632: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.991050019s Oct 30 01:17:01.635: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.986720569s Oct 30 01:17:02.638: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.984091371s Oct 30 01:17:03.642: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.981008428s Oct 30 01:17:04.645: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.976650387s Oct 30 01:17:05.649: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.973397682s Oct 30 01:17:06.653: INFO: Verifying statefulset ss doesn't scale past 1 for another 968.972516ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9605 Oct 30 01:17:07.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9605 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:17:08.331: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Oct 30 01:17:08.331: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 30 01:17:08.331: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 30 01:17:08.333: INFO: Found 1 stateful pods, waiting for 3 Oct 30 01:17:18.337: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Oct 30 01:17:18.337: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Oct 30 01:17:18.337: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Oct 30 01:17:28.337: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Oct 30 01:17:28.337: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Oct 30 01:17:28.337: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Oct 30 01:17:28.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9605 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 30 01:17:28.646: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Oct 30 01:17:28.647: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 30 01:17:28.647: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 30 01:17:28.647: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config --namespace=statefulset-9605 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 30 01:17:29.193: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Oct 30 01:17:29.193: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 30 01:17:29.193: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 30 01:17:29.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9605 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 30 01:17:29.908: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Oct 30 01:17:29.908: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 30 01:17:29.908: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 30 01:17:29.908: INFO: Waiting for statefulset status.replicas updated to 0 Oct 30 01:17:29.911: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Oct 30 01:17:39.925: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Oct 30 01:17:39.925: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Oct 30 01:17:39.925: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Oct 30 01:17:39.933: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999452s Oct 30 01:17:40.937: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996901976s Oct 30 01:17:41.941: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.993494397s Oct 30 01:17:42.947: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.989371114s Oct 30 01:17:43.952: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.982733275s Oct 30 01:17:44.959: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.978236487s Oct 30 01:17:45.963: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.970790807s Oct 30 01:17:46.967: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.96699044s Oct 30 01:17:47.971: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.962311695s Oct 30 01:17:48.979: INFO: Verifying statefulset ss doesn't scale past 3 for another 958.389049ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-9605 Oct 30 01:17:49.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9605 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:17:50.203: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Oct 30 01:17:50.203: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 30 01:17:50.203: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 30 01:17:50.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9605 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:17:50.442: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Oct 30 01:17:50.442: INFO: stdout:
"'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 30 01:17:50.442: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 30 01:17:50.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9605 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:17:50.679: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Oct 30 01:17:50.679: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 30 01:17:50.679: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 30 01:17:50.679: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Oct 30 01:18:30.690: INFO: Deleting all statefulset in ns statefulset-9605 Oct 30 01:18:30.693: INFO: Scaling statefulset ss to 0 Oct 30 01:18:30.702: INFO: Waiting for statefulset status.replicas updated to 0 Oct 30 01:18:30.704: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:18:30.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9605" for this suite. • [SLOW TEST:113.435 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":-1,"completed":19,"skipped":338,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} S ------------------------------ [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:18:30.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace [It] should support CSR API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/certificates.k8s.io STEP: getting /apis/certificates.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Oct 30 01:18:31.329: INFO: starting watch STEP: patching STEP: updating Oct 30 01:18:31.336: INFO: waiting for watch events with expected annotations Oct 30 01:18:31.336: INFO: saw patched and updated 
annotations STEP: getting /approval STEP: patching /approval STEP: updating /approval STEP: getting /status STEP: patching /status STEP: updating /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:18:31.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-5933" for this suite. • ------------------------------ {"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":20,"skipped":339,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:18:29.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-ec041b3e-f269-4ef3-8447-97f43e3c0257 STEP: Creating a pod to test consume secrets Oct 30 01:18:29.368: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bd61faed-8347-4668-a111-beac93eacc7f" in namespace "projected-1791" to be "Succeeded or Failed" Oct 30 01:18:29.370: INFO: Pod "pod-projected-secrets-bd61faed-8347-4668-a111-beac93eacc7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11377ms Oct 30 01:18:31.374: INFO: Pod "pod-projected-secrets-bd61faed-8347-4668-a111-beac93eacc7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005450805s Oct 30 01:18:33.378: INFO: Pod "pod-projected-secrets-bd61faed-8347-4668-a111-beac93eacc7f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009637176s Oct 30 01:18:35.382: INFO: Pod "pod-projected-secrets-bd61faed-8347-4668-a111-beac93eacc7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013351488s STEP: Saw pod success Oct 30 01:18:35.382: INFO: Pod "pod-projected-secrets-bd61faed-8347-4668-a111-beac93eacc7f" satisfied condition "Succeeded or Failed" Oct 30 01:18:35.384: INFO: Trying to get logs from node node2 pod pod-projected-secrets-bd61faed-8347-4668-a111-beac93eacc7f container projected-secret-volume-test: STEP: delete the pod Oct 30 01:18:35.399: INFO: Waiting for pod pod-projected-secrets-bd61faed-8347-4668-a111-beac93eacc7f to disappear Oct 30 01:18:35.401: INFO: Pod pod-projected-secrets-bd61faed-8347-4668-a111-beac93eacc7f no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:18:35.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1791" for this suite. 
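------------------------------
Note: the Projected secret test above creates a secret, mounts it through a projected volume, and reads the decoded value back from the container's logs. A minimal sketch of the same wiring, assuming kubectl against a test cluster (secret, pod, and path names are all illustrative):

  # Create a secret, mount it via a projected volume, and print the file.
  kubectl create secret generic demo-secret --from-literal=username=admin
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-secret-demo
  spec:
    restartPolicy: Never
    containers:
    - name: reader
      image: busybox:1.28
      command: ["cat", "/projected/username"]
      volumeMounts:
      - name: creds
        mountPath: /projected
    volumes:
    - name: creds
      projected:
        sources:
        - secret:
            name: demo-secret
  EOF
  kubectl logs -f projected-secret-demo   # expected output: admin

The projected volume type is what lets several sources (secrets, configmaps, downward API, service account tokens) share one mount point, which is the behavior this conformance test pins down for the secret source.
------------------------------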
• [SLOW TEST:6.073 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":766,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:18:25.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:18:25.156: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Oct 30 01:18:33.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9212 --namespace=crd-publish-openapi-9212 create -f -' Oct 30 01:18:34.174: INFO: stderr: "" Oct 30 01:18:34.174: INFO: stdout: "e2e-test-crd-publish-openapi-8316-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Oct 30 01:18:34.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9212 --namespace=crd-publish-openapi-9212 delete e2e-test-crd-publish-openapi-8316-crds test-cr' Oct 30 01:18:34.329: INFO: stderr: "" Oct 30 01:18:34.329: INFO: stdout: "e2e-test-crd-publish-openapi-8316-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Oct 30 01:18:34.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9212 --namespace=crd-publish-openapi-9212 apply -f -' Oct 30 01:18:34.614: INFO: stderr: "" Oct 30 01:18:34.614: INFO: stdout: "e2e-test-crd-publish-openapi-8316-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Oct 30 01:18:34.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9212 --namespace=crd-publish-openapi-9212 delete e2e-test-crd-publish-openapi-8316-crds test-cr' Oct 30 01:18:34.785: INFO: stderr: "" Oct 30 01:18:34.785: INFO: stdout: "e2e-test-crd-publish-openapi-8316-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Oct 30 01:18:34.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9212 explain e2e-test-crd-publish-openapi-8316-crds' Oct 30 01:18:35.097: INFO: stderr: "" Oct 30 01:18:35.097: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8316-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:18:38.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9212" for this suite. • [SLOW TEST:13.507 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":24,"skipped":369,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:18:28.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Oct 30 01:18:28.816: INFO: The status of Pod labelsupdate196cba7f-9f9a-4b0d-9413-ef7dbb17e5e8 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:18:30.820: INFO: The status of Pod labelsupdate196cba7f-9f9a-4b0d-9413-ef7dbb17e5e8 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:18:32.820: INFO: The status of Pod labelsupdate196cba7f-9f9a-4b0d-9413-ef7dbb17e5e8 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:18:34.819: INFO: The status of Pod labelsupdate196cba7f-9f9a-4b0d-9413-ef7dbb17e5e8 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:18:36.821: INFO: The status of Pod labelsupdate196cba7f-9f9a-4b0d-9413-ef7dbb17e5e8 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:18:38.819: INFO: The status of Pod labelsupdate196cba7f-9f9a-4b0d-9413-ef7dbb17e5e8 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:18:40.819: INFO: The status of Pod labelsupdate196cba7f-9f9a-4b0d-9413-ef7dbb17e5e8 is Running (Ready = true) Oct 30 01:18:41.335: INFO: Successfully updated pod "labelsupdate196cba7f-9f9a-4b0d-9413-ef7dbb17e5e8" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:18:43.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-555" for this suite. 
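------------------------------
Note: the Downward API test above ("should update labels on modification") relies on the kubelet rewriting a downwardAPI volume file when the pod's labels change; that is what the "Successfully updated pod" line verifies. A sketch of the mechanism, with illustrative names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: labels-demo
    labels:
      tier: one
  spec:
    containers:
    - name: watcher
      image: busybox:1.28
      command: ["sh", "-c", "while true; do cat /podinfo/labels; echo ---; sleep 5; done"]
      volumeMounts:
      - name: podinfo
        mountPath: /podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: labels
          fieldRef:
            fieldPath: metadata.labels
  EOF
  # A label change is eventually reflected inside the running container:
  kubectl label pod labels-demo tier=two --overwrite

Unlike environment variables, the downwardAPI volume is updated live, which is why the test patches labels on a running pod rather than restarting it.
------------------------------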
• [SLOW TEST:14.587 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":720,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:18:31.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Oct 30 01:18:31.938: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Oct 30 01:18:33.949: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153511, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153511, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153511, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153511, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 30 01:18:36.958: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:18:36.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:18:45.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-4842" for this suite. 
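------------------------------
Note: the conversion test above deploys a webhook behind the e2e-test-crd-conversion-webhook Service and registers it in the CRD, so the apiserver converts objects between v1 and v2 on read/write. The registration lives in the CRD's spec.conversion stanza; the manifest below is only a sketch of its shape (group, names, service, and path are illustrative, and no conversion will actually happen until a matching webhook service exists):

  kubectl apply -f - <<'EOF'
  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: widgets.example.com
  spec:
    group: example.com
    names:
      plural: widgets
      singular: widget
      kind: Widget
    scope: Namespaced
    versions:
    - name: v1
      served: true
      storage: true        # exactly one version is the storage version
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
    - name: v2
      served: true
      storage: false
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
    conversion:
      strategy: Webhook    # default is None (only apiVersion is rewritten)
      webhook:
        conversionReviewVersions: ["v1"]
        clientConfig:
          service:
            namespace: default
            name: widget-conversion
            path: /convert
            port: 443
  EOF

With strategy None the apiserver only rewrites the apiVersion field; a webhook is required whenever the versions differ structurally, which is the case this test exercises.
------------------------------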
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:13.624 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":21,"skipped":368,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:18:45.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should have Endpoints and EndpointSlices pointing to API Server [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:18:45.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-4641" for this suite. 
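------------------------------
Note: this EndpointSlice test asserts that the control plane mirrors the default/kubernetes Service (the API server itself) into both a legacy Endpoints object and an EndpointSlice. Both are directly visible with kubectl, given read access to the default namespace:

  kubectl get endpoints kubernetes -n default -o wide
  kubectl get endpointslices -n default -l kubernetes.io/service-name=kubernetes

EndpointSlices are tied to their Service by the kubernetes.io/service-name label rather than by object name, so the label selector is the reliable way to find them.
------------------------------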
• ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":-1,"completed":22,"skipped":379,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:17:47.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Oct 30 01:17:47.079: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-616 afa6a68b-d61f-405a-8ac2-9099c30b0a37 90807 0 2021-10-30 01:17:47 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-30 01:17:47 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Oct 30 01:17:47.079: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-616 afa6a68b-d61f-405a-8ac2-9099c30b0a37 90807 0 2021-10-30 01:17:47 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-30 01:17:47 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Oct 30 01:17:57.086: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-616 afa6a68b-d61f-405a-8ac2-9099c30b0a37 90959 0 2021-10-30 01:17:47 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-30 01:17:57 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 30 01:17:57.086: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-616 afa6a68b-d61f-405a-8ac2-9099c30b0a37 90959 0 2021-10-30 01:17:47 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-30 01:17:57 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Oct 30 01:18:07.092: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-616 afa6a68b-d61f-405a-8ac2-9099c30b0a37 91316 0 2021-10-30 01:17:47 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-30 01:17:57 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 30 01:18:07.093: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-616 afa6a68b-d61f-405a-8ac2-9099c30b0a37 91316 0 2021-10-30 01:17:47 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-30 01:17:57 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Oct 30 01:18:17.098: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-616 afa6a68b-d61f-405a-8ac2-9099c30b0a37 91458 0 2021-10-30 01:17:47 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-30 01:17:57 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 30 01:18:17.098: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-616 afa6a68b-d61f-405a-8ac2-9099c30b0a37 91458 0 2021-10-30 01:17:47 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-30 01:17:57 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Oct 30 01:18:27.105: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-616 b3da6498-5e68-46f5-8539-17091f6bd1fb 91653 0 2021-10-30 01:18:27 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-10-30 01:18:27 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Oct 30 01:18:27.105: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-616 b3da6498-5e68-46f5-8539-17091f6bd1fb 91653 0 2021-10-30 01:18:27 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-10-30 01:18:27 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Oct 30 01:18:37.110: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-616 b3da6498-5e68-46f5-8539-17091f6bd1fb 92020 0 2021-10-30 01:18:27 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-10-30 01:18:27 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Oct 30 01:18:37.110: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-616 b3da6498-5e68-46f5-8539-17091f6bd1fb 92020 0 2021-10-30 01:18:27 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-10-30 01:18:27 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:18:47.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-616" for this suite. • [SLOW TEST:60.063 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":29,"skipped":402,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:18:47.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a test event STEP: listing events in all namespaces STEP: listing events in test namespace STEP: listing events with field selection filtering on source STEP: listing events with field selection filtering on reportingController STEP: getting the test event STEP: patching the test event STEP: getting the test event STEP: updating the test event STEP: getting the test event STEP: deleting the test event STEP: listing events in all namespaces STEP: listing events in test namespace [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:18:47.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-9518" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":30,"skipped":409,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:18:38.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting the auto-created API token STEP: reading a file in the container Oct 30 01:18:47.227: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3355 pod-service-account-617d0ddc-51ca-4468-8e37-a6d6c53fd6f9 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Oct 30 01:18:47.608: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3355 pod-service-account-617d0ddc-51ca-4468-8e37-a6d6c53fd6f9 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Oct 30 01:18:47.864: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3355 pod-service-account-617d0ddc-51ca-4468-8e37-a6d6c53fd6f9 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:18:48.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3355" for this suite. • [SLOW TEST:9.446 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":-1,"completed":25,"skipped":391,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:18:24.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-downwardapi-phwn STEP: Creating a pod to test atomic-volume-subpath Oct 30 01:18:24.669: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-phwn" in namespace "subpath-4539" to be "Succeeded or Failed" Oct 30 01:18:24.680: INFO: Pod "pod-subpath-test-downwardapi-phwn": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.870354ms Oct 30 01:18:26.684: INFO: Pod "pod-subpath-test-downwardapi-phwn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015409894s Oct 30 01:18:28.688: INFO: Pod "pod-subpath-test-downwardapi-phwn": Phase="Running", Reason="", readiness=true. Elapsed: 4.018870079s Oct 30 01:18:30.691: INFO: Pod "pod-subpath-test-downwardapi-phwn": Phase="Running", Reason="", readiness=true. Elapsed: 6.02250256s Oct 30 01:18:32.694: INFO: Pod "pod-subpath-test-downwardapi-phwn": Phase="Running", Reason="", readiness=true. Elapsed: 8.025440187s Oct 30 01:18:34.698: INFO: Pod "pod-subpath-test-downwardapi-phwn": Phase="Running", Reason="", readiness=true. Elapsed: 10.02947847s Oct 30 01:18:36.703: INFO: Pod "pod-subpath-test-downwardapi-phwn": Phase="Running", Reason="", readiness=true. Elapsed: 12.033685862s Oct 30 01:18:38.707: INFO: Pod "pod-subpath-test-downwardapi-phwn": Phase="Running", Reason="", readiness=true. Elapsed: 14.037613725s Oct 30 01:18:40.710: INFO: Pod "pod-subpath-test-downwardapi-phwn": Phase="Running", Reason="", readiness=true. Elapsed: 16.041490839s Oct 30 01:18:42.714: INFO: Pod "pod-subpath-test-downwardapi-phwn": Phase="Running", Reason="", readiness=true. Elapsed: 18.04524471s Oct 30 01:18:44.718: INFO: Pod "pod-subpath-test-downwardapi-phwn": Phase="Running", Reason="", readiness=true. Elapsed: 20.049234894s Oct 30 01:18:46.726: INFO: Pod "pod-subpath-test-downwardapi-phwn": Phase="Running", Reason="", readiness=true. Elapsed: 22.057191195s Oct 30 01:18:48.731: INFO: Pod "pod-subpath-test-downwardapi-phwn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.061597093s STEP: Saw pod success Oct 30 01:18:48.731: INFO: Pod "pod-subpath-test-downwardapi-phwn" satisfied condition "Succeeded or Failed" Oct 30 01:18:48.734: INFO: Trying to get logs from node node1 pod pod-subpath-test-downwardapi-phwn container test-container-subpath-downwardapi-phwn: STEP: delete the pod Oct 30 01:18:48.747: INFO: Waiting for pod pod-subpath-test-downwardapi-phwn to disappear Oct 30 01:18:48.749: INFO: Pod pod-subpath-test-downwardapi-phwn no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-phwn Oct 30 01:18:48.749: INFO: Deleting pod "pod-subpath-test-downwardapi-phwn" in namespace "subpath-4539" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:18:48.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4539" for this suite. 
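------------------------------
Note: both Subpath runs in this section (configmap earlier, downward API here) mount an "atomic writer" volume with subPath, i.e. a single file out of the volume, and have the container read it while the test waits for Succeeded. The essential wiring, reduced to a configmap case (all names illustrative):

  kubectl create configmap subpath-demo --from-literal=greeting.txt=hello
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: subpath-demo
  spec:
    restartPolicy: Never
    containers:
    - name: reader
      image: busybox:1.28
      command: ["cat", "/etc/demo/greeting.txt"]
      volumeMounts:
      - name: cfg
        mountPath: /etc/demo/greeting.txt
        subPath: greeting.txt        # mount one key as a single file
    volumes:
    - name: cfg
      configMap:
        name: subpath-demo
  EOF

One caveat when adapting this: unlike a whole-volume configmap mount, a subPath mount does not receive live updates when the ConfigMap changes.
------------------------------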
• [SLOW TEST:24.132 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":28,"skipped":425,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:18:48.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should provide secure master service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:18:48.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3855" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":-1,"completed":29,"skipped":446,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSS ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:18:45.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:18:49.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-9402" for this suite. 
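------------------------------ Note: the EndpointSlice spec above asserts that a Service with a selector gets Endpoints and EndpointSlices created (and cleaned up on delete). Managed slices carry the well-known kubernetes.io/service-name label, so they can be listed per Service; a minimal client-go sketch, assuming a configured clientset (the function name endpointSlicesFor is illustrative):

package e2esketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// endpointSlicesFor lists the EndpointSlices the control plane manages for
// a given Service, using the kubernetes.io/service-name ownership label.
func endpointSlicesFor(c kubernetes.Interface, ns, svc string) error {
	slices, err := c.DiscoveryV1().EndpointSlices(ns).List(context.TODO(),
		metav1.ListOptions{LabelSelector: "kubernetes.io/service-name=" + svc})
	if err != nil {
		return err
	}
	for _, s := range slices.Items {
		fmt.Printf("slice %s: %d endpoints\n", s.Name, len(s.Endpoints))
	}
	return nil
}
------------------------------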
• ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":23,"skipped":423,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:18:35.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:18:35.449: INFO: Creating deployment "webserver-deployment" Oct 30 01:18:35.452: INFO: Waiting for observed generation 1 Oct 30 01:18:37.458: INFO: Waiting for all required pods to come up Oct 30 01:18:37.462: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Oct 30 01:18:49.471: INFO: Waiting for deployment "webserver-deployment" to complete Oct 30 01:18:49.479: INFO: Updating deployment "webserver-deployment" with a non-existent image Oct 30 01:18:49.485: INFO: Updating deployment webserver-deployment Oct 30 01:18:49.485: INFO: Waiting for observed generation 2 Oct 30 01:18:51.492: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Oct 30 01:18:51.494: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Oct 30 01:18:51.496: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Oct 30 01:18:51.503: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Oct 30 01:18:51.503: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Oct 30 01:18:51.505: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Oct 30 01:18:51.509: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Oct 30 01:18:51.509: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Oct 30 01:18:51.516: INFO: Updating deployment webserver-deployment Oct 30 01:18:51.516: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Oct 30 01:18:51.521: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Oct 30 01:18:51.525: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Oct 30 01:18:51.536: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-3674 9f15b51f-7553-43a0-8e97-ebf78f1975fb 92484 3 2021-10-30 01:18:35 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-10-30 01:18:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-10-30 01:18:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc001404b18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2021-10-30 01:18:49 +0000 UTC,LastTransitionTime:2021-10-30 01:18:35 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-10-30 01:18:51 +0000 UTC,LastTransitionTime:2021-10-30 01:18:51 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Oct 30 01:18:51.541: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-3674 596b940d-5f67-4671-8997-5e3bdb04e20c 92481 3 2021-10-30 01:18:49 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 9f15b51f-7553-43a0-8e97-ebf78f1975fb 
0xc001404f07 0xc001404f08}] [] [{kube-controller-manager Update apps/v1 2021-10-30 01:18:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9f15b51f-7553-43a0-8e97-ebf78f1975fb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc001404f88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 30 01:18:51.541: INFO: All old ReplicaSets of Deployment "webserver-deployment": Oct 30 01:18:51.542: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-847dcfb7fb deployment-3674 e9199f76-e149-4290-958f-3303a9700b1d 92479 3 2021-10-30 01:18:35 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 9f15b51f-7553-43a0-8e97-ebf78f1975fb 0xc001404fe7 0xc001404fe8}] [] [{kube-controller-manager Update apps/v1 2021-10-30 01:18:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9f15b51f-7553-43a0-8e97-ebf78f1975fb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc001405058 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Oct 30 01:18:51.547: INFO: Pod "webserver-deployment-795d758f88-8lmfd" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-8lmfd webserver-deployment-795d758f88- deployment-3674 d104ea7b-cc5d-4d09-85f8-d1c4ee4a3295 92476 0 2021-10-30 01:18:49 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 596b940d-5f67-4671-8997-5e3bdb04e20c 0xc0048f80cf 0xc0048f80e0}] [] [{kube-controller-manager Update v1 2021-10-30 01:18:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"596b940d-5f67-4671-8997-5e3bdb04e20c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-10-30 01:18:51 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8k2nq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8k2nq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]Hos
tAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2021-10-30 01:18:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 30 01:18:51.548: INFO: Pod "webserver-deployment-795d758f88-lhk27" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-lhk27 webserver-deployment-795d758f88- deployment-3674 abb31144-b11a-4a41-8410-4f3b93cb8568 92420 0 2021-10-30 01:18:49 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 596b940d-5f67-4671-8997-5e3bdb04e20c 0xc0048f82af 0xc0048f82c0}] [] [{kube-controller-manager Update v1 2021-10-30 01:18:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"596b940d-5f67-4671-8997-5e3bdb04e20c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-wcdqt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wcdqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecu
te,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 30 01:18:51.548: INFO: Pod "webserver-deployment-795d758f88-q5w5b" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-q5w5b webserver-deployment-795d758f88- deployment-3674 6520df53-92a6-4a4f-8d68-e43b43c4d30f 92416 0 2021-10-30 01:18:49 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 596b940d-5f67-4671-8997-5e3bdb04e20c 0xc0048f842f 0xc0048f8440}] [] [{kube-controller-manager Update v1 2021-10-30 01:18:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"596b940d-5f67-4671-8997-5e3bdb04e20c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-10-30 01:18:49 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mj9vf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mj9vf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]Hos
tAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:,StartTime:2021-10-30 01:18:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 30 01:18:51.548: INFO: Pod "webserver-deployment-795d758f88-qqkn4" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-qqkn4 webserver-deployment-795d758f88- deployment-3674 48d5ddea-57da-4e4c-8e95-28b872b81737 92428 0 2021-10-30 01:18:49 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 596b940d-5f67-4671-8997-5e3bdb04e20c 0xc0048f861f 0xc0048f8630}] [] [{kube-controller-manager Update v1 2021-10-30 01:18:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"596b940d-5f67-4671-8997-5e3bdb04e20c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-10-30 01:18:49 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-bxd6g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bxd6g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]Hos
tAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:,StartTime:2021-10-30 01:18:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 30 01:18:51.549: INFO: Pod "webserver-deployment-795d758f88-sjpqj" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-sjpqj webserver-deployment-795d758f88- deployment-3674 b5298611-3762-470e-bb7a-abdaad6a15be 92434 0 2021-10-30 01:18:49 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 596b940d-5f67-4671-8997-5e3bdb04e20c 0xc0048f87ff 0xc0048f8820}] [] [{kube-controller-manager Update v1 2021-10-30 01:18:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"596b940d-5f67-4671-8997-5e3bdb04e20c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-10-30 01:18:49 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6vmdn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6vmdn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]Hos
tAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:,StartTime:2021-10-30 01:18:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 30 01:18:51.549: INFO: Pod "webserver-deployment-795d758f88-v9wgr" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-v9wgr webserver-deployment-795d758f88- deployment-3674 6755ac3c-e336-4f12-96b4-bc2a271bb7bf 92490 0 2021-10-30 01:18:51 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 596b940d-5f67-4671-8997-5e3bdb04e20c 0xc0048f89ef 0xc0048f8a00}] [] [{kube-controller-manager Update v1 2021-10-30 01:18:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"596b940d-5f67-4671-8997-5e3bdb04e20c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mj9nv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mj9nv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecu
te,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 30 01:18:51.549: INFO: Pod "webserver-deployment-847dcfb7fb-2xwfp" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-2xwfp webserver-deployment-847dcfb7fb- deployment-3674 abcfc692-8d4a-41eb-900c-336c3b478c95 92072 0 2021-10-30 01:18:35 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.5" ], "mac": "d6:ff:ee:11:78:89", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.5" ], "mac": "d6:ff:ee:11:78:89", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb e9199f76-e149-4290-958f-3303a9700b1d 0xc0048f8b6f 0xc0048f8b80}] [] [{kube-controller-manager Update v1 2021-10-30 01:18:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9199f76-e149-4290-958f-3303a9700b1d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-30 01:18:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-30 01:18:40 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.5\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-h42nl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h42nl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value
:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.5,StartTime:2021-10-30 01:18:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-30 01:18:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://9ef4301a754209a13e2db6a34d6b14b9e4eca0af179f5ace372ea85dc109011b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 30 01:18:51.550: INFO: Pod "webserver-deployment-847dcfb7fb-5gkjh" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-5gkjh webserver-deployment-847dcfb7fb- deployment-3674 c3547970-1bfd-4b6e-863d-e20a6d36fe01 92494 0 2021-10-30 01:18:51 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb e9199f76-e149-4290-958f-3303a9700b1d 0xc0048f8d6f 0xc0048f8d80}] [] [{kube-controller-manager Update v1 2021-10-30 01:18:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9199f76-e149-4290-958f-3303a9700b1d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-k75r9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k75r9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:
Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 30 01:18:51.550: INFO: Pod "webserver-deployment-847dcfb7fb-7qn2v" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-7qn2v webserver-deployment-847dcfb7fb- deployment-3674 bf774b2d-54ae-4e20-87a0-8c786e1fa812 92125 0 2021-10-30 01:18:35 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.10" ], "mac": "8a:8b:5a:86:db:c7", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.10" ], "mac": "8a:8b:5a:86:db:c7", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb e9199f76-e149-4290-958f-3303a9700b1d 0xc0048f8edf 0xc0048f8ef0}] [] [{kube-controller-manager Update v1 2021-10-30 01:18:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9199f76-e149-4290-958f-3303a9700b1d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-30 01:18:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-30 01:18:42 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.10\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-wjcsg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wjcsg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Valu
e:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.10,StartTime:2021-10-30 01:18:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-30 01:18:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://5789bd5e066e8d677ff1e65164f6051fd153aa261b89a67bba0364fce7842a77,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.10,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 30 01:18:51.550: INFO: Pod "webserver-deployment-847dcfb7fb-9g2fx" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-9g2fx webserver-deployment-847dcfb7fb- deployment-3674 dd8b06a8-94d5-4751-95a8-4aa622ceaf4d 92298 0 2021-10-30 01:18:35 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.143" ], "mac": "c2:f5:27:ef:56:52", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.143" ], "mac": "c2:f5:27:ef:56:52", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb e9199f76-e149-4290-958f-3303a9700b1d 0xc0048f90df 0xc0048f90f0}] [] [{kube-controller-manager Update v1 2021-10-30 01:18:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9199f76-e149-4290-958f-3303a9700b1d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-30 01:18:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-30 01:18:47 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.143\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-q2mhp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q2mhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Tolerati
on{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.143,StartTime:2021-10-30 01:18:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-30 01:18:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://d91d49d138973d52389f069c879b2c0e4267ca79e826cd8d8fc60031c5b2f2b3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.143,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 30 01:18:51.551: INFO: Pod "webserver-deployment-847dcfb7fb-jc2k6" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-jc2k6 webserver-deployment-847dcfb7fb- deployment-3674 fa6eee4a-666b-4dce-8345-8153ddb27fd7 92485 0 2021-10-30 01:18:51 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb e9199f76-e149-4290-958f-3303a9700b1d 0xc0048f92df 0xc0048f92f0}] [] [{kube-controller-manager Update v1 2021-10-30 01:18:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9199f76-e149-4290-958f-3303a9700b1d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jkjd9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jkjd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:
Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 30 01:18:51.551: INFO: Pod "webserver-deployment-847dcfb7fb-k9545" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-k9545 webserver-deployment-847dcfb7fb- deployment-3674 3b0a9b7f-e059-45ba-8da9-e6fe140910e8 92303 0 2021-10-30 01:18:35 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.147" ], "mac": "8e:3f:fc:ad:d0:5d", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.147" ], "mac": "8e:3f:fc:ad:d0:5d", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb e9199f76-e149-4290-958f-3303a9700b1d 0xc0048f944f 0xc0048f9460}] [] [{kube-controller-manager Update v1 2021-10-30 01:18:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9199f76-e149-4290-958f-3303a9700b1d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-30 01:18:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-30 01:18:47 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.147\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-fbm7m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fbm7m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.147,StartTime:2021-10-30 01:18:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-30 01:18:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://92e31471b797e0bb5672b89c6972b2db1cddcc63874ce1c12f0601c587040b8d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.147,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 30 01:18:51.551: INFO: Pod "webserver-deployment-847dcfb7fb-nzxsw" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-nzxsw webserver-deployment-847dcfb7fb- deployment-3674 87be3b71-6a12-4495-83f3-78a8a85f07f5 92119 0 2021-10-30 01:18:35 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.12" ], "mac": "82:a3:c2:7f:c4:2a", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.12" ], "mac": "82:a3:c2:7f:c4:2a", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb e9199f76-e149-4290-958f-3303a9700b1d 0xc0048f964f 0xc0048f9660}] [] [{kube-controller-manager Update v1 2021-10-30 01:18:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9199f76-e149-4290-958f-3303a9700b1d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-30 01:18:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-30 01:18:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.12\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jnzsc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jnzsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleratio
n{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.12,StartTime:2021-10-30 01:18:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-30 01:18:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://4541ecb3781bee77b3c1f3820bd9756cb81d824fd3de79a5cd16b390abe6a97c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.12,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 30 01:18:51.552: INFO: Pod "webserver-deployment-847dcfb7fb-szpzn" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-szpzn webserver-deployment-847dcfb7fb- deployment-3674 ed220b4a-5a92-47ce-8789-8e1422210cdf 92128 0 2021-10-30 01:18:35 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.9" ], "mac": "5a:8d:2a:16:15:6d", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.9" ], "mac": "5a:8d:2a:16:15:6d", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb e9199f76-e149-4290-958f-3303a9700b1d 0xc0048f9cef 0xc0048f9d00}] [] [{kube-controller-manager Update v1 2021-10-30 01:18:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9199f76-e149-4290-958f-3303a9700b1d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-30 01:18:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-30 01:18:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.9\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-7rpf9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7rpf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Volume
Device{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.9,StartTime:2021-10-30 01:18:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-30 01:18:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://4897fbf059d86f3c37b2c89f0556b214f4f32817a5b8001f808624de48fb459d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.9,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 30 01:18:51.552: INFO: Pod "webserver-deployment-847dcfb7fb-xmmzc" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-xmmzc webserver-deployment-847dcfb7fb- deployment-3674 cb67f746-3f4a-4bab-931a-465577ddda16 92232 0 2021-10-30 01:18:35 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.144" ], "mac": "8a:35:c4:86:31:bb", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.144" ], "mac": "8a:35:c4:86:31:bb", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 
e9199f76-e149-4290-958f-3303a9700b1d 0xc0048f9f0f 0xc0048f9f20}] [] [{kube-controller-manager Update v1 2021-10-30 01:18:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9199f76-e149-4290-958f-3303a9700b1d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-30 01:18:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-30 01:18:45 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.144\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-ltdgl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ltdgl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpti
ons:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.144,StartTime:2021-10-30 01:18:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-30 01:18:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://09b249ec47a4295bf9bb32563915fec181d2b70654c1902decdf97e05adefc58,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.144,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 30 01:18:51.552: INFO: Pod "webserver-deployment-847dcfb7fb-z8ch7" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-z8ch7 webserver-deployment-847dcfb7fb- deployment-3674 6b1d5bf8-ffdb-46bf-9203-812ba9201ee5 92122 0 2021-10-30 01:18:35 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.11" ], "mac": "d2:07:a0:25:4a:68", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.11" ], 
"mac": "d2:07:a0:25:4a:68", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb e9199f76-e149-4290-958f-3303a9700b1d 0xc003e6411f 0xc003e64130}] [] [{kube-controller-manager Update v1 2021-10-30 01:18:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9199f76-e149-4290-958f-3303a9700b1d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-30 01:18:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-30 01:18:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.11\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-d7xkw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d7xkw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxO
ptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:18:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.11,StartTime:2021-10-30 01:18:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-30 01:18:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://b590b1880c61337f991f454787d4a47254ac1560233606dd3e67d23d0a26c924,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.11,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:18:51.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3674" for this suite. 
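The proportional-scaling spec above resizes a Deployment mid-rollout and expects the replica delta to be split across the old and new ReplicaSets in proportion to their current sizes. A minimal Go sketch of that arithmetic follows; it illustrates the idea only and is not the deployment controller's exact algorithm (which also has to respect maxSurge/maxUnavailable and its own annotation bookkeeping).

package main

import "fmt"

// scaleProportionally splits a replica delta across ReplicaSet sizes in
// proportion to each set's share of the total, handing the integer
// rounding leftover to the largest set. Simplified sketch; assumes the
// total of sizes is > 0.
func scaleProportionally(sizes []int, delta int) []int {
	total := 0
	for _, s := range sizes {
		total += s
	}
	out := make([]int, len(sizes))
	distributed := 0
	largest := 0
	for i, s := range sizes {
		share := delta * s / total // floor of the proportional share
		out[i] = s + share
		distributed += share
		if s > sizes[largest] {
			largest = i
		}
	}
	out[largest] += delta - distributed // leftover from rounding
	return out
}

func main() {
	fmt.Println(scaleProportionally([]int{5, 5}, 20)) // [15 15]
	fmt.Println(scaleProportionally([]int{8, 2}, 5))  // [12 3]
}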
• [SLOW TEST:16.137 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":33,"skipped":774,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:18:43.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name s-test-opt-del-9c826e72-81d6-477c-8da1-26ec361d4361 STEP: Creating secret with name s-test-opt-upd-201ac11e-2ab8-41f2-a2ba-4d801974afb1 STEP: Creating the pod Oct 30 01:18:43.436: INFO: The status of Pod pod-projected-secrets-0e43ccbf-75f0-4abc-b5dc-55413c44d3b5 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:18:45.439: INFO: The status of Pod pod-projected-secrets-0e43ccbf-75f0-4abc-b5dc-55413c44d3b5 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:18:47.440: INFO: The status of Pod pod-projected-secrets-0e43ccbf-75f0-4abc-b5dc-55413c44d3b5 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:18:49.440: INFO: The status of Pod pod-projected-secrets-0e43ccbf-75f0-4abc-b5dc-55413c44d3b5 is Running (Ready = true) STEP: Deleting secret s-test-opt-del-9c826e72-81d6-477c-8da1-26ec361d4361 STEP: Updating secret s-test-opt-upd-201ac11e-2ab8-41f2-a2ba-4d801974afb1 STEP: Creating secret with name s-test-opt-create-4cf985cd-c1ea-41da-a0ac-574d023a47fe STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:18:52.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2437" for this suite. 
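The projected-secret spec above mounts secrets through a projected volume with Optional set, which is why the pod can keep running while one secret is deleted, another updated, and a third created, with every change eventually reflected in the volume. A minimal client-go sketch of such a pod; the pod and secret names here are illustrative, not the suite's:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	optional := true
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "creds",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-demo"},
								Optional:             &optional, // pod starts even if the secret is absent
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "viewer",
				Image:        "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
				Command:      []string{"sh", "-c", "sleep 3600"},
				VolumeMounts: []corev1.VolumeMount{{Name: "creds", MountPath: "/etc/creds"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}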
• [SLOW TEST:8.645 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":726,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:18:28.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:18:57.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7725" for this suite. • [SLOW TEST:28.066 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":-1,"completed":21,"skipped":255,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:18:01.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Oct 30 01:18:01.734: INFO: PodSpec: initContainers in spec.initContainers Oct 30 01:19:02.028: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-445e0745-cde0-4149-a78e-8858e3f38ae6", GenerateName:"", Namespace:"init-container-1167", SelfLink:"", UID:"6cbfd060-4dd5-4841-a464-cf4a58088fad", ResourceVersion:"92932", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63771153481, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"734906062"}, Annotations:map[string]string{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.136\"\n ],\n \"mac\": \"fe:d4:72:eb:8e:85\",\n \"default\": true,\n \"dns\": {}\n}]", "k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.136\"\n ],\n \"mac\": \"fe:d4:72:eb:8e:85\",\n \"default\": true,\n \"dns\": {}\n}]", "kubernetes.io/psp":"collectd"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000190a80), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000190a98)}, v1.ManagedFieldsEntry{Manager:"multus", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000190ab0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000190af8)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000190b10), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000190b28)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-5qsz8", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), 
CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc0049a0080), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-5qsz8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-5qsz8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.4.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-5qsz8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), 
Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0001e0a18), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"node1", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc004a5e000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0001e0aa0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0001e0ac0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0001e0ac8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0001e0acc), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc00431e030), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153481, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153481, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153481, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153481, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.10.190.207", PodIP:"10.244.3.136", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.3.136"}}, StartTime:(*v1.Time)(0xc000190b58), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc004a5e0e0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc004a5e150)}, Ready:false, RestartCount:3, 
Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"docker-pullable://k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592", ContainerID:"docker://a1cb735538fac191e4cc3ffd323eb2b2a3f626f633e69d4a4c2f06191b4199b6", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0049a0200), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0049a01a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.4.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc0001e0b4f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:19:02.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1167" for this suite. • [SLOW TEST:60.325 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":14,"skipped":301,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:18:51.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-0b3325b4-410e-41fe-bd8a-404217c0fe5f STEP: Creating a pod to test consume configMaps Oct 30 01:18:51.611: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-83e5d717-2b5a-4a79-ae7f-44bb5fb72274" in namespace "projected-1618" to be "Succeeded or Failed" Oct 30 01:18:51.613: INFO: Pod "pod-projected-configmaps-83e5d717-2b5a-4a79-ae7f-44bb5fb72274": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.151158ms Oct 30 01:18:53.616: INFO: Pod "pod-projected-configmaps-83e5d717-2b5a-4a79-ae7f-44bb5fb72274": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005777062s Oct 30 01:18:55.620: INFO: Pod "pod-projected-configmaps-83e5d717-2b5a-4a79-ae7f-44bb5fb72274": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009061929s Oct 30 01:18:57.623: INFO: Pod "pod-projected-configmaps-83e5d717-2b5a-4a79-ae7f-44bb5fb72274": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012043498s Oct 30 01:18:59.626: INFO: Pod "pod-projected-configmaps-83e5d717-2b5a-4a79-ae7f-44bb5fb72274": Phase="Pending", Reason="", readiness=false. Elapsed: 8.015867423s Oct 30 01:19:01.632: INFO: Pod "pod-projected-configmaps-83e5d717-2b5a-4a79-ae7f-44bb5fb72274": Phase="Pending", Reason="", readiness=false. Elapsed: 10.021119842s Oct 30 01:19:03.639: INFO: Pod "pod-projected-configmaps-83e5d717-2b5a-4a79-ae7f-44bb5fb72274": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.028340808s STEP: Saw pod success Oct 30 01:19:03.639: INFO: Pod "pod-projected-configmaps-83e5d717-2b5a-4a79-ae7f-44bb5fb72274" satisfied condition "Succeeded or Failed" Oct 30 01:19:03.641: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-83e5d717-2b5a-4a79-ae7f-44bb5fb72274 container agnhost-container: STEP: delete the pod Oct 30 01:19:03.653: INFO: Waiting for pod pod-projected-configmaps-83e5d717-2b5a-4a79-ae7f-44bb5fb72274 to disappear Oct 30 01:19:03.655: INFO: Pod pod-projected-configmaps-83e5d717-2b5a-4a79-ae7f-44bb5fb72274 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:19:03.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1618" for this suite. 
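The repeated 'Waiting up to 5m0s for pod ... to be "Succeeded or Failed"' lines above are a simple poll against the pod phase. A minimal client-go equivalent, assuming an illustrative pod name and namespace:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 2s, up to 5m, until the pod reaches a terminal phase.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("default").Get(context.TODO(), "pod-projected-configmaps-demo", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed, nil
	})
	fmt.Println("wait finished:", err)
}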
• [SLOW TEST:12.087 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":777,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:18:49.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Oct 30 01:18:49.319: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:19:06.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4024" for this suite. 
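Both InitContainer specs in this section lean on the same kubelet behavior: init containers run one at a time, in order, each to completion before any app container starts, and under RestartPolicy Always a failing init container is restarted while the app container never launches. A minimal sketch of a pod shaped like the test's, with illustrative names:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				// Swap /bin/true for /bin/false in init1 to reproduce the
				// earlier "should not start app containers" case.
				{Name: "init1", Image: "k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command: []string{"/bin/true"}},
				{Name: "init2", Image: "k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.4.1"},
			},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}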
• [SLOW TEST:17.214 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":24,"skipped":459,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:18:48.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 30 01:18:49.080: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 30 01:18:51.089: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153529, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153529, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153529, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153529, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:18:53.094: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153529, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153529, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153529, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153529, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is 
progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:18:55.093: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153529, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153529, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153529, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153529, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:18:57.091: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153529, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153529, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153529, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153529, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 30 01:19:00.105: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:19:00.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7089-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:19:08.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-773" for this suite. STEP: Destroying namespace "webhook-773-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:19.370 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":30,"skipped":455,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:18:52.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication Oct 30 01:18:52.437: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 30 01:18:52.449: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 30 01:18:54.458: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153532, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153532, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153532, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153532, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:18:56.464: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153532, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153532, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153532, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153532, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:18:58.463: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153532, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153532, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153532, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153532, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:19:00.462: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153532, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153532, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153532, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153532, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:19:02.461: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153532, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153532, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153532, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153532, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:19:04.464: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153532, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153532, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153532, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153532, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 30 01:19:07.467: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:19:08.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7678" for this suite. STEP: Destroying namespace "webhook-7678-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.463 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":37,"skipped":742,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-node] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:19:08.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:19:08.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-1326" for this suite. 
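The Lease spec above simply walks the coordination.k8s.io/v1 API through its lifecycle. A minimal client-go sketch of the create step, with illustrative names:

package main

import (
	"context"
	"fmt"

	coordinationv1 "k8s.io/api/coordination/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	holder := "demo-holder"
	duration := int32(30) // seconds the holder is presumed valid for
	lease := &coordinationv1.Lease{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-lease"},
		Spec: coordinationv1.LeaseSpec{
			HolderIdentity:       &holder,
			LeaseDurationSeconds: &duration,
		},
	}
	created, err := cs.CoordinationV1().Leases("default").Create(context.TODO(), lease, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created lease", created.Name, "held by", *created.Spec.HolderIdentity)
}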
• ------------------------------ {"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":38,"skipped":746,"failed":0} SSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:19:02.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:19:09.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9515" for this suite. • [SLOW TEST:7.047 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":15,"skipped":308,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:19:06.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 30 01:19:06.576: INFO: Waiting up to 5m0s for pod "downwardapi-volume-23aca121-0268-490b-b9ef-b563fcbb6d01" in namespace "downward-api-428" to be "Succeeded or Failed" Oct 30 01:19:06.578: INFO: Pod "downwardapi-volume-23aca121-0268-490b-b9ef-b563fcbb6d01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195945ms Oct 30 01:19:08.581: INFO: Pod "downwardapi-volume-23aca121-0268-490b-b9ef-b563fcbb6d01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005127433s Oct 30 01:19:10.585: INFO: Pod "downwardapi-volume-23aca121-0268-490b-b9ef-b563fcbb6d01": Phase="Running", Reason="", readiness=true. Elapsed: 4.009045811s Oct 30 01:19:12.589: INFO: Pod "downwardapi-volume-23aca121-0268-490b-b9ef-b563fcbb6d01": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.013150875s STEP: Saw pod success Oct 30 01:19:12.589: INFO: Pod "downwardapi-volume-23aca121-0268-490b-b9ef-b563fcbb6d01" satisfied condition "Succeeded or Failed" Oct 30 01:19:12.592: INFO: Trying to get logs from node node2 pod downwardapi-volume-23aca121-0268-490b-b9ef-b563fcbb6d01 container client-container: STEP: delete the pod Oct 30 01:19:12.629: INFO: Waiting for pod downwardapi-volume-23aca121-0268-490b-b9ef-b563fcbb6d01 to disappear Oct 30 01:19:12.631: INFO: Pod downwardapi-volume-23aca121-0268-490b-b9ef-b563fcbb6d01 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:19:12.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-428" for this suite. • [SLOW TEST:6.092 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":476,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:19:08.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override command Oct 30 01:19:08.290: INFO: Waiting up to 5m0s for pod "client-containers-ab46b2f9-d70a-4803-8f87-168f40fd680a" in namespace "containers-7027" to be "Succeeded or Failed" Oct 30 01:19:08.292: INFO: Pod "client-containers-ab46b2f9-d70a-4803-8f87-168f40fd680a": Phase="Pending", Reason="", readiness=false. Elapsed: 1.861056ms Oct 30 01:19:10.295: INFO: Pod "client-containers-ab46b2f9-d70a-4803-8f87-168f40fd680a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004845084s Oct 30 01:19:12.298: INFO: Pod "client-containers-ab46b2f9-d70a-4803-8f87-168f40fd680a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007489311s Oct 30 01:19:14.301: INFO: Pod "client-containers-ab46b2f9-d70a-4803-8f87-168f40fd680a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.011039219s STEP: Saw pod success Oct 30 01:19:14.301: INFO: Pod "client-containers-ab46b2f9-d70a-4803-8f87-168f40fd680a" satisfied condition "Succeeded or Failed" Oct 30 01:19:14.303: INFO: Trying to get logs from node node1 pod client-containers-ab46b2f9-d70a-4803-8f87-168f40fd680a container agnhost-container: STEP: delete the pod Oct 30 01:19:14.315: INFO: Waiting for pod client-containers-ab46b2f9-d70a-4803-8f87-168f40fd680a to disappear Oct 30 01:19:14.317: INFO: Pod client-containers-ab46b2f9-d70a-4803-8f87-168f40fd680a no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:19:14.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7027" for this suite. • [SLOW TEST:6.070 seconds] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":474,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:19:08.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 30 01:19:08.631: INFO: Waiting up to 5m0s for pod "downwardapi-volume-94659f92-a7d2-4d07-baf3-d3e724d14d53" in namespace "projected-6804" to be "Succeeded or Failed" Oct 30 01:19:08.633: INFO: Pod "downwardapi-volume-94659f92-a7d2-4d07-baf3-d3e724d14d53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217732ms Oct 30 01:19:10.636: INFO: Pod "downwardapi-volume-94659f92-a7d2-4d07-baf3-d3e724d14d53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005420547s Oct 30 01:19:12.639: INFO: Pod "downwardapi-volume-94659f92-a7d2-4d07-baf3-d3e724d14d53": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008307375s Oct 30 01:19:14.643: INFO: Pod "downwardapi-volume-94659f92-a7d2-4d07-baf3-d3e724d14d53": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.01244216s STEP: Saw pod success Oct 30 01:19:14.643: INFO: Pod "downwardapi-volume-94659f92-a7d2-4d07-baf3-d3e724d14d53" satisfied condition "Succeeded or Failed" Oct 30 01:19:14.647: INFO: Trying to get logs from node node1 pod downwardapi-volume-94659f92-a7d2-4d07-baf3-d3e724d14d53 container client-container: STEP: delete the pod Oct 30 01:19:14.704: INFO: Waiting for pod downwardapi-volume-94659f92-a7d2-4d07-baf3-d3e724d14d53 to disappear Oct 30 01:19:14.706: INFO: Pod downwardapi-volume-94659f92-a7d2-4d07-baf3-d3e724d14d53 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:19:14.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6804" for this suite. • [SLOW TEST:6.113 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":39,"skipped":749,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:19:14.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename discovery STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 STEP: Setting up server cert [It] should validate PreferredVersion for each APIGroup [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:19:15.045: INFO: Checking APIGroup: apiregistration.k8s.io Oct 30 01:19:15.046: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 Oct 30 01:19:15.046: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}] Oct 30 01:19:15.046: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 Oct 30 01:19:15.046: INFO: Checking APIGroup: apps Oct 30 01:19:15.046: INFO: PreferredVersion.GroupVersion: apps/v1 Oct 30 01:19:15.046: INFO: Versions found [{apps/v1 v1}] Oct 30 01:19:15.046: INFO: apps/v1 matches apps/v1 Oct 30 01:19:15.046: INFO: Checking APIGroup: events.k8s.io Oct 30 01:19:15.047: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 Oct 30 01:19:15.047: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] Oct 30 01:19:15.047: INFO: events.k8s.io/v1 matches events.k8s.io/v1 Oct 30 01:19:15.047: INFO: Checking APIGroup: authentication.k8s.io Oct 30 01:19:15.048: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 Oct 30 01:19:15.048: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}] Oct 30 01:19:15.048: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 Oct 30 01:19:15.048: INFO: Checking APIGroup: authorization.k8s.io Oct 30 01:19:15.049: INFO: 
PreferredVersion.GroupVersion: authorization.k8s.io/v1 Oct 30 01:19:15.049: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}] Oct 30 01:19:15.049: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 Oct 30 01:19:15.049: INFO: Checking APIGroup: autoscaling Oct 30 01:19:15.050: INFO: PreferredVersion.GroupVersion: autoscaling/v1 Oct 30 01:19:15.050: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] Oct 30 01:19:15.050: INFO: autoscaling/v1 matches autoscaling/v1 Oct 30 01:19:15.050: INFO: Checking APIGroup: batch Oct 30 01:19:15.051: INFO: PreferredVersion.GroupVersion: batch/v1 Oct 30 01:19:15.051: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] Oct 30 01:19:15.051: INFO: batch/v1 matches batch/v1 Oct 30 01:19:15.051: INFO: Checking APIGroup: certificates.k8s.io Oct 30 01:19:15.052: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 Oct 30 01:19:15.052: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}] Oct 30 01:19:15.052: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 Oct 30 01:19:15.052: INFO: Checking APIGroup: networking.k8s.io Oct 30 01:19:15.052: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 Oct 30 01:19:15.053: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}] Oct 30 01:19:15.053: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 Oct 30 01:19:15.053: INFO: Checking APIGroup: extensions Oct 30 01:19:15.053: INFO: PreferredVersion.GroupVersion: extensions/v1beta1 Oct 30 01:19:15.053: INFO: Versions found [{extensions/v1beta1 v1beta1}] Oct 30 01:19:15.053: INFO: extensions/v1beta1 matches extensions/v1beta1 Oct 30 01:19:15.053: INFO: Checking APIGroup: policy Oct 30 01:19:15.054: INFO: PreferredVersion.GroupVersion: policy/v1 Oct 30 01:19:15.054: INFO: Versions found [{policy/v1 v1} {policy/v1beta1 v1beta1}] Oct 30 01:19:15.054: INFO: policy/v1 matches policy/v1 Oct 30 01:19:15.054: INFO: Checking APIGroup: rbac.authorization.k8s.io Oct 30 01:19:15.055: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 Oct 30 01:19:15.055: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}] Oct 30 01:19:15.055: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 Oct 30 01:19:15.055: INFO: Checking APIGroup: storage.k8s.io Oct 30 01:19:15.056: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 Oct 30 01:19:15.056: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] Oct 30 01:19:15.056: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 Oct 30 01:19:15.056: INFO: Checking APIGroup: admissionregistration.k8s.io Oct 30 01:19:15.057: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 Oct 30 01:19:15.057: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}] Oct 30 01:19:15.057: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 Oct 30 01:19:15.057: INFO: Checking APIGroup: apiextensions.k8s.io Oct 30 01:19:15.058: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 Oct 30 01:19:15.058: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}] Oct 30 01:19:15.058: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 Oct 30 01:19:15.058: INFO: Checking APIGroup: scheduling.k8s.io Oct 30 01:19:15.059: INFO: 
PreferredVersion.GroupVersion: scheduling.k8s.io/v1 Oct 30 01:19:15.059: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}] Oct 30 01:19:15.059: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 Oct 30 01:19:15.059: INFO: Checking APIGroup: coordination.k8s.io Oct 30 01:19:15.060: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 Oct 30 01:19:15.060: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}] Oct 30 01:19:15.060: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 Oct 30 01:19:15.060: INFO: Checking APIGroup: node.k8s.io Oct 30 01:19:15.060: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1 Oct 30 01:19:15.060: INFO: Versions found [{node.k8s.io/v1 v1} {node.k8s.io/v1beta1 v1beta1}] Oct 30 01:19:15.060: INFO: node.k8s.io/v1 matches node.k8s.io/v1 Oct 30 01:19:15.060: INFO: Checking APIGroup: discovery.k8s.io Oct 30 01:19:15.061: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1 Oct 30 01:19:15.061: INFO: Versions found [{discovery.k8s.io/v1 v1} {discovery.k8s.io/v1beta1 v1beta1}] Oct 30 01:19:15.061: INFO: discovery.k8s.io/v1 matches discovery.k8s.io/v1 Oct 30 01:19:15.061: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io Oct 30 01:19:15.062: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta1 Oct 30 01:19:15.062: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta1 v1beta1}] Oct 30 01:19:15.062: INFO: flowcontrol.apiserver.k8s.io/v1beta1 matches flowcontrol.apiserver.k8s.io/v1beta1 Oct 30 01:19:15.062: INFO: Checking APIGroup: intel.com Oct 30 01:19:15.063: INFO: PreferredVersion.GroupVersion: intel.com/v1 Oct 30 01:19:15.063: INFO: Versions found [{intel.com/v1 v1}] Oct 30 01:19:15.063: INFO: intel.com/v1 matches intel.com/v1 Oct 30 01:19:15.063: INFO: Checking APIGroup: k8s.cni.cncf.io Oct 30 01:19:15.064: INFO: PreferredVersion.GroupVersion: k8s.cni.cncf.io/v1 Oct 30 01:19:15.064: INFO: Versions found [{k8s.cni.cncf.io/v1 v1}] Oct 30 01:19:15.064: INFO: k8s.cni.cncf.io/v1 matches k8s.cni.cncf.io/v1 Oct 30 01:19:15.064: INFO: Checking APIGroup: monitoring.coreos.com Oct 30 01:19:15.065: INFO: PreferredVersion.GroupVersion: monitoring.coreos.com/v1 Oct 30 01:19:15.065: INFO: Versions found [{monitoring.coreos.com/v1 v1} {monitoring.coreos.com/v1alpha1 v1alpha1}] Oct 30 01:19:15.065: INFO: monitoring.coreos.com/v1 matches monitoring.coreos.com/v1 Oct 30 01:19:15.065: INFO: Checking APIGroup: telemetry.intel.com Oct 30 01:19:15.065: INFO: PreferredVersion.GroupVersion: telemetry.intel.com/v1alpha1 Oct 30 01:19:15.066: INFO: Versions found [{telemetry.intel.com/v1alpha1 v1alpha1}] Oct 30 01:19:15.066: INFO: telemetry.intel.com/v1alpha1 matches telemetry.intel.com/v1alpha1 Oct 30 01:19:15.066: INFO: Checking APIGroup: custom.metrics.k8s.io Oct 30 01:19:15.066: INFO: PreferredVersion.GroupVersion: custom.metrics.k8s.io/v1beta1 Oct 30 01:19:15.066: INFO: Versions found [{custom.metrics.k8s.io/v1beta1 v1beta1}] Oct 30 01:19:15.066: INFO: custom.metrics.k8s.io/v1beta1 matches custom.metrics.k8s.io/v1beta1 [AfterEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:19:15.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "discovery-4244" for this suite. 
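------------------------------
For reference, a minimal client-go sketch of the check the Discovery spec above performs: list every API group the server advertises and confirm that the group's PreferredVersion appears among its Versions. The kubeconfig path is the one the suite logs; the package layout and error handling are illustrative, not the e2e framework's own code.

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the suite uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// ServerGroups is the /apis discovery document: one APIGroup per group,
	// each listing its versions and a preferred version.
	groups, err := clientset.Discovery().ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		ok := false
		for _, v := range g.Versions {
			if v.GroupVersion == g.PreferredVersion.GroupVersion {
				ok = true
				break
			}
		}
		fmt.Printf("%s: preferred=%s foundInVersions=%v\n",
			g.Name, g.PreferredVersion.GroupVersion, ok)
	}
}
------------------------------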
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":40,"skipped":756,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:19:15.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:19:15.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-965" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":41,"skipped":786,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:19:12.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:19:12.691: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes Oct 30 01:19:12.707: INFO: The status of Pod pod-logs-websocket-90cc3e9e-1533-490b-aadd-b7fe1e263899 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:19:14.710: INFO: The status of Pod pod-logs-websocket-90cc3e9e-1533-490b-aadd-b7fe1e263899 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:19:16.711: INFO: The status of Pod pod-logs-websocket-90cc3e9e-1533-490b-aadd-b7fe1e263899 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:19:18.714: INFO: The status of Pod pod-logs-websocket-90cc3e9e-1533-490b-aadd-b7fe1e263899 is Running (Ready = true) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:19:18.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-223" for this suite. 
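------------------------------
The Pods spec above retrieves container logs over a websocket; as a rough equivalent, the sketch below reads the same log subresource through client-go's plain GET stream instead of dialing a websocket. Pod and namespace names are copied from the run above (the namespace is destroyed at the end of the spec, so substitute your own).

package main

import (
	"context"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// GET .../pods/<name>/log as a byte stream; the conformance test reads
	// the same subresource but over an upgraded websocket connection.
	req := clientset.CoreV1().Pods("pods-223").GetLogs(
		"pod-logs-websocket-90cc3e9e-1533-490b-aadd-b7fe1e263899",
		&corev1.PodLogOptions{})
	stream, err := req.Stream(context.TODO())
	if err != nil {
		panic(err)
	}
	defer stream.Close()
	io.Copy(os.Stdout, stream)
}
------------------------------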
• [SLOW TEST:6.071 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":492,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:19:09.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-787.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-787.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-787.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-787.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-787.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-787.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 30 01:19:19.229: INFO: DNS probes using dns-787/dns-test-a205879e-161e-4936-bc0f-e3c3271ebe30 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:19:19.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-787" for this suite. 
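------------------------------
The wheezy and jessie probe loops above derive each pod's DNS A record from its IP with an awk one-liner. The same derivation in Go, assuming the default cluster domain cluster.local that the probe commands spell out (the sample IP is illustrative):

package main

import (
	"fmt"
	"strings"
)

// podARecord mirrors the probe's awk expression: a pod with IP a.b.c.d in
// namespace ns is published as a-b-c-d.ns.pod.cluster.local.
func podARecord(podIP, namespace string) string {
	return fmt.Sprintf("%s.%s.pod.cluster.local",
		strings.ReplaceAll(podIP, ".", "-"), namespace)
}

func main() {
	fmt.Println(podARecord("10.244.1.2", "dns-787"))
	// Output: 10-244-1-2.dns-787.pod.cluster.local
}
------------------------------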
• [SLOW TEST:10.091 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":16,"skipped":340,"failed":0} SS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:19:15.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Oct 30 01:19:19.235: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:19:19.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3465" for this suite. •SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:19:14.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-map-37423faf-c566-4ee1-9de5-e4f8dab49079 STEP: Creating a pod to test consume secrets Oct 30 01:19:14.450: INFO: Waiting up to 5m0s for pod "pod-secrets-5e9d607f-4817-4d03-b15d-1a7f5de9756a" in namespace "secrets-2160" to be "Succeeded or Failed" Oct 30 01:19:14.453: INFO: Pod "pod-secrets-5e9d607f-4817-4d03-b15d-1a7f5de9756a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.3478ms Oct 30 01:19:16.455: INFO: Pod "pod-secrets-5e9d607f-4817-4d03-b15d-1a7f5de9756a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005123965s Oct 30 01:19:18.460: INFO: Pod "pod-secrets-5e9d607f-4817-4d03-b15d-1a7f5de9756a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009379516s Oct 30 01:19:20.463: INFO: Pod "pod-secrets-5e9d607f-4817-4d03-b15d-1a7f5de9756a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.012318928s STEP: Saw pod success Oct 30 01:19:20.463: INFO: Pod "pod-secrets-5e9d607f-4817-4d03-b15d-1a7f5de9756a" satisfied condition "Succeeded or Failed" Oct 30 01:19:20.465: INFO: Trying to get logs from node node1 pod pod-secrets-5e9d607f-4817-4d03-b15d-1a7f5de9756a container secret-volume-test: STEP: delete the pod Oct 30 01:19:20.479: INFO: Waiting for pod pod-secrets-5e9d607f-4817-4d03-b15d-1a7f5de9756a to disappear Oct 30 01:19:20.481: INFO: Pod pod-secrets-5e9d607f-4817-4d03-b15d-1a7f5de9756a no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:19:20.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2160" for this suite. • [SLOW TEST:6.070 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":529,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} S ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:18:57.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service nodeport-service with the type=NodePort in namespace services-4937 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-4937 STEP: creating replication controller externalsvc in namespace services-4937 I1030 01:18:57.114834 27 runners.go:190] Created replication controller with name: externalsvc, namespace: services-4937, replica count: 2 I1030 01:19:00.167080 27 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1030 01:19:03.168164 27 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1030 01:19:06.168394 27 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Oct 30 01:19:06.181: INFO: Creating new exec pod Oct 30 01:19:10.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4937 
exec execpodf8rxw -- /bin/sh -x -c nslookup nodeport-service.services-4937.svc.cluster.local' Oct 30 01:19:10.582: INFO: stderr: "+ nslookup nodeport-service.services-4937.svc.cluster.local\n" Oct 30 01:19:10.582: INFO: stdout: "Server:\t\t10.233.0.3\nAddress:\t10.233.0.3#53\n\nnodeport-service.services-4937.svc.cluster.local\tcanonical name = externalsvc.services-4937.svc.cluster.local.\nName:\texternalsvc.services-4937.svc.cluster.local\nAddress: 10.233.43.7\n\n" STEP: deleting ReplicationController externalsvc in namespace services-4937, will wait for the garbage collector to delete the pods Oct 30 01:19:10.640: INFO: Deleting ReplicationController externalsvc took: 4.012173ms Oct 30 01:19:10.741: INFO: Terminating ReplicationController externalsvc pods took: 101.297377ms Oct 30 01:19:22.952: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:19:22.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4937" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:25.888 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":22,"skipped":294,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:19:18.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1308 STEP: creating the pod Oct 30 01:19:18.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8129 create -f -' Oct 30 01:19:19.162: INFO: stderr: "" Oct 30 01:19:19.162: INFO: stdout: "pod/pause created\n" Oct 30 01:19:19.162: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Oct 30 01:19:19.162: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-8129" to be "running and ready" Oct 30 01:19:19.165: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.62051ms Oct 30 01:19:21.169: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006355455s Oct 30 01:19:23.173: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010951532s Oct 30 01:19:25.179: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.016376951s Oct 30 01:19:25.179: INFO: Pod "pause" satisfied condition "running and ready" Oct 30 01:19:25.179: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: adding the label testing-label with value testing-label-value to a pod Oct 30 01:19:25.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8129 label pods pause testing-label=testing-label-value' Oct 30 01:19:25.344: INFO: stderr: "" Oct 30 01:19:25.344: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Oct 30 01:19:25.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8129 get pod pause -L testing-label' Oct 30 01:19:25.512: INFO: stderr: "" Oct 30 01:19:25.512: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 6s testing-label-value\n" STEP: removing the label testing-label of a pod Oct 30 01:19:25.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8129 label pods pause testing-label-' Oct 30 01:19:25.689: INFO: stderr: "" Oct 30 01:19:25.689: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Oct 30 01:19:25.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8129 get pod pause -L testing-label' Oct 30 01:19:25.846: INFO: stderr: "" Oct 30 01:19:25.846: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 6s \n" [AfterEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1314 STEP: using delete to clean up resources Oct 30 01:19:25.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8129 delete --grace-period=0 --force -f -' Oct 30 01:19:25.985: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Oct 30 01:19:25.985: INFO: stdout: "pod \"pause\" force deleted\n" Oct 30 01:19:25.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8129 get rc,svc -l name=pause --no-headers' Oct 30 01:19:26.185: INFO: stderr: "No resources found in kubectl-8129 namespace.\n" Oct 30 01:19:26.185: INFO: stdout: "" Oct 30 01:19:26.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8129 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Oct 30 01:19:26.349: INFO: stderr: "" Oct 30 01:19:26.349: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:19:26.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8129" for this suite. 
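------------------------------
One way to do from Go what the two kubectl label invocations above do: label adds and removals can be expressed as a strategic merge patch against the pod, where a null value deletes the key. Names are taken from the run above; the patch bodies are a sketch, not the exact bytes kubectl sends.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pods := clientset.CoreV1().Pods("kubectl-8129")

	// kubectl label pods pause testing-label=testing-label-value
	add := []byte(`{"metadata":{"labels":{"testing-label":"testing-label-value"}}}`)
	if _, err := pods.Patch(context.TODO(), "pause",
		types.StrategicMergePatchType, add, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// kubectl label pods pause testing-label-  (null removes the key)
	del := []byte(`{"metadata":{"labels":{"testing-label":null}}}`)
	if _, err := pods.Patch(context.TODO(), "pause",
		types.StrategicMergePatchType, del, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}
------------------------------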
• [SLOW TEST:7.608 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1306 should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":-1,"completed":27,"skipped":495,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:19:23.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in container's args Oct 30 01:19:23.036: INFO: Waiting up to 5m0s for pod "var-expansion-a3ab1aed-f3c2-49cb-9673-da8978556077" in namespace "var-expansion-7183" to be "Succeeded or Failed" Oct 30 01:19:23.039: INFO: Pod "var-expansion-a3ab1aed-f3c2-49cb-9673-da8978556077": Phase="Pending", Reason="", readiness=false. Elapsed: 2.159108ms Oct 30 01:19:25.044: INFO: Pod "var-expansion-a3ab1aed-f3c2-49cb-9673-da8978556077": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00790974s Oct 30 01:19:27.048: INFO: Pod "var-expansion-a3ab1aed-f3c2-49cb-9673-da8978556077": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011392427s STEP: Saw pod success Oct 30 01:19:27.048: INFO: Pod "var-expansion-a3ab1aed-f3c2-49cb-9673-da8978556077" satisfied condition "Succeeded or Failed" Oct 30 01:19:27.051: INFO: Trying to get logs from node node1 pod var-expansion-a3ab1aed-f3c2-49cb-9673-da8978556077 container dapi-container: STEP: delete the pod Oct 30 01:19:27.150: INFO: Waiting for pod var-expansion-a3ab1aed-f3c2-49cb-9673-da8978556077 to disappear Oct 30 01:19:27.152: INFO: Pod var-expansion-a3ab1aed-f3c2-49cb-9673-da8978556077 no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:19:27.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7183" for this suite. 
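------------------------------
The behavior the var-expansion spec above leans on: the kubelet expands $(VAR) references in a container's command and args from that container's own env before the process starts. A minimal container spec of that shape; the image, variable name, and value are illustrative rather than the test's exact manifest:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "dapi-container",
		Image:   "busybox",
		Command: []string{"sh", "-c"},
		// $(TEST_VAR) below is substituted by the kubelet, not by the shell.
		Args: []string{"echo $(TEST_VAR)"},
		Env: []corev1.EnvVar{
			{Name: "TEST_VAR", Value: "test-value"},
		},
	}
	fmt.Println(c.Args) // stored as-is; expanded when the container starts
}
------------------------------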
• ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":313,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:19:27.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:19:27.227: INFO: Waiting up to 5m0s for pod "busybox-user-65534-8dc68345-23b8-4f12-a7b5-6cb5573ebead" in namespace "security-context-test-5034" to be "Succeeded or Failed" Oct 30 01:19:27.230: INFO: Pod "busybox-user-65534-8dc68345-23b8-4f12-a7b5-6cb5573ebead": Phase="Pending", Reason="", readiness=false. Elapsed: 2.513467ms Oct 30 01:19:29.234: INFO: Pod "busybox-user-65534-8dc68345-23b8-4f12-a7b5-6cb5573ebead": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006335721s Oct 30 01:19:31.237: INFO: Pod "busybox-user-65534-8dc68345-23b8-4f12-a7b5-6cb5573ebead": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009724522s Oct 30 01:19:33.243: INFO: Pod "busybox-user-65534-8dc68345-23b8-4f12-a7b5-6cb5573ebead": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015124773s Oct 30 01:19:33.243: INFO: Pod "busybox-user-65534-8dc68345-23b8-4f12-a7b5-6cb5573ebead" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:19:33.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5034" for this suite. 
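------------------------------
The pod the Security Context spec above creates, reduced to the field under test: a container-level securityContext with runAsUser 65534 (the conventional "nobody" uid, hence the busybox-user-65534 pod name). Image and command are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	uid := int64(65534)
	spec := corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Containers: []corev1.Container{{
			Name:    "busybox-user-65534",
			Image:   "busybox",
			Command: []string{"sh", "-c", "id -u"},
			SecurityContext: &corev1.SecurityContext{
				// The container's process runs with this uid.
				RunAsUser: &uid,
			},
		}},
	}
	fmt.Printf("runAsUser=%d\n", *spec.Containers[0].SecurityContext.RunAsUser)
}
------------------------------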
• [SLOW TEST:6.061 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsUser /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:50 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":326,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:19:19.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:19:35.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7566" for this suite. • [SLOW TEST:16.116 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:19:26.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:19:30.425: INFO: Deleting pod "var-expansion-52f3b947-e1a6-4cf7-b6bd-8d8bac45c94d" in namespace "var-expansion-6429" Oct 30 01:19:30.430: INFO: Wait up to 5m0s for pod "var-expansion-52f3b947-e1a6-4cf7-b6bd-8d8bac45c94d" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:19:44.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6429" for this suite. • [SLOW TEST:18.066 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should fail substituting values in a volume subpath with backticks [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":-1,"completed":28,"skipped":507,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:19:33.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 30 01:19:33.344: INFO: Waiting up to 5m0s for pod "downwardapi-volume-87756aec-1a78-4beb-b4a5-6c30795bfa55" in namespace "projected-327" to be "Succeeded or Failed" Oct 30 01:19:33.348: INFO: Pod "downwardapi-volume-87756aec-1a78-4beb-b4a5-6c30795bfa55": Phase="Pending", Reason="", readiness=false. Elapsed: 3.926799ms Oct 30 01:19:35.353: INFO: Pod "downwardapi-volume-87756aec-1a78-4beb-b4a5-6c30795bfa55": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008540682s Oct 30 01:19:37.358: INFO: Pod "downwardapi-volume-87756aec-1a78-4beb-b4a5-6c30795bfa55": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.013860747s Oct 30 01:19:39.363: INFO: Pod "downwardapi-volume-87756aec-1a78-4beb-b4a5-6c30795bfa55": Phase="Pending", Reason="", readiness=false. Elapsed: 6.018833174s Oct 30 01:19:41.367: INFO: Pod "downwardapi-volume-87756aec-1a78-4beb-b4a5-6c30795bfa55": Phase="Pending", Reason="", readiness=false. Elapsed: 8.022595782s Oct 30 01:19:43.371: INFO: Pod "downwardapi-volume-87756aec-1a78-4beb-b4a5-6c30795bfa55": Phase="Pending", Reason="", readiness=false. Elapsed: 10.026424354s Oct 30 01:19:45.374: INFO: Pod "downwardapi-volume-87756aec-1a78-4beb-b4a5-6c30795bfa55": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.029514941s STEP: Saw pod success Oct 30 01:19:45.374: INFO: Pod "downwardapi-volume-87756aec-1a78-4beb-b4a5-6c30795bfa55" satisfied condition "Succeeded or Failed" Oct 30 01:19:45.377: INFO: Trying to get logs from node node1 pod downwardapi-volume-87756aec-1a78-4beb-b4a5-6c30795bfa55 container client-container: STEP: delete the pod Oct 30 01:19:45.463: INFO: Waiting for pod downwardapi-volume-87756aec-1a78-4beb-b4a5-6c30795bfa55 to disappear Oct 30 01:19:45.466: INFO: Pod downwardapi-volume-87756aec-1a78-4beb-b4a5-6c30795bfa55 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:19:45.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-327" for this suite. • [SLOW TEST:12.169 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":357,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSS ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":-1,"completed":17,"skipped":358,"failed":0} [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:19:35.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] Deployment should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:19:35.419: INFO: Creating simple deployment test-new-deployment Oct 30 01:19:35.427: INFO: new replicaset for deployment "test-new-deployment" is yet to be created Oct 30 01:19:37.438: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153575, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153575, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153575, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153575, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:19:39.442: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153575, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153575, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153575, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153575, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:19:41.441: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153575, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153575, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153575, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153575, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:19:43.443: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153575, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153575, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153575, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153575, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the deployment Spec.Replicas was modified STEP: Patch a scale subresource [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Oct 30 01:19:45.460: INFO: Deployment "test-new-deployment": &Deployment{ObjectMeta:{test-new-deployment deployment-5165 f276abab-c3a3-4a93-af42-015d1bc9a0c9 93969 3 2021-10-30 01:19:35 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2021-10-30 01:19:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-10-30 01:19:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00061d2f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-10-30 01:19:45 +0000 UTC,LastTransitionTime:2021-10-30 01:19:45 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-new-deployment-847dcfb7fb" has successfully progressed.,LastUpdateTime:2021-10-30 01:19:45 +0000 UTC,LastTransitionTime:2021-10-30 01:19:35 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Oct 30 01:19:45.463: INFO: New ReplicaSet "test-new-deployment-847dcfb7fb" of Deployment "test-new-deployment": &ReplicaSet{ObjectMeta:{test-new-deployment-847dcfb7fb deployment-5165 502c93d9-b60b-4bae-90d5-84d9efa2b7bf 93972 3 2021-10-30 01:19:35 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:4 deployment.kubernetes.io/max-replicas:5 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-new-deployment f276abab-c3a3-4a93-af42-015d1bc9a0c9 0xc00061dde7 0xc00061dde8}] [] [{kube-controller-manager Update apps/v1 2021-10-30 01:19:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f276abab-c3a3-4a93-af42-015d1bc9a0c9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00061df18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Oct 30 01:19:45.466: INFO: Pod "test-new-deployment-847dcfb7fb-n7xj8" is not available: &Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-n7xj8 test-new-deployment-847dcfb7fb- deployment-5165 219f71d9-168f-4f96-bdcb-c6eb1d4e9a2d 93973 0 2021-10-30 01:19:45 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb 502c93d9-b60b-4bae-90d5-84d9efa2b7bf 0xc00151c2bf 0xc00151c2d0}] [] [{kube-controller-manager Update v1 2021-10-30 01:19:45 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"502c93d9-b60b-4bae-90d5-84d9efa2b7bf\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-wbq2j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wbq2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:
,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 30 01:19:45.467: INFO: Pod "test-new-deployment-847dcfb7fb-vlc64" is available: &Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-vlc64 test-new-deployment-847dcfb7fb- deployment-5165 5fd2f513-eec0-434c-8338-48f657f2a703 93961 0 2021-10-30 01:19:35 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.165" ], "mac": "92:b6:fa:30:dd:45", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.165" ], "mac": "92:b6:fa:30:dd:45", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb 502c93d9-b60b-4bae-90d5-84d9efa2b7bf 0xc00151c40f 0xc00151c420}] [] [{kube-controller-manager Update v1 2021-10-30 01:19:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"502c93d9-b60b-4bae-90d5-84d9efa2b7bf\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-30 01:19:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-30 01:19:45 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.165\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-scx74,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-scx74,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Volu
meDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:19:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:19:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:19:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:19:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.165,StartTime:2021-10-30 01:19:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-30 01:19:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://f93bab0c27452da6e0faf0faa201fc62538d6574515226b8b276d115076f22bc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.165,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:19:45.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5165" for this suite. 
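------------------------------
The Deployment block above drives the scale subresource end to end: the test reads /scale, updates the replica count through it, and verifies the result. A minimal client-go sketch of the same call sequence, assuming client-go v0.21 to match the suite; the namespace and deployment name are placeholders, not taken from the run:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig path the suite logs at startup.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// Read the scale subresource rather than the Deployment object itself.
	scale, err := client.AppsV1().Deployments("default").GetScale(ctx, "example-deployment", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Bump the replica count and write it back through /scale.
	scale.Spec.Replicas = 2
	if _, err := client.AppsV1().Deployments("default").UpdateScale(ctx, "example-deployment", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("replicas updated via the scale subresource")
}
------------------------------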
• [SLOW TEST:10.091 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Deployment should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":18,"skipped":358,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:19:44.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-c331f6b4-490a-4dcd-9a25-a600858e26ff STEP: Creating a pod to test consume secrets Oct 30 01:19:44.496: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-44b57f8d-cb22-482d-9a77-d882d0d2099a" in namespace "projected-7702" to be "Succeeded or Failed" Oct 30 01:19:44.499: INFO: Pod "pod-projected-secrets-44b57f8d-cb22-482d-9a77-d882d0d2099a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.297833ms Oct 30 01:19:46.503: INFO: Pod "pod-projected-secrets-44b57f8d-cb22-482d-9a77-d882d0d2099a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006415318s Oct 30 01:19:48.507: INFO: Pod "pod-projected-secrets-44b57f8d-cb22-482d-9a77-d882d0d2099a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010387026s STEP: Saw pod success Oct 30 01:19:48.507: INFO: Pod "pod-projected-secrets-44b57f8d-cb22-482d-9a77-d882d0d2099a" satisfied condition "Succeeded or Failed" Oct 30 01:19:48.509: INFO: Trying to get logs from node node1 pod pod-projected-secrets-44b57f8d-cb22-482d-9a77-d882d0d2099a container projected-secret-volume-test: STEP: delete the pod Oct 30 01:19:48.521: INFO: Waiting for pod pod-projected-secrets-44b57f8d-cb22-482d-9a77-d882d0d2099a to disappear Oct 30 01:19:48.523: INFO: Pod pod-projected-secrets-44b57f8d-cb22-482d-9a77-d882d0d2099a no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:19:48.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7702" for this suite. 
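------------------------------
The Projected-secret block above boils down to a pod shaped roughly like the sketch below: a projected volume wrapping a secret source, an explicit defaultMode, and a pod-level fsGroup with a non-root UID. This is a hand-written approximation, not the test's exact fixture; the image tag, UIDs, and mode value are illustrative:

package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// PodWithProjectedSecret approximates the fixture behind the test above:
// a projected secret volume with an explicit defaultMode, consumed by a
// non-root user, with fsGroup set at pod level.
func PodWithProjectedSecret() *corev1.Pod {
	runAsUser := int64(1000)  // non-root, per the [LinuxOnly] variant
	fsGroup := int64(1001)    // group ownership applied to the volume files
	defaultMode := int32(0440)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: &runAsUser,
				FSGroup:   &fsGroup,
			},
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &defaultMode,
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "projected-secret-volume-test",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32", // illustrative tag
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
}
------------------------------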
• ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":510,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:19:20.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-8882 STEP: creating a selector STEP: Creating the service pods in kubernetes Oct 30 01:19:20.514: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Oct 30 01:19:20.547: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:19:22.551: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:19:24.552: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:19:26.552: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 01:19:28.552: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 01:19:30.550: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 01:19:32.550: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 01:19:34.555: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 01:19:36.552: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 01:19:38.555: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 01:19:40.550: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 30 01:19:42.552: INFO: The status of Pod netserver-0 is Running (Ready = true) Oct 30 01:19:42.557: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Oct 30 01:19:48.594: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Oct 30 01:19:48.594: INFO: Going to poll 10.244.3.162 on port 8081 at least 0 times, with a maximum of 34 tries before failing Oct 30 01:19:48.598: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.3.162 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8882 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 01:19:48.598: INFO: >>> kubeConfig: /root/.kube/config Oct 30 01:19:49.687: INFO: Found all 1 expected endpoints: [netserver-0] Oct 30 01:19:49.687: INFO: Going to poll 10.244.4.30 on port 8081 at least 0 times, with a maximum of 34 tries before failing Oct 30 01:19:49.689: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.4.30 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8882 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false Quiet:false} Oct 30 01:19:49.689: INFO: >>> kubeConfig: /root/.kube/config Oct 30 01:19:50.770: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:19:50.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8882" for this suite. • [SLOW TEST:30.290 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":530,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:19:45.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:77 Oct 30 01:19:45.614: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the sample API server. 
Oct 30 01:19:45.920: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Oct 30 01:19:47.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153585, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153585, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153585, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153585, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:19:49.953: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153585, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153585, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153585, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153585, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:19:52.776: INFO: Waited 814.271951ms for the sample-apiserver to be ready to handle requests. STEP: Read Status for v1alpha1.wardle.example.com STEP: kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}' STEP: List APIServices Oct 30 01:19:53.221: INFO: Found v1alpha1.wardle.example.com in APIServiceList [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:68 [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:19:54.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-2879" for this suite. 
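------------------------------
The Aggregator block above registers an APIService object that tells kube-apiserver to proxy wardle.example.com traffic to the sample apiserver's Service, then adjusts spec.versionPriority (the kubectl patch line in the log). A sketch of the registration step with the kube-aggregator client, assuming v0.21; the Service name, port, and priority values are illustrative, and the CA bundle, which the real test populates from the sample apiserver's certificate, is elided:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
	apiregistrationv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
	aggregatorclient "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := aggregatorclient.NewForConfigOrDie(cfg)

	port := int32(443)
	// Point v1alpha1.wardle.example.com at the extension apiserver's Service.
	// Spec.CABundle is omitted here; the real test sets it from the server cert.
	apiService := &apiregistrationv1.APIService{
		ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.example.com"},
		Spec: apiregistrationv1.APIServiceSpec{
			Group:                "wardle.example.com",
			Version:              "v1alpha1",
			GroupPriorityMinimum: 2000,
			VersionPriority:      200, // the log shows this being patched to 400
			Service: &apiregistrationv1.ServiceReference{
				Namespace: "aggregator-2879",
				Name:      "sample-api",
				Port:      &port,
			},
		},
	}
	if _, err := client.ApiregistrationV1().APIServices().Create(
		context.Background(), apiService, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------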
• [SLOW TEST:8.520 seconds] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":26,"skipped":418,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:19:48.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:19:59.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8841" for this suite. • [SLOW TEST:11.062 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":-1,"completed":30,"skipped":520,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:19:59.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on tmpfs Oct 30 01:19:59.667: INFO: Waiting up to 5m0s for pod "pod-0ca9f2a7-4f32-4b41-97e8-6d0039ca9409" in namespace "emptydir-3060" to be "Succeeded or Failed" Oct 30 01:19:59.672: INFO: Pod "pod-0ca9f2a7-4f32-4b41-97e8-6d0039ca9409": Phase="Pending", Reason="", readiness=false. Elapsed: 5.142593ms Oct 30 01:20:01.676: INFO: Pod "pod-0ca9f2a7-4f32-4b41-97e8-6d0039ca9409": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009156301s Oct 30 01:20:03.681: INFO: Pod "pod-0ca9f2a7-4f32-4b41-97e8-6d0039ca9409": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013665198s STEP: Saw pod success Oct 30 01:20:03.681: INFO: Pod "pod-0ca9f2a7-4f32-4b41-97e8-6d0039ca9409" satisfied condition "Succeeded or Failed" Oct 30 01:20:03.683: INFO: Trying to get logs from node node1 pod pod-0ca9f2a7-4f32-4b41-97e8-6d0039ca9409 container test-container: STEP: delete the pod Oct 30 01:20:03.697: INFO: Waiting for pod pod-0ca9f2a7-4f32-4b41-97e8-6d0039ca9409 to disappear Oct 30 01:20:03.699: INFO: Pod pod-0ca9f2a7-4f32-4b41-97e8-6d0039ca9409 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:20:03.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3060" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":525,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSS ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":42,"skipped":791,"failed":0} [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:19:19.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath Oct 30 01:19:25.302: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-9301 PodName:var-expansion-a55be079-124c-49ed-bd28-75cef66a0df6 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 01:19:25.302: INFO: >>> kubeConfig: /root/.kube/config STEP: test for file in mounted path Oct 30 01:19:25.398: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-9301 PodName:var-expansion-a55be079-124c-49ed-bd28-75cef66a0df6 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 01:19:25.398: INFO: >>> kubeConfig: /root/.kube/config STEP: updating the annotation value Oct 30 01:19:25.981: INFO: Successfully updated pod "var-expansion-a55be079-124c-49ed-bd28-75cef66a0df6" STEP: waiting for annotated pod running STEP: deleting the pod gracefully Oct 30 01:19:25.984: INFO: Deleting pod "var-expansion-a55be079-124c-49ed-bd28-75cef66a0df6" in namespace "var-expansion-9301" Oct 30 01:19:25.989: INFO: Wait up to 5m0s for pod "var-expansion-a55be079-124c-49ed-bd28-75cef66a0df6" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:20:03.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9301" for this suite. 
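------------------------------
The Variable Expansion block above exercises subPathExpr: the same volume is mounted twice, once whole at /volume_mount and once at /subpath_mount through an expression that expands an env var fed from a pod annotation by the downward API, which is why updating the annotation mid-test is meaningful. A rough sketch of that wiring; the annotation key, image, and expanded path are illustrative where the log does not show them:

package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// PodWithExpandedSubPath mounts one emptyDir twice: whole at /volume_mount
// and through subPathExpr at /subpath_mount. A file written under the
// expanded directory shows up at the subpath mount, which is what the two
// ExecWithOptions probes in the log (touch, then test -f) verify.
func PodWithExpandedSubPath() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "var-expansion-example",
			Annotations: map[string]string{"mysubpath": "mypath/foo"}, // illustrative key
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name:         "workdir1",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29-1", // illustrative
				Command: []string{"sh", "-c", "sleep 3600"},
				Env: []corev1.EnvVar{{
					Name: "POD_NAME",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{
							APIVersion: "v1",
							FieldPath:  "metadata.annotations['mysubpath']",
						},
					},
				}},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "workdir1", MountPath: "/volume_mount"},
					{Name: "workdir1", MountPath: "/subpath_mount", SubPathExpr: "$(POD_NAME)"},
				},
			}},
		},
	}
}
------------------------------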
• [SLOW TEST:44.746 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should succeed in writing subpaths in container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":-1,"completed":43,"skipped":791,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:19:54.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name s-test-opt-del-a4969b47-3834-4e67-b9a0-1d655671b393 STEP: Creating secret with name s-test-opt-upd-1babad5b-8cbf-4180-a06a-5bcf5c0b1707 STEP: Creating the pod Oct 30 01:19:54.187: INFO: The status of Pod pod-secrets-227f8da8-eab8-4319-9efc-f7dda46cffc4 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:19:56.192: INFO: The status of Pod pod-secrets-227f8da8-eab8-4319-9efc-f7dda46cffc4 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:19:58.192: INFO: The status of Pod pod-secrets-227f8da8-eab8-4319-9efc-f7dda46cffc4 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:20:00.192: INFO: The status of Pod pod-secrets-227f8da8-eab8-4319-9efc-f7dda46cffc4 is Running (Ready = true) STEP: Deleting secret s-test-opt-del-a4969b47-3834-4e67-b9a0-1d655671b393 STEP: Updating secret s-test-opt-upd-1babad5b-8cbf-4180-a06a-5bcf5c0b1707 STEP: Creating secret with name s-test-opt-create-f27c5df4-69cf-4fc6-b8a8-489f38592f10 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:20:04.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1913" for this suite. 
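------------------------------
In the Secrets block above, all three behaviors hang off SecretVolumeSource.Optional: one volume references a secret that is deleted mid-test, one a secret whose data is updated, and one a secret that is only created after the pod is running. A sketch of one such volume; the names are placeholders:

package sketches

import corev1 "k8s.io/api/core/v1"

// OptionalSecretVolume builds the kind of volume the test attaches three
// times over. Optional=true lets the pod start, and keep running, even
// while the named secret is absent, which is what makes the
// delete/update/create trio observable through a single running pod.
func OptionalSecretVolume(name, secretName string) corev1.Volume {
	optional := true
	return corev1.Volume{
		Name: name,
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: secretName,
				Optional:   &optional,
			},
		},
	}
}

The kubelet's periodic sync re-projects the volume contents after each API change, which is the "waiting to observe update in volume" step in the log.
------------------------------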
• [SLOW TEST:10.120 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":433,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} S ------------------------------ [BeforeEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:20:04.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:20:04.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-1349" for this suite. • ------------------------------ {"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":28,"skipped":434,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:20:03.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-9039ffde-bcde-45e7-b85c-19044c4fe2d4 STEP: Creating a pod to test consume secrets Oct 30 01:20:03.779: INFO: Waiting up to 5m0s for pod "pod-secrets-b3793528-7d4e-4fb0-acd5-ac9efa89a03f" in namespace "secrets-309" to be "Succeeded or Failed" Oct 30 01:20:03.782: INFO: Pod "pod-secrets-b3793528-7d4e-4fb0-acd5-ac9efa89a03f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.680999ms Oct 30 01:20:05.785: INFO: Pod "pod-secrets-b3793528-7d4e-4fb0-acd5-ac9efa89a03f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005377922s Oct 30 01:20:07.787: INFO: Pod "pod-secrets-b3793528-7d4e-4fb0-acd5-ac9efa89a03f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008143151s STEP: Saw pod success Oct 30 01:20:07.787: INFO: Pod "pod-secrets-b3793528-7d4e-4fb0-acd5-ac9efa89a03f" satisfied condition "Succeeded or Failed" Oct 30 01:20:07.790: INFO: Trying to get logs from node node1 pod pod-secrets-b3793528-7d4e-4fb0-acd5-ac9efa89a03f container secret-volume-test: STEP: delete the pod Oct 30 01:20:07.933: INFO: Waiting for pod pod-secrets-b3793528-7d4e-4fb0-acd5-ac9efa89a03f to disappear Oct 30 01:20:07.935: INFO: Pod pod-secrets-b3793528-7d4e-4fb0-acd5-ac9efa89a03f no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:20:07.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-309" for this suite. STEP: Destroying namespace "secret-namespace-1320" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":529,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:20:04.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:20:08.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7915" for this suite. 
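------------------------------
The Docker Containers block above is almost entirely declarative: with Command and Args left empty, the kubelet runs the image's own ENTRYPOINT and CMD, and the test only checks the resulting output. A sketch, with an illustrative image tag:

package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// PodUsingImageDefaults leaves Command and Args unset, so the container
// runs with the image's own ENTRYPOINT and CMD, which is the behavior
// under test.
func PodUsingImageDefaults() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32", // illustrative tag
				// Command and Args deliberately omitted.
			}},
		},
	}
}
------------------------------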
• ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":44,"skipped":820,"failed":0} SS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:20:08.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] Replace and Patch tests [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:20:08.149: INFO: Pod name sample-pod: Found 0 pods out of 1 Oct 30 01:20:13.155: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running STEP: Scaling up "test-rs" replicaset Oct 30 01:20:13.160: INFO: Updating replica set "test-rs" STEP: patching the ReplicaSet Oct 30 01:20:13.168: INFO: observed ReplicaSet test-rs in namespace replicaset-9387 with ReadyReplicas 1, AvailableReplicas 1 Oct 30 01:20:13.178: INFO: observed ReplicaSet test-rs in namespace replicaset-9387 with ReadyReplicas 1, AvailableReplicas 1 Oct 30 01:20:13.208: INFO: observed ReplicaSet test-rs in namespace replicaset-9387 with ReadyReplicas 1, AvailableReplicas 1 Oct 30 01:20:13.212: INFO: observed ReplicaSet test-rs in namespace replicaset-9387 with ReadyReplicas 1, AvailableReplicas 1 Oct 30 01:20:17.184: INFO: observed ReplicaSet test-rs in namespace replicaset-9387 with ReadyReplicas 2, AvailableReplicas 2 Oct 30 01:20:17.195: INFO: observed Replicaset test-rs in namespace replicaset-9387 with ReadyReplicas 3 found true [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:20:17.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9387" for this suite. 
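------------------------------
The ReplicaSet Replace-and-Patch block above scales test-rs first by replacing the object and then by patching it. The patch half reduces to a strategic-merge patch against spec.replicas, sketched here with client-go v0.21; the object and namespace names are copied from the log, the patch payload is illustrative:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Strategic-merge patch of spec.replicas, the "Patch" half of the test.
	patch := []byte(`{"spec":{"replicas":3}}`)
	rs, err := client.AppsV1().ReplicaSets("replicaset-9387").Patch(
		context.Background(), "test-rs", types.StrategicMergePatchType,
		patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("patched %s, spec.replicas now %d\n", rs.Name, *rs.Spec.Replicas)
}

The controller then reconciles toward the new count, which is why the log's observers see ReadyReplicas climb from 1 to 3 over several watch events rather than in one step.
------------------------------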
• [SLOW TEST:9.082 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Replace and Patch tests [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":-1,"completed":45,"skipped":822,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:20:04.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostport STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/hostport.go:47 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to create a pod(pod1) with hostport 54323 and hostIP 127.0.0.1 and expect scheduled Oct 30 01:20:04.396: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:20:06.399: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:20:08.401: INFO: The status of Pod pod1 is Running (Ready = true) STEP: Trying to create another pod(pod2) with hostport 54323 but hostIP 10.10.190.208 on the node which pod1 resides and expect scheduled Oct 30 01:20:08.414: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:20:10.416: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:20:12.418: INFO: The status of Pod pod2 is Running (Ready = true) STEP: Trying to create a third pod(pod3) with hostport 54323, hostIP 10.10.190.208 but use UDP protocol on the node which pod2 resides Oct 30 01:20:12.431: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:20:14.435: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:20:16.434: INFO: The status of Pod pod3 is Running (Ready = true) Oct 30 01:20:16.446: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:20:18.449: INFO: The status of Pod e2e-host-exec is Running (Ready = true) STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 Oct 30 01:20:18.452: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 10.10.190.208 http://127.0.0.1:54323/hostname] Namespace:hostport-2587 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 01:20:18.452: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.10.190.208, port: 54323 Oct 30 01:20:18.544: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://10.10.190.208:54323/hostname] Namespace:hostport-2587 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false Quiet:false} Oct 30 01:20:18.544: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.10.190.208, port: 54323 UDP Oct 30 01:20:18.636: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 10.10.190.208 54323] Namespace:hostport-2587 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 30 01:20:18.636: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:20:23.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostport-2587" for this suite. • [SLOW TEST:19.372 seconds] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":29,"skipped":452,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:20:17.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating replication controller my-hostname-basic-ef895dd7-ef37-46d8-b009-02015cfa7841 Oct 30 01:20:17.243: INFO: Pod name my-hostname-basic-ef895dd7-ef37-46d8-b009-02015cfa7841: Found 0 pods out of 1 Oct 30 01:20:22.246: INFO: Pod name my-hostname-basic-ef895dd7-ef37-46d8-b009-02015cfa7841: Found 1 pods out of 1 Oct 30 01:20:22.246: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-ef895dd7-ef37-46d8-b009-02015cfa7841" are running Oct 30 01:20:22.248: INFO: Pod "my-hostname-basic-ef895dd7-ef37-46d8-b009-02015cfa7841-hv5tv" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-30 01:20:17 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-30 01:20:19 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-30 01:20:19 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-30 01:20:17 +0000 UTC Reason: Message:}]) Oct 30 01:20:22.248: INFO: Trying to dial the 
pod Oct 30 01:20:27.260: INFO: Controller my-hostname-basic-ef895dd7-ef37-46d8-b009-02015cfa7841: Got expected result from replica 1 [my-hostname-basic-ef895dd7-ef37-46d8-b009-02015cfa7841-hv5tv]: "my-hostname-basic-ef895dd7-ef37-46d8-b009-02015cfa7841-hv5tv", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:20:27.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9920" for this suite. • [SLOW TEST:10.057 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":46,"skipped":827,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:20:23.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override all Oct 30 01:20:23.814: INFO: Waiting up to 5m0s for pod "client-containers-11db774d-66b9-4792-9636-03fcca290593" in namespace "containers-512" to be "Succeeded or Failed" Oct 30 01:20:23.816: INFO: Pod "client-containers-11db774d-66b9-4792-9636-03fcca290593": Phase="Pending", Reason="", readiness=false. Elapsed: 2.375617ms Oct 30 01:20:25.819: INFO: Pod "client-containers-11db774d-66b9-4792-9636-03fcca290593": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005631647s Oct 30 01:20:27.823: INFO: Pod "client-containers-11db774d-66b9-4792-9636-03fcca290593": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009014684s STEP: Saw pod success Oct 30 01:20:27.823: INFO: Pod "client-containers-11db774d-66b9-4792-9636-03fcca290593" satisfied condition "Succeeded or Failed" Oct 30 01:20:27.826: INFO: Trying to get logs from node node2 pod client-containers-11db774d-66b9-4792-9636-03fcca290593 container agnhost-container: STEP: delete the pod Oct 30 01:20:27.840: INFO: Waiting for pod client-containers-11db774d-66b9-4792-9636-03fcca290593 to disappear Oct 30 01:20:27.843: INFO: Pod client-containers-11db774d-66b9-4792-9636-03fcca290593 no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:20:27.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-512" for this suite. 
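------------------------------
The "override all" pod in the Docker Containers block above sets both Command and Args, meaning both the image ENTRYPOINT and CMD are replaced; this is the strongest of the override combinations the conformance tests cover. A sketch of just the container spec that matters, with an illustrative image and command line:

package sketches

import corev1 "k8s.io/api/core/v1"

// OverrideAllContainer sets both Command (replaces the image ENTRYPOINT)
// and Args (replaces the image CMD).
func OverrideAllContainer() corev1.Container {
	return corev1.Container{
		Name:    "agnhost-container",
		Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29-1", // illustrative
		Command: []string{"/bin/sh"},                         // replaces ENTRYPOINT
		Args:    []string{"-c", "echo overridden"},           // replaces CMD
	}
}
------------------------------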
• ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":473,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:18:03.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-ef202d16-e74f-448a-a270-82f980902b05 in namespace container-probe-3358 Oct 30 01:18:07.116: INFO: Started pod liveness-ef202d16-e74f-448a-a270-82f980902b05 in namespace container-probe-3358 STEP: checking the pod's current state and verifying that restartCount is present Oct 30 01:18:07.120: INFO: Initial restart count of pod liveness-ef202d16-e74f-448a-a270-82f980902b05 is 0 Oct 30 01:18:25.161: INFO: Restart count of pod container-probe-3358/liveness-ef202d16-e74f-448a-a270-82f980902b05 is now 1 (18.041519139s elapsed) Oct 30 01:18:47.203: INFO: Restart count of pod container-probe-3358/liveness-ef202d16-e74f-448a-a270-82f980902b05 is now 2 (40.083236501s elapsed) Oct 30 01:19:05.242: INFO: Restart count of pod container-probe-3358/liveness-ef202d16-e74f-448a-a270-82f980902b05 is now 3 (58.122375508s elapsed) Oct 30 01:19:25.285: INFO: Restart count of pod container-probe-3358/liveness-ef202d16-e74f-448a-a270-82f980902b05 is now 4 (1m18.16511826s elapsed) Oct 30 01:20:39.421: INFO: Restart count of pod container-probe-3358/liveness-ef202d16-e74f-448a-a270-82f980902b05 is now 5 (2m32.301633965s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:20:39.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3358" for this suite. 
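------------------------------
The Probing-container block's restart history above (restarts after roughly 18s, 22s, 18s, 20s, then 74s) is the liveness machinery combined with crash-loop back-off, which stretches the later intervals; that is exactly why the test asserts a monotonically increasing count rather than fixed timing. A deliberately failing probe reproduces the behavior. Note this sketch targets the v1.21 API used by this suite, where the embedded field is still named Handler; later releases rename it to ProbeHandler:

package sketches

import corev1 "k8s.io/api/core/v1"

// FailingLiveness is a probe that always fails, so the kubelet kills and
// restarts the container on every probe cycle; back-off then spaces the
// restarts further apart while restartCount only ever increases.
func FailingLiveness() *corev1.Probe {
	return &corev1.Probe{
		Handler: corev1.Handler{
			Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
		},
		InitialDelaySeconds: 5,
		PeriodSeconds:       5,
		FailureThreshold:    1,
	}
}
------------------------------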
• [SLOW TEST:156.356 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":163,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:20:27.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:20:27.307: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-7566 I1030 01:20:27.327589 30 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-7566, replica count: 1 I1030 01:20:28.379349 30 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1030 01:20:29.379655 30 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1030 01:20:30.380296 30 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1030 01:20:31.380718 30 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 30 01:20:31.487: INFO: Created: latency-svc-mdf9k Oct 30 01:20:31.492: INFO: Got endpoints: latency-svc-mdf9k [10.858154ms] Oct 30 01:20:31.508: INFO: Created: latency-svc-fftz7 Oct 30 01:20:31.510: INFO: Got endpoints: latency-svc-fftz7 [18.10736ms] Oct 30 01:20:31.511: INFO: Created: latency-svc-dd4ss Oct 30 01:20:31.513: INFO: Created: latency-svc-h7fct Oct 30 01:20:31.514: INFO: Got endpoints: latency-svc-dd4ss [21.570582ms] Oct 30 01:20:31.516: INFO: Created: latency-svc-5dfc7 Oct 30 01:20:31.516: INFO: Got endpoints: latency-svc-h7fct [23.475894ms] Oct 30 01:20:31.518: INFO: Got endpoints: latency-svc-5dfc7 [25.563107ms] Oct 30 01:20:31.518: INFO: Created: latency-svc-2gklc Oct 30 01:20:31.521: INFO: Got endpoints: latency-svc-2gklc [27.989906ms] Oct 30 01:20:31.521: INFO: Created: latency-svc-gmx8r Oct 30 01:20:31.523: INFO: Got endpoints: latency-svc-gmx8r [29.885236ms] Oct 30 01:20:31.524: INFO: Created: latency-svc-tq2dw Oct 30 01:20:31.526: INFO: Got endpoints: latency-svc-tq2dw [8.331396ms] Oct 30 01:20:31.527: INFO: Created: latency-svc-t4ftx Oct 30 01:20:31.529: INFO: Got endpoints: latency-svc-t4ftx [36.800183ms] Oct 30 01:20:31.530: INFO: Created: latency-svc-65bxf Oct 30 01:20:31.532: INFO: Got endpoints: latency-svc-65bxf [38.939163ms] Oct 30 01:20:31.532: INFO: Created: latency-svc-kcmfb Oct 30 01:20:31.535: INFO: Got endpoints: latency-svc-kcmfb 
[41.87776ms] Oct 30 01:20:31.535: INFO: Created: latency-svc-m7dfj Oct 30 01:20:31.537: INFO: Got endpoints: latency-svc-m7dfj [44.797501ms] Oct 30 01:20:31.538: INFO: Created: latency-svc-djbrj Oct 30 01:20:31.540: INFO: Got endpoints: latency-svc-djbrj [47.34857ms] Oct 30 01:20:31.541: INFO: Created: latency-svc-httc2 Oct 30 01:20:31.543: INFO: Got endpoints: latency-svc-httc2 [50.722914ms] Oct 30 01:20:31.544: INFO: Created: latency-svc-4nfbt Oct 30 01:20:31.546: INFO: Got endpoints: latency-svc-4nfbt [53.111387ms] Oct 30 01:20:31.547: INFO: Created: latency-svc-rpzbt Oct 30 01:20:31.548: INFO: Got endpoints: latency-svc-rpzbt [55.228124ms] Oct 30 01:20:31.549: INFO: Created: latency-svc-tj6ks Oct 30 01:20:31.552: INFO: Got endpoints: latency-svc-tj6ks [58.430684ms] Oct 30 01:20:31.552: INFO: Created: latency-svc-xd5db Oct 30 01:20:31.554: INFO: Created: latency-svc-7prcw Oct 30 01:20:31.555: INFO: Got endpoints: latency-svc-xd5db [44.486364ms] Oct 30 01:20:31.557: INFO: Got endpoints: latency-svc-7prcw [43.383968ms] Oct 30 01:20:31.558: INFO: Created: latency-svc-t8gtm Oct 30 01:20:31.560: INFO: Created: latency-svc-llqhj Oct 30 01:20:31.560: INFO: Got endpoints: latency-svc-t8gtm [44.60662ms] Oct 30 01:20:31.563: INFO: Got endpoints: latency-svc-llqhj [42.418871ms] Oct 30 01:20:31.565: INFO: Created: latency-svc-p4796 Oct 30 01:20:31.567: INFO: Got endpoints: latency-svc-p4796 [43.56038ms] Oct 30 01:20:31.567: INFO: Created: latency-svc-26drk Oct 30 01:20:31.569: INFO: Created: latency-svc-cxxcj Oct 30 01:20:31.570: INFO: Got endpoints: latency-svc-26drk [43.000952ms] Oct 30 01:20:31.572: INFO: Got endpoints: latency-svc-cxxcj [42.750468ms] Oct 30 01:20:31.572: INFO: Created: latency-svc-dt4rc Oct 30 01:20:31.575: INFO: Got endpoints: latency-svc-dt4rc [43.693227ms] Oct 30 01:20:31.577: INFO: Created: latency-svc-6gq9c Oct 30 01:20:31.579: INFO: Got endpoints: latency-svc-6gq9c [44.80241ms] Oct 30 01:20:31.580: INFO: Created: latency-svc-4dxwr Oct 30 01:20:31.582: INFO: Got endpoints: latency-svc-4dxwr [44.885399ms] Oct 30 01:20:31.583: INFO: Created: latency-svc-8x7rn Oct 30 01:20:31.585: INFO: Got endpoints: latency-svc-8x7rn [45.355216ms] Oct 30 01:20:31.586: INFO: Created: latency-svc-mzmhc Oct 30 01:20:31.589: INFO: Created: latency-svc-5dn2n Oct 30 01:20:31.589: INFO: Got endpoints: latency-svc-mzmhc [45.850157ms] Oct 30 01:20:31.591: INFO: Got endpoints: latency-svc-5dn2n [45.045033ms] Oct 30 01:20:31.591: INFO: Created: latency-svc-pvtwg Oct 30 01:20:31.598: INFO: Got endpoints: latency-svc-pvtwg [49.948004ms] Oct 30 01:20:31.598: INFO: Created: latency-svc-bf7bh Oct 30 01:20:31.602: INFO: Created: latency-svc-2tx65 Oct 30 01:20:31.613: INFO: Got endpoints: latency-svc-bf7bh [61.854274ms] Oct 30 01:20:31.617: INFO: Created: latency-svc-4886t Oct 30 01:20:31.618: INFO: Created: latency-svc-4pxjk Oct 30 01:20:31.620: INFO: Created: latency-svc-wfths Oct 30 01:20:31.623: INFO: Created: latency-svc-dj6h7 Oct 30 01:20:31.625: INFO: Created: latency-svc-dvvvl Oct 30 01:20:31.628: INFO: Created: latency-svc-2pqkq Oct 30 01:20:31.631: INFO: Created: latency-svc-84b6n Oct 30 01:20:31.633: INFO: Created: latency-svc-hgkmg Oct 30 01:20:31.636: INFO: Created: latency-svc-x7f8k Oct 30 01:20:31.639: INFO: Created: latency-svc-sb5vk Oct 30 01:20:31.641: INFO: Got endpoints: latency-svc-2tx65 [86.083817ms] Oct 30 01:20:31.641: INFO: Created: latency-svc-ph7zk Oct 30 01:20:31.644: INFO: Created: latency-svc-jjpqs Oct 30 01:20:31.649: INFO: Created: latency-svc-8tbg9 Oct 30 01:20:31.652: INFO: 
Created: latency-svc-cbcpf Oct 30 01:20:31.654: INFO: Created: latency-svc-kp28t Oct 30 01:20:31.690: INFO: Got endpoints: latency-svc-4886t [132.785662ms] Oct 30 01:20:31.696: INFO: Created: latency-svc-qrcvp Oct 30 01:20:31.741: INFO: Got endpoints: latency-svc-4pxjk [180.368233ms] Oct 30 01:20:31.746: INFO: Created: latency-svc-6q2pz Oct 30 01:20:31.790: INFO: Got endpoints: latency-svc-wfths [226.424874ms] Oct 30 01:20:31.795: INFO: Created: latency-svc-vcmb9 Oct 30 01:20:31.839: INFO: Got endpoints: latency-svc-dj6h7 [272.852323ms] Oct 30 01:20:31.845: INFO: Created: latency-svc-94phf Oct 30 01:20:31.891: INFO: Got endpoints: latency-svc-dvvvl [321.551762ms] Oct 30 01:20:31.897: INFO: Created: latency-svc-cznrh Oct 30 01:20:31.941: INFO: Got endpoints: latency-svc-2pqkq [369.137176ms] Oct 30 01:20:31.947: INFO: Created: latency-svc-xrrq4 Oct 30 01:20:31.991: INFO: Got endpoints: latency-svc-84b6n [415.696963ms] Oct 30 01:20:31.998: INFO: Created: latency-svc-qhmdw Oct 30 01:20:32.041: INFO: Got endpoints: latency-svc-hgkmg [461.232567ms] Oct 30 01:20:32.046: INFO: Created: latency-svc-ng6qf Oct 30 01:20:32.089: INFO: Got endpoints: latency-svc-x7f8k [506.881557ms] Oct 30 01:20:32.094: INFO: Created: latency-svc-jtc28 Oct 30 01:20:32.140: INFO: Got endpoints: latency-svc-sb5vk [555.187094ms] Oct 30 01:20:32.146: INFO: Created: latency-svc-sjkcp Oct 30 01:20:32.190: INFO: Got endpoints: latency-svc-ph7zk [601.395139ms] Oct 30 01:20:32.195: INFO: Created: latency-svc-4tb5h Oct 30 01:20:32.274: INFO: Got endpoints: latency-svc-jjpqs [682.490288ms] Oct 30 01:20:32.279: INFO: Created: latency-svc-xh86x Oct 30 01:20:32.290: INFO: Got endpoints: latency-svc-8tbg9 [691.380969ms] Oct 30 01:20:32.295: INFO: Created: latency-svc-2cbgn Oct 30 01:20:32.340: INFO: Got endpoints: latency-svc-cbcpf [726.589751ms] Oct 30 01:20:32.345: INFO: Created: latency-svc-kw9r4 Oct 30 01:20:32.392: INFO: Got endpoints: latency-svc-kp28t [750.889305ms] Oct 30 01:20:32.401: INFO: Created: latency-svc-cxhmm Oct 30 01:20:32.440: INFO: Got endpoints: latency-svc-qrcvp [749.589987ms] Oct 30 01:20:32.444: INFO: Created: latency-svc-q5tbj Oct 30 01:20:32.490: INFO: Got endpoints: latency-svc-6q2pz [749.488527ms] Oct 30 01:20:32.495: INFO: Created: latency-svc-xv4m8 Oct 30 01:20:32.540: INFO: Got endpoints: latency-svc-vcmb9 [750.481782ms] Oct 30 01:20:32.546: INFO: Created: latency-svc-cksc6 Oct 30 01:20:32.590: INFO: Got endpoints: latency-svc-94phf [750.320643ms] Oct 30 01:20:32.595: INFO: Created: latency-svc-crkn2 Oct 30 01:20:32.639: INFO: Got endpoints: latency-svc-cznrh [748.338552ms] Oct 30 01:20:32.645: INFO: Created: latency-svc-vb6mj Oct 30 01:20:32.691: INFO: Got endpoints: latency-svc-xrrq4 [749.864237ms] Oct 30 01:20:32.696: INFO: Created: latency-svc-shsvr Oct 30 01:20:32.740: INFO: Got endpoints: latency-svc-qhmdw [748.831775ms] Oct 30 01:20:32.746: INFO: Created: latency-svc-v8pd5 Oct 30 01:20:32.790: INFO: Got endpoints: latency-svc-ng6qf [749.677117ms] Oct 30 01:20:32.796: INFO: Created: latency-svc-24gjk Oct 30 01:20:32.840: INFO: Got endpoints: latency-svc-jtc28 [750.734624ms] Oct 30 01:20:32.845: INFO: Created: latency-svc-gnwg9 Oct 30 01:20:32.889: INFO: Got endpoints: latency-svc-sjkcp [748.782379ms] Oct 30 01:20:32.895: INFO: Created: latency-svc-mfb5s Oct 30 01:20:32.939: INFO: Got endpoints: latency-svc-4tb5h [749.11735ms] Oct 30 01:20:32.945: INFO: Created: latency-svc-v4rqb Oct 30 01:20:32.990: INFO: Got endpoints: latency-svc-xh86x [715.836481ms] Oct 30 01:20:32.995: INFO: Created: 
latency-svc-gxq4z Oct 30 01:20:33.039: INFO: Got endpoints: latency-svc-2cbgn [749.559959ms] Oct 30 01:20:33.046: INFO: Created: latency-svc-nt7br Oct 30 01:20:33.090: INFO: Got endpoints: latency-svc-kw9r4 [749.713861ms] Oct 30 01:20:33.095: INFO: Created: latency-svc-4mbjz Oct 30 01:20:33.140: INFO: Got endpoints: latency-svc-cxhmm [748.127782ms] Oct 30 01:20:33.146: INFO: Created: latency-svc-szd6s Oct 30 01:20:33.191: INFO: Got endpoints: latency-svc-q5tbj [751.007519ms] Oct 30 01:20:33.197: INFO: Created: latency-svc-wmbp6 Oct 30 01:20:33.240: INFO: Got endpoints: latency-svc-xv4m8 [749.119764ms] Oct 30 01:20:33.246: INFO: Created: latency-svc-zfb5k Oct 30 01:20:33.290: INFO: Got endpoints: latency-svc-cksc6 [749.560031ms] Oct 30 01:20:33.295: INFO: Created: latency-svc-92xh4 Oct 30 01:20:33.340: INFO: Got endpoints: latency-svc-crkn2 [750.233681ms] Oct 30 01:20:33.346: INFO: Created: latency-svc-jnkj4 Oct 30 01:20:33.390: INFO: Got endpoints: latency-svc-vb6mj [750.228981ms] Oct 30 01:20:33.395: INFO: Created: latency-svc-m6gmv Oct 30 01:20:33.440: INFO: Got endpoints: latency-svc-shsvr [748.72434ms] Oct 30 01:20:33.445: INFO: Created: latency-svc-v6zmg Oct 30 01:20:33.490: INFO: Got endpoints: latency-svc-v8pd5 [750.185248ms] Oct 30 01:20:33.497: INFO: Created: latency-svc-c95bg Oct 30 01:20:33.541: INFO: Got endpoints: latency-svc-24gjk [750.07621ms] Oct 30 01:20:33.546: INFO: Created: latency-svc-gphrn Oct 30 01:20:33.591: INFO: Got endpoints: latency-svc-gnwg9 [750.568076ms] Oct 30 01:20:33.596: INFO: Created: latency-svc-65zz6 Oct 30 01:20:33.640: INFO: Got endpoints: latency-svc-mfb5s [750.982633ms] Oct 30 01:20:33.647: INFO: Created: latency-svc-8qtkr Oct 30 01:20:33.690: INFO: Got endpoints: latency-svc-v4rqb [750.885921ms] Oct 30 01:20:33.696: INFO: Created: latency-svc-mksf4 Oct 30 01:20:33.741: INFO: Got endpoints: latency-svc-gxq4z [751.239041ms] Oct 30 01:20:33.746: INFO: Created: latency-svc-jcvn8 Oct 30 01:20:33.790: INFO: Got endpoints: latency-svc-nt7br [750.529732ms] Oct 30 01:20:33.796: INFO: Created: latency-svc-p6hfn Oct 30 01:20:33.840: INFO: Got endpoints: latency-svc-4mbjz [750.316984ms] Oct 30 01:20:33.846: INFO: Created: latency-svc-zffgx Oct 30 01:20:33.891: INFO: Got endpoints: latency-svc-szd6s [751.281203ms] Oct 30 01:20:33.902: INFO: Created: latency-svc-wgjtv Oct 30 01:20:33.940: INFO: Got endpoints: latency-svc-wmbp6 [749.069396ms] Oct 30 01:20:33.947: INFO: Created: latency-svc-zt2hq Oct 30 01:20:33.990: INFO: Got endpoints: latency-svc-zfb5k [750.178634ms] Oct 30 01:20:33.995: INFO: Created: latency-svc-k9bts Oct 30 01:20:34.041: INFO: Got endpoints: latency-svc-92xh4 [750.756913ms] Oct 30 01:20:34.046: INFO: Created: latency-svc-g28h9 Oct 30 01:20:34.090: INFO: Got endpoints: latency-svc-jnkj4 [749.716589ms] Oct 30 01:20:34.095: INFO: Created: latency-svc-psw4l Oct 30 01:20:34.140: INFO: Got endpoints: latency-svc-m6gmv [749.92187ms] Oct 30 01:20:34.146: INFO: Created: latency-svc-ml5p2 Oct 30 01:20:34.191: INFO: Got endpoints: latency-svc-v6zmg [750.906991ms] Oct 30 01:20:34.196: INFO: Created: latency-svc-lxnqh Oct 30 01:20:34.241: INFO: Got endpoints: latency-svc-c95bg [750.973688ms] Oct 30 01:20:34.248: INFO: Created: latency-svc-vk9kv Oct 30 01:20:34.290: INFO: Got endpoints: latency-svc-gphrn [749.837998ms] Oct 30 01:20:34.297: INFO: Created: latency-svc-flq2d Oct 30 01:20:34.341: INFO: Got endpoints: latency-svc-65zz6 [750.755088ms] Oct 30 01:20:34.346: INFO: Created: latency-svc-w4xwt Oct 30 01:20:34.391: INFO: Got endpoints: 
latency-svc-8qtkr [750.146252ms] Oct 30 01:20:34.396: INFO: Created: latency-svc-dnnx4 Oct 30 01:20:34.441: INFO: Got endpoints: latency-svc-mksf4 [750.078113ms] Oct 30 01:20:34.445: INFO: Created: latency-svc-llt2r Oct 30 01:20:34.490: INFO: Got endpoints: latency-svc-jcvn8 [749.515021ms] Oct 30 01:20:34.495: INFO: Created: latency-svc-87pjj Oct 30 01:20:34.539: INFO: Got endpoints: latency-svc-p6hfn [749.389728ms] Oct 30 01:20:34.545: INFO: Created: latency-svc-j5xbk Oct 30 01:20:34.591: INFO: Got endpoints: latency-svc-zffgx [751.002635ms] Oct 30 01:20:34.597: INFO: Created: latency-svc-g2fb7 Oct 30 01:20:34.640: INFO: Got endpoints: latency-svc-wgjtv [748.806237ms] Oct 30 01:20:34.647: INFO: Created: latency-svc-vfwbd Oct 30 01:20:34.691: INFO: Got endpoints: latency-svc-zt2hq [751.261349ms] Oct 30 01:20:34.697: INFO: Created: latency-svc-cwv2w Oct 30 01:20:34.740: INFO: Got endpoints: latency-svc-k9bts [750.246648ms] Oct 30 01:20:34.745: INFO: Created: latency-svc-fspvs Oct 30 01:20:34.790: INFO: Got endpoints: latency-svc-g28h9 [749.256747ms] Oct 30 01:20:34.795: INFO: Created: latency-svc-kgx85 Oct 30 01:20:34.843: INFO: Got endpoints: latency-svc-psw4l [752.600673ms] Oct 30 01:20:34.848: INFO: Created: latency-svc-sqt9b Oct 30 01:20:34.890: INFO: Got endpoints: latency-svc-ml5p2 [750.567453ms] Oct 30 01:20:34.896: INFO: Created: latency-svc-qvkdj Oct 30 01:20:34.941: INFO: Got endpoints: latency-svc-lxnqh [749.895769ms] Oct 30 01:20:34.946: INFO: Created: latency-svc-wlkcj Oct 30 01:20:34.990: INFO: Got endpoints: latency-svc-vk9kv [749.151733ms] Oct 30 01:20:34.996: INFO: Created: latency-svc-vstfs Oct 30 01:20:35.041: INFO: Got endpoints: latency-svc-flq2d [750.444969ms] Oct 30 01:20:35.047: INFO: Created: latency-svc-86b82 Oct 30 01:20:35.090: INFO: Got endpoints: latency-svc-w4xwt [749.05451ms] Oct 30 01:20:35.096: INFO: Created: latency-svc-hzq4c Oct 30 01:20:35.140: INFO: Got endpoints: latency-svc-dnnx4 [749.521365ms] Oct 30 01:20:35.146: INFO: Created: latency-svc-782tx Oct 30 01:20:35.189: INFO: Got endpoints: latency-svc-llt2r [748.773125ms] Oct 30 01:20:35.194: INFO: Created: latency-svc-j2n4n Oct 30 01:20:35.240: INFO: Got endpoints: latency-svc-87pjj [749.342306ms] Oct 30 01:20:35.246: INFO: Created: latency-svc-zgfsn Oct 30 01:20:35.291: INFO: Got endpoints: latency-svc-j5xbk [751.43271ms] Oct 30 01:20:35.296: INFO: Created: latency-svc-pjl52 Oct 30 01:20:35.341: INFO: Got endpoints: latency-svc-g2fb7 [749.595509ms] Oct 30 01:20:35.347: INFO: Created: latency-svc-znrg5 Oct 30 01:20:35.391: INFO: Got endpoints: latency-svc-vfwbd [750.879565ms] Oct 30 01:20:35.397: INFO: Created: latency-svc-rzxxb Oct 30 01:20:35.441: INFO: Got endpoints: latency-svc-cwv2w [749.642764ms] Oct 30 01:20:35.446: INFO: Created: latency-svc-wl8rk Oct 30 01:20:35.491: INFO: Got endpoints: latency-svc-fspvs [751.020428ms] Oct 30 01:20:35.496: INFO: Created: latency-svc-hh645 Oct 30 01:20:35.541: INFO: Got endpoints: latency-svc-kgx85 [750.622117ms] Oct 30 01:20:35.547: INFO: Created: latency-svc-q66v5 Oct 30 01:20:35.591: INFO: Got endpoints: latency-svc-sqt9b [748.639897ms] Oct 30 01:20:35.597: INFO: Created: latency-svc-wd5td Oct 30 01:20:35.641: INFO: Got endpoints: latency-svc-qvkdj [750.165515ms] Oct 30 01:20:35.646: INFO: Created: latency-svc-sxk8l Oct 30 01:20:35.691: INFO: Got endpoints: latency-svc-wlkcj [749.765975ms] Oct 30 01:20:35.696: INFO: Created: latency-svc-2rs8n Oct 30 01:20:35.740: INFO: Got endpoints: latency-svc-vstfs [749.714817ms] Oct 30 01:20:35.745: INFO: Created: 
latency-svc-5l4g6 Oct 30 01:20:35.790: INFO: Got endpoints: latency-svc-86b82 [749.316295ms] Oct 30 01:20:35.796: INFO: Created: latency-svc-kq845 Oct 30 01:20:35.840: INFO: Got endpoints: latency-svc-hzq4c [749.970651ms] Oct 30 01:20:35.846: INFO: Created: latency-svc-vslp9 Oct 30 01:20:35.890: INFO: Got endpoints: latency-svc-782tx [750.18577ms] Oct 30 01:20:35.898: INFO: Created: latency-svc-gfnpr Oct 30 01:20:35.940: INFO: Got endpoints: latency-svc-j2n4n [751.097644ms] Oct 30 01:20:35.947: INFO: Created: latency-svc-spzb4 Oct 30 01:20:35.990: INFO: Got endpoints: latency-svc-zgfsn [750.177931ms] Oct 30 01:20:35.995: INFO: Created: latency-svc-857gn Oct 30 01:20:36.040: INFO: Got endpoints: latency-svc-pjl52 [749.328919ms] Oct 30 01:20:36.047: INFO: Created: latency-svc-cg8m4 Oct 30 01:20:36.090: INFO: Got endpoints: latency-svc-znrg5 [749.007473ms] Oct 30 01:20:36.097: INFO: Created: latency-svc-bs28v Oct 30 01:20:36.141: INFO: Got endpoints: latency-svc-rzxxb [749.854956ms] Oct 30 01:20:36.147: INFO: Created: latency-svc-mtbkd Oct 30 01:20:36.190: INFO: Got endpoints: latency-svc-wl8rk [749.392891ms] Oct 30 01:20:36.197: INFO: Created: latency-svc-8cr42 Oct 30 01:20:36.241: INFO: Got endpoints: latency-svc-hh645 [749.450379ms] Oct 30 01:20:36.246: INFO: Created: latency-svc-6495h Oct 30 01:20:36.290: INFO: Got endpoints: latency-svc-q66v5 [749.131627ms] Oct 30 01:20:36.295: INFO: Created: latency-svc-nb8p8 Oct 30 01:20:36.340: INFO: Got endpoints: latency-svc-wd5td [748.907174ms] Oct 30 01:20:36.348: INFO: Created: latency-svc-r6plq Oct 30 01:20:36.390: INFO: Got endpoints: latency-svc-sxk8l [749.601602ms] Oct 30 01:20:36.395: INFO: Created: latency-svc-7pk8m Oct 30 01:20:36.440: INFO: Got endpoints: latency-svc-2rs8n [749.776246ms] Oct 30 01:20:36.445: INFO: Created: latency-svc-b5jtp Oct 30 01:20:36.490: INFO: Got endpoints: latency-svc-5l4g6 [750.078724ms] Oct 30 01:20:36.495: INFO: Created: latency-svc-hz4v2 Oct 30 01:20:36.540: INFO: Got endpoints: latency-svc-kq845 [749.837799ms] Oct 30 01:20:36.546: INFO: Created: latency-svc-n8sjg Oct 30 01:20:36.590: INFO: Got endpoints: latency-svc-vslp9 [749.130352ms] Oct 30 01:20:36.595: INFO: Created: latency-svc-lpgql Oct 30 01:20:36.641: INFO: Got endpoints: latency-svc-gfnpr [750.358191ms] Oct 30 01:20:36.646: INFO: Created: latency-svc-9s8wp Oct 30 01:20:36.690: INFO: Got endpoints: latency-svc-spzb4 [749.162446ms] Oct 30 01:20:36.696: INFO: Created: latency-svc-6zs55 Oct 30 01:20:36.741: INFO: Got endpoints: latency-svc-857gn [750.368568ms] Oct 30 01:20:36.746: INFO: Created: latency-svc-dxmtk Oct 30 01:20:36.790: INFO: Got endpoints: latency-svc-cg8m4 [749.522466ms] Oct 30 01:20:36.795: INFO: Created: latency-svc-b2f2d Oct 30 01:20:36.840: INFO: Got endpoints: latency-svc-bs28v [749.646609ms] Oct 30 01:20:36.845: INFO: Created: latency-svc-tgckg Oct 30 01:20:36.890: INFO: Got endpoints: latency-svc-mtbkd [749.001146ms] Oct 30 01:20:36.896: INFO: Created: latency-svc-hblq7 Oct 30 01:20:36.941: INFO: Got endpoints: latency-svc-8cr42 [751.192543ms] Oct 30 01:20:36.946: INFO: Created: latency-svc-tnhhj Oct 30 01:20:36.991: INFO: Got endpoints: latency-svc-6495h [750.430424ms] Oct 30 01:20:36.996: INFO: Created: latency-svc-slswk Oct 30 01:20:37.041: INFO: Got endpoints: latency-svc-nb8p8 [751.198393ms] Oct 30 01:20:37.047: INFO: Created: latency-svc-jqpmz Oct 30 01:20:37.090: INFO: Got endpoints: latency-svc-r6plq [749.582329ms] Oct 30 01:20:37.096: INFO: Created: latency-svc-mv249 Oct 30 01:20:37.141: INFO: Got endpoints: 
latency-svc-7pk8m [750.614984ms] Oct 30 01:20:37.147: INFO: Created: latency-svc-55c7r Oct 30 01:20:37.190: INFO: Got endpoints: latency-svc-b5jtp [749.571101ms] Oct 30 01:20:37.195: INFO: Created: latency-svc-sfmpp Oct 30 01:20:37.241: INFO: Got endpoints: latency-svc-hz4v2 [750.180391ms] Oct 30 01:20:37.245: INFO: Created: latency-svc-jj7lv Oct 30 01:20:37.290: INFO: Got endpoints: latency-svc-n8sjg [749.882999ms] Oct 30 01:20:37.296: INFO: Created: latency-svc-jng7w Oct 30 01:20:37.340: INFO: Got endpoints: latency-svc-lpgql [750.024232ms] Oct 30 01:20:37.345: INFO: Created: latency-svc-gpr5w Oct 30 01:20:37.393: INFO: Got endpoints: latency-svc-9s8wp [751.781838ms] Oct 30 01:20:37.399: INFO: Created: latency-svc-2xzxv Oct 30 01:20:37.440: INFO: Got endpoints: latency-svc-6zs55 [750.447589ms] Oct 30 01:20:37.445: INFO: Created: latency-svc-5rkqm Oct 30 01:20:37.491: INFO: Got endpoints: latency-svc-dxmtk [750.520056ms] Oct 30 01:20:37.497: INFO: Created: latency-svc-9sd7s Oct 30 01:20:37.540: INFO: Got endpoints: latency-svc-b2f2d [750.29363ms] Oct 30 01:20:37.546: INFO: Created: latency-svc-xmg7p Oct 30 01:20:37.591: INFO: Got endpoints: latency-svc-tgckg [751.022949ms] Oct 30 01:20:37.597: INFO: Created: latency-svc-z5fcp Oct 30 01:20:37.640: INFO: Got endpoints: latency-svc-hblq7 [749.525401ms] Oct 30 01:20:37.646: INFO: Created: latency-svc-vsqd4 Oct 30 01:20:37.693: INFO: Got endpoints: latency-svc-tnhhj [751.240286ms] Oct 30 01:20:37.699: INFO: Created: latency-svc-mtxl6 Oct 30 01:20:37.741: INFO: Got endpoints: latency-svc-slswk [749.529211ms] Oct 30 01:20:37.746: INFO: Created: latency-svc-gd5hd Oct 30 01:20:37.790: INFO: Got endpoints: latency-svc-jqpmz [749.149282ms] Oct 30 01:20:37.796: INFO: Created: latency-svc-f2n54 Oct 30 01:20:37.840: INFO: Got endpoints: latency-svc-mv249 [750.589211ms] Oct 30 01:20:37.845: INFO: Created: latency-svc-wd8pz Oct 30 01:20:37.890: INFO: Got endpoints: latency-svc-55c7r [748.871434ms] Oct 30 01:20:37.895: INFO: Created: latency-svc-mlcmm Oct 30 01:20:37.940: INFO: Got endpoints: latency-svc-sfmpp [749.713355ms] Oct 30 01:20:37.945: INFO: Created: latency-svc-zsqx4 Oct 30 01:20:37.991: INFO: Got endpoints: latency-svc-jj7lv [750.42074ms] Oct 30 01:20:37.998: INFO: Created: latency-svc-qm2vb Oct 30 01:20:38.041: INFO: Got endpoints: latency-svc-jng7w [750.395085ms] Oct 30 01:20:38.046: INFO: Created: latency-svc-mjbsx Oct 30 01:20:38.091: INFO: Got endpoints: latency-svc-gpr5w [751.329937ms] Oct 30 01:20:38.097: INFO: Created: latency-svc-rz8sc Oct 30 01:20:38.140: INFO: Got endpoints: latency-svc-2xzxv [747.521217ms] Oct 30 01:20:38.145: INFO: Created: latency-svc-kpwv5 Oct 30 01:20:38.191: INFO: Got endpoints: latency-svc-5rkqm [750.489495ms] Oct 30 01:20:38.196: INFO: Created: latency-svc-w9dbh Oct 30 01:20:38.240: INFO: Got endpoints: latency-svc-9sd7s [748.921202ms] Oct 30 01:20:38.247: INFO: Created: latency-svc-tbg9t Oct 30 01:20:38.291: INFO: Got endpoints: latency-svc-xmg7p [750.629942ms] Oct 30 01:20:38.296: INFO: Created: latency-svc-7xdwj Oct 30 01:20:38.340: INFO: Got endpoints: latency-svc-z5fcp [749.444811ms] Oct 30 01:20:38.346: INFO: Created: latency-svc-jh4ww Oct 30 01:20:38.391: INFO: Got endpoints: latency-svc-vsqd4 [751.441336ms] Oct 30 01:20:38.396: INFO: Created: latency-svc-fcgqs Oct 30 01:20:38.440: INFO: Got endpoints: latency-svc-mtxl6 [747.306074ms] Oct 30 01:20:38.446: INFO: Created: latency-svc-7vnkn Oct 30 01:20:38.490: INFO: Got endpoints: latency-svc-gd5hd [749.427336ms] Oct 30 01:20:38.496: INFO: Created: 
latency-svc-52cmv Oct 30 01:20:38.541: INFO: Got endpoints: latency-svc-f2n54 [750.186727ms] Oct 30 01:20:38.547: INFO: Created: latency-svc-qwslm Oct 30 01:20:38.591: INFO: Got endpoints: latency-svc-wd8pz [750.178275ms] Oct 30 01:20:38.597: INFO: Created: latency-svc-44zj5 Oct 30 01:20:38.640: INFO: Got endpoints: latency-svc-mlcmm [750.59687ms] Oct 30 01:20:38.646: INFO: Created: latency-svc-wm7dz Oct 30 01:20:38.690: INFO: Got endpoints: latency-svc-zsqx4 [750.544204ms] Oct 30 01:20:38.697: INFO: Created: latency-svc-mb2np Oct 30 01:20:38.740: INFO: Got endpoints: latency-svc-qm2vb [749.147586ms] Oct 30 01:20:38.746: INFO: Created: latency-svc-k72c4 Oct 30 01:20:38.790: INFO: Got endpoints: latency-svc-mjbsx [749.667662ms] Oct 30 01:20:38.797: INFO: Created: latency-svc-np9xm Oct 30 01:20:38.841: INFO: Got endpoints: latency-svc-rz8sc [749.34519ms] Oct 30 01:20:38.846: INFO: Created: latency-svc-qbqkq Oct 30 01:20:38.891: INFO: Got endpoints: latency-svc-kpwv5 [750.791779ms] Oct 30 01:20:38.896: INFO: Created: latency-svc-v5dhx Oct 30 01:20:38.941: INFO: Got endpoints: latency-svc-w9dbh [750.245792ms] Oct 30 01:20:38.948: INFO: Created: latency-svc-bvzjx Oct 30 01:20:38.992: INFO: Got endpoints: latency-svc-tbg9t [751.708785ms] Oct 30 01:20:38.997: INFO: Created: latency-svc-cfnhz Oct 30 01:20:39.041: INFO: Got endpoints: latency-svc-7xdwj [750.201518ms] Oct 30 01:20:39.048: INFO: Created: latency-svc-p29qv Oct 30 01:20:39.092: INFO: Got endpoints: latency-svc-jh4ww [751.810357ms] Oct 30 01:20:39.098: INFO: Created: latency-svc-km8mc Oct 30 01:20:39.140: INFO: Got endpoints: latency-svc-fcgqs [748.864244ms] Oct 30 01:20:39.146: INFO: Created: latency-svc-xjrrl Oct 30 01:20:39.191: INFO: Got endpoints: latency-svc-7vnkn [750.463081ms] Oct 30 01:20:39.196: INFO: Created: latency-svc-5w9bb Oct 30 01:20:39.243: INFO: Got endpoints: latency-svc-52cmv [752.315894ms] Oct 30 01:20:39.249: INFO: Created: latency-svc-77bdj Oct 30 01:20:39.291: INFO: Got endpoints: latency-svc-qwslm [750.506391ms] Oct 30 01:20:39.298: INFO: Created: latency-svc-wtsd8 Oct 30 01:20:39.341: INFO: Got endpoints: latency-svc-44zj5 [750.005231ms] Oct 30 01:20:39.391: INFO: Got endpoints: latency-svc-wm7dz [750.394504ms] Oct 30 01:20:39.440: INFO: Got endpoints: latency-svc-mb2np [749.606486ms] Oct 30 01:20:39.490: INFO: Got endpoints: latency-svc-k72c4 [749.420911ms] Oct 30 01:20:39.540: INFO: Got endpoints: latency-svc-np9xm [750.044656ms] Oct 30 01:20:39.590: INFO: Got endpoints: latency-svc-qbqkq [749.672649ms] Oct 30 01:20:39.640: INFO: Got endpoints: latency-svc-v5dhx [748.708578ms] Oct 30 01:20:39.691: INFO: Got endpoints: latency-svc-bvzjx [749.627695ms] Oct 30 01:20:39.740: INFO: Got endpoints: latency-svc-cfnhz [748.185274ms] Oct 30 01:20:39.790: INFO: Got endpoints: latency-svc-p29qv [749.124024ms] Oct 30 01:20:39.841: INFO: Got endpoints: latency-svc-km8mc [748.857299ms] Oct 30 01:20:39.889: INFO: Got endpoints: latency-svc-xjrrl [749.28853ms] Oct 30 01:20:39.941: INFO: Got endpoints: latency-svc-5w9bb [750.306062ms] Oct 30 01:20:39.991: INFO: Got endpoints: latency-svc-77bdj [748.519261ms] Oct 30 01:20:40.040: INFO: Got endpoints: latency-svc-wtsd8 [749.119429ms] Oct 30 01:20:40.040: INFO: Latencies: [8.331396ms 18.10736ms 21.570582ms 23.475894ms 25.563107ms 27.989906ms 29.885236ms 36.800183ms 38.939163ms 41.87776ms 42.418871ms 42.750468ms 43.000952ms 43.383968ms 43.56038ms 43.693227ms 44.486364ms 44.60662ms 44.797501ms 44.80241ms 44.885399ms 45.045033ms 45.355216ms 45.850157ms 47.34857ms 49.948004ms 
50.722914ms 53.111387ms 55.228124ms 58.430684ms 61.854274ms 86.083817ms 132.785662ms 180.368233ms 226.424874ms 272.852323ms 321.551762ms 369.137176ms 415.696963ms 461.232567ms 506.881557ms 555.187094ms 601.395139ms 682.490288ms 691.380969ms 715.836481ms 726.589751ms 747.306074ms 747.521217ms 748.127782ms 748.185274ms 748.338552ms 748.519261ms 748.639897ms 748.708578ms 748.72434ms 748.773125ms 748.782379ms 748.806237ms 748.831775ms 748.857299ms 748.864244ms 748.871434ms 748.907174ms 748.921202ms 749.001146ms 749.007473ms 749.05451ms 749.069396ms 749.11735ms 749.119429ms 749.119764ms 749.124024ms 749.130352ms 749.131627ms 749.147586ms 749.149282ms 749.151733ms 749.162446ms 749.256747ms 749.28853ms 749.316295ms 749.328919ms 749.342306ms 749.34519ms 749.389728ms 749.392891ms 749.420911ms 749.427336ms 749.444811ms 749.450379ms 749.488527ms 749.515021ms 749.521365ms 749.522466ms 749.525401ms 749.529211ms 749.559959ms 749.560031ms 749.571101ms 749.582329ms 749.589987ms 749.595509ms 749.601602ms 749.606486ms 749.627695ms 749.642764ms 749.646609ms 749.667662ms 749.672649ms 749.677117ms 749.713355ms 749.713861ms 749.714817ms 749.716589ms 749.765975ms 749.776246ms 749.837799ms 749.837998ms 749.854956ms 749.864237ms 749.882999ms 749.895769ms 749.92187ms 749.970651ms 750.005231ms 750.024232ms 750.044656ms 750.07621ms 750.078113ms 750.078724ms 750.146252ms 750.165515ms 750.177931ms 750.178275ms 750.178634ms 750.180391ms 750.185248ms 750.18577ms 750.186727ms 750.201518ms 750.228981ms 750.233681ms 750.245792ms 750.246648ms 750.29363ms 750.306062ms 750.316984ms 750.320643ms 750.358191ms 750.368568ms 750.394504ms 750.395085ms 750.42074ms 750.430424ms 750.444969ms 750.447589ms 750.463081ms 750.481782ms 750.489495ms 750.506391ms 750.520056ms 750.529732ms 750.544204ms 750.567453ms 750.568076ms 750.589211ms 750.59687ms 750.614984ms 750.622117ms 750.629942ms 750.734624ms 750.755088ms 750.756913ms 750.791779ms 750.879565ms 750.885921ms 750.889305ms 750.906991ms 750.973688ms 750.982633ms 751.002635ms 751.007519ms 751.020428ms 751.022949ms 751.097644ms 751.192543ms 751.198393ms 751.239041ms 751.240286ms 751.261349ms 751.281203ms 751.329937ms 751.43271ms 751.441336ms 751.708785ms 751.781838ms 751.810357ms 752.315894ms 752.600673ms] Oct 30 01:20:40.041: INFO: 50 %ile: 749.582329ms Oct 30 01:20:40.041: INFO: 90 %ile: 750.982633ms Oct 30 01:20:40.041: INFO: 99 %ile: 752.315894ms Oct 30 01:20:40.041: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:20:40.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-7566" for this suite. 
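The svc-latency measurements above follow a simple pattern: the test creates 200 services against the same backend and, for each one, logs "Created" when the Service object exists and "Got endpoints" with the elapsed time once its Endpoints object is observed; the 50/90/99 %ile figures are then read from the sorted sample of 200 durations. A minimal sketch of that percentile arithmetic in shell, assuming a hypothetical latencies.txt holding one numeric duration per line (the file name and layout are illustrative, not from the suite):

    # nearest-rank percentiles over a sorted sample
    sort -n latencies.txt | awk '{ a[NR] = $1 }
      END {
        printf "50 %%ile: %s\n", a[int(NR*0.50 + 0.5)]
        printf "90 %%ile: %s\n", a[int(NR*0.90 + 0.5)]
        printf "99 %%ile: %s\n", a[int(NR*0.99 + 0.5)]
      }'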
• [SLOW TEST:12.776 seconds] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":-1,"completed":47,"skipped":831,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:20:40.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:20:40.136: INFO: The status of Pod busybox-host-aliases7f1cce87-c259-4b01-9f2c-c26fb30cf73f is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:20:42.139: INFO: The status of Pod busybox-host-aliases7f1cce87-c259-4b01-9f2c-c26fb30cf73f is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:20:44.141: INFO: The status of Pod busybox-host-aliases7f1cce87-c259-4b01-9f2c-c26fb30cf73f is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:20:46.139: INFO: The status of Pod busybox-host-aliases7f1cce87-c259-4b01-9f2c-c26fb30cf73f is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:20:46.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5333" for this suite. 
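The Kubelet hostAliases test above only shows the wait for the busybox pod to reach Running; the substance of the check is that entries from pod.spec.hostAliases are rendered by the kubelet into the container's /etc/hosts. A minimal reproduction, where the pod name, IP, and hostnames are illustrative rather than taken from the log:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: hostaliases-demo          # illustrative name
    spec:
      restartPolicy: Never
      hostAliases:                    # the field the test exercises
      - ip: "127.0.0.1"
        hostnames: ["foo.local", "bar.local"]
      containers:
      - name: busybox
        image: busybox
        command: ["sh", "-c", "cat /etc/hosts"]
    EOF
    # after the pod runs, its log should include a kubelet-managed line:
    #   127.0.0.1  foo.local  bar.local
    kubectl logs hostaliases-demo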
• [SLOW TEST:6.050 seconds] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when scheduling a busybox Pod with hostAliases /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:137 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":48,"skipped":854,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:20:08.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Oct 30 01:20:08.038: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Oct 30 01:20:26.385: INFO: >>> kubeConfig: /root/.kube/config Oct 30 01:20:34.942: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:20:53.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7140" for this suite. 
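The crd-publish-openapi case above checks the two shapes called out in its STEP lines: one CRD serving two versions of the same group, and two single-version CRDs in the same group, with both layouts expected to surface in the cluster's OpenAPI document. A sketch of the multi-version shape, using an assumed group and kind purely for illustration:

    kubectl apply -f - <<'EOF'
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: widgets.demo.example.com   # must be <plural>.<group>
    spec:
      group: demo.example.com
      scope: Namespaced
      names:
        plural: widgets
        singular: widget
        kind: Widget
      versions:                        # same group, two served versions
      - name: v1
        served: true
        storage: true                  # exactly one version stores objects
        schema:
          openAPIV3Schema:
            type: object
            x-kubernetes-preserve-unknown-fields: true
      - name: v2
        served: true
        storage: false
        schema:
          openAPIV3Schema:
            type: object
            x-kubernetes-preserve-unknown-fields: true
    EOF
    # both versions should then be published; OpenAPI definition names
    # reverse the group segments, e.g. com.example.demo.v1.Widget:
    kubectl get --raw /openapi/v2 | grep -o 'com.example.demo.v[12].Widget' | sort -u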
• [SLOW TEST:45.836 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":33,"skipped":570,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:20:46.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating Agnhost RC Oct 30 01:20:46.296: INFO: namespace kubectl-6248 Oct 30 01:20:46.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6248 create -f -' Oct 30 01:20:46.673: INFO: stderr: "" Oct 30 01:20:46.673: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Oct 30 01:20:47.677: INFO: Selector matched 1 pods for map[app:agnhost] Oct 30 01:20:47.677: INFO: Found 0 / 1 Oct 30 01:20:48.676: INFO: Selector matched 1 pods for map[app:agnhost] Oct 30 01:20:48.676: INFO: Found 0 / 1 Oct 30 01:20:49.675: INFO: Selector matched 1 pods for map[app:agnhost] Oct 30 01:20:49.675: INFO: Found 0 / 1 Oct 30 01:20:50.677: INFO: Selector matched 1 pods for map[app:agnhost] Oct 30 01:20:50.677: INFO: Found 0 / 1 Oct 30 01:20:51.678: INFO: Selector matched 1 pods for map[app:agnhost] Oct 30 01:20:51.678: INFO: Found 0 / 1 Oct 30 01:20:52.677: INFO: Selector matched 1 pods for map[app:agnhost] Oct 30 01:20:52.678: INFO: Found 1 / 1 Oct 30 01:20:52.678: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Oct 30 01:20:52.680: INFO: Selector matched 1 pods for map[app:agnhost] Oct 30 01:20:52.680: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Oct 30 01:20:52.680: INFO: wait on agnhost-primary startup in kubectl-6248 Oct 30 01:20:52.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6248 logs agnhost-primary-bbpws agnhost-primary' Oct 30 01:20:53.768: INFO: stderr: "" Oct 30 01:20:53.768: INFO: stdout: "Paused\n" STEP: exposing RC Oct 30 01:20:53.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6248 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' Oct 30 01:20:53.976: INFO: stderr: "" Oct 30 01:20:53.976: INFO: stdout: "service/rm2 exposed\n" Oct 30 01:20:53.978: INFO: Service rm2 in namespace kubectl-6248 found. 
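The two expose calls in this Kubectl test are worth reading together: the one above wraps the replication controller's label selector in a new Service (rm2), and the next step below exposes that Service again under a different name and port (rm3); both keep --target-port=6379, so both services land on the same container port. Reproduced from the log, minus the --kubeconfig plumbing:

    kubectl --namespace=kubectl-6248 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379
    kubectl --namespace=kubectl-6248 expose service rm2 --name=rm3 --port=2345 --target-port=6379
    # both services should resolve to the same pod endpoint:
    kubectl --namespace=kubectl-6248 get endpoints rm2 rm3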
STEP: exposing service Oct 30 01:20:55.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6248 expose service rm2 --name=rm3 --port=2345 --target-port=6379' Oct 30 01:20:56.186: INFO: stderr: "" Oct 30 01:20:56.186: INFO: stdout: "service/rm3 exposed\n" Oct 30 01:20:56.189: INFO: Service rm3 in namespace kubectl-6248 found. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:20:58.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6248" for this suite. • [SLOW TEST:11.929 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1223 should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":-1,"completed":49,"skipped":925,"failed":0} S ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:19:50.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W1030 01:19:50.911798 32 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should schedule multiple jobs concurrently [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a cronjob STEP: Ensuring more than one job is running at a time STEP: Ensuring at least two running jobs exists by listing jobs explicitly STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:21:00.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-3387" for this suite. 
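The CronJob case above asserts concurrency: with a one-minute schedule and jobs that run longer than a minute, at least two jobs must be active at once, which is the behavior of concurrencyPolicy: Allow. The warning in the log also notes that batch/v1beta1 is deprecated in favor of batch/v1; a minimal batch/v1 equivalent, where the object names and sleep duration are illustrative:

    kubectl apply -f - <<'EOF'
    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: concurrent-demo            # illustrative
    spec:
      schedule: "*/1 * * * *"
      concurrencyPolicy: Allow         # the default, spelled out for clarity
      jobTemplate:
        spec:
          template:
            spec:
              restartPolicy: Never
              containers:
              - name: sleeper
                image: busybox
                command: ["sh", "-c", "sleep 300"]   # outlive the schedule interval
    EOF
    # after two minutes or so, multiple jobs should be active concurrently:
    kubectl get jobs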
• [SLOW TEST:70.045 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should schedule multiple jobs concurrently [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":-1,"completed":34,"skipped":591,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:20:39.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 30 01:20:39.854: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 30 01:20:41.864: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153639, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153639, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153639, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153639, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:20:43.869: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153639, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153639, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153639, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153639, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Oct 30 01:20:45.868: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153639, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153639, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153639, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153639, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 30 01:20:48.874: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Oct 30 01:20:49.875: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Oct 30 01:20:50.873: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Oct 30 01:20:51.874: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Oct 30 01:20:52.873: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Oct 30 01:20:53.874: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Oct 30 01:20:54.874: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:20:54.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-637-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:21:02.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7854" for this suite. STEP: Destroying namespace "webhook-7854-markers" for this suite. 
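The registration STEP above ("Registering the mutating webhook for custom resource ... via the AdmissionRegistration API") is performed programmatically by the e2e framework; a hand-written equivalent would look roughly like the following, where the configuration name, service path, and operations are assumptions, and only the namespace, service name, and CRD group/resource come from the log:

    kubectl apply -f - <<'EOF'
    apiVersion: admissionregistration.k8s.io/v1
    kind: MutatingWebhookConfiguration
    metadata:
      name: demo-crd-mutator                 # illustrative
    webhooks:
    - name: crd-mutator.webhook.example.com
      admissionReviewVersions: ["v1"]
      sideEffects: None
      failurePolicy: Fail
      clientConfig:
        service:
          namespace: webhook-7854            # from the log
          name: e2e-test-webhook             # from the log
          path: /mutating-custom-resource    # assumed path
        # caBundle omitted; the framework injects its generated CA here
      rules:
      - apiGroups: ["webhook.example.com"]
        apiVersions: ["*"]
        operations: ["CREATE"]
        resources: ["e2e-test-webhook-637-crds"]
    EOF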
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:23.495 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":16,"skipped":172,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:21:01.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-upd-48ebaf40-4865-4a16-8e11-e7d07e3905ae STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:21:05.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7259" for this suite. 
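The ConfigMap binary-data case above distinguishes the two payload fields a ConfigMap carries: data, which must be UTF-8 text, and binaryData, which holds base64-encoded bytes; the test mounts both into a pod volume and waits until each is readable. A minimal sketch with assumed names and payload:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: binary-demo                # illustrative
    data:
      text: "some text"
    binaryData:
      blob: 3q2+7w==                   # base64 for the bytes de ad be ef
    EOF
    # round-trip check: decode the binary key back to its raw bytes
    kubectl get configmap binary-demo -o jsonpath='{.binaryData.blob}' | base64 -d | od -An -tx1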
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":666,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:18:47.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service externalname-service with the type=ExternalName in namespace services-3756 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-3756 I1030 01:18:47.268906 39 runners.go:190] Created replication controller with name: externalname-service, namespace: services-3756, replica count: 2 I1030 01:18:50.321947 39 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1030 01:18:53.322752 39 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1030 01:18:56.323656 39 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 30 01:18:56.323: INFO: Creating new exec pod Oct 30 01:19:05.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Oct 30 01:19:05.591: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Oct 30 01:19:05.591: INFO: stdout: "externalname-service-tp7ph" Oct 30 01:19:05.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.57.109 80' Oct 30 01:19:05.855: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.57.109 80\nConnection to 10.233.57.109 80 port [tcp/http] succeeded!\n" Oct 30 01:19:05.855: INFO: stdout: "externalname-service-tp7ph" Oct 30 01:19:05.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:19:06.091: INFO: rc: 1 Oct 30 01:19:06.091: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) 
failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:19:07.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:19:07.667: INFO: rc: 1 Oct 30 01:19:07.667: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:19:08.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:19:08.397: INFO: rc: 1 Oct 30 01:19:08.397: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:19:09.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:19:09.349: INFO: rc: 1 Oct 30 01:19:09.349: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:19:10.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:19:10.414: INFO: rc: 1 Oct 30 01:19:10.414: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:19:11.092 through 01:19:46.643: INFO: The same probe was retried roughly once per second; every attempt in this window failed identically ("nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused", command terminated with exit code 1, "Retrying..."). One attempt, issued at 01:19:38.092, did not return its failure until 01:19:42.540.
Oct 30 01:19:47.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:19:47.352: INFO: rc: 1 Oct 30 01:19:47.352: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:19:48.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:19:48.349: INFO: rc: 1 Oct 30 01:19:48.349: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:19:49.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:19:49.330: INFO: rc: 1 Oct 30 01:19:49.330: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:19:50.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:19:50.338: INFO: rc: 1 Oct 30 01:19:50.338: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:19:51.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:19:51.358: INFO: rc: 1 Oct 30 01:19:51.358: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:19:52.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:19:52.325: INFO: rc: 1 Oct 30 01:19:52.325: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:19:53.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:19:53.328: INFO: rc: 1 Oct 30 01:19:53.328: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:19:54.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:19:54.344: INFO: rc: 1 Oct 30 01:19:54.344: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:19:55.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:19:55.640: INFO: rc: 1 Oct 30 01:19:55.641: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:19:56.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:19:56.862: INFO: rc: 1 Oct 30 01:19:56.862: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:19:57.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:19:57.392: INFO: rc: 1 Oct 30 01:19:57.392: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:19:58.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:19:58.349: INFO: rc: 1 Oct 30 01:19:58.349: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:19:59.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:19:59.343: INFO: rc: 1 Oct 30 01:19:59.343: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:00.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:00.356: INFO: rc: 1 Oct 30 01:20:00.356: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:01.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:01.315: INFO: rc: 1 Oct 30 01:20:01.315: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:20:02.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:02.334: INFO: rc: 1 Oct 30 01:20:02.335: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:03.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:03.406: INFO: rc: 1 Oct 30 01:20:03.407: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:04.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:04.486: INFO: rc: 1 Oct 30 01:20:04.487: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:05.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:05.339: INFO: rc: 1 Oct 30 01:20:05.339: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:06.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:06.324: INFO: rc: 1 Oct 30 01:20:06.324: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:20:07.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:07.364: INFO: rc: 1 Oct 30 01:20:07.364: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:08.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:08.794: INFO: rc: 1 Oct 30 01:20:08.794: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:09.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:09.342: INFO: rc: 1 Oct 30 01:20:09.342: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:10.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:10.426: INFO: rc: 1 Oct 30 01:20:10.426: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:11.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:11.328: INFO: rc: 1 Oct 30 01:20:11.328: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:20:12.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:13.301: INFO: rc: 1 Oct 30 01:20:13.301: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:14.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:14.352: INFO: rc: 1 Oct 30 01:20:14.352: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:15.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:15.757: INFO: rc: 1 Oct 30 01:20:15.757: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:16.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:16.359: INFO: rc: 1 Oct 30 01:20:16.359: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:17.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:17.334: INFO: rc: 1 Oct 30 01:20:17.334: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:20:18.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:18.663: INFO: rc: 1 Oct 30 01:20:18.663: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:19.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:19.320: INFO: rc: 1 Oct 30 01:20:19.320: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:20.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:20.356: INFO: rc: 1 Oct 30 01:20:20.356: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:21.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:21.321: INFO: rc: 1 Oct 30 01:20:21.321: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:22.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:22.353: INFO: rc: 1 Oct 30 01:20:22.353: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:20:23.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:23.447: INFO: rc: 1 Oct 30 01:20:23.447: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:24.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:24.341: INFO: rc: 1 Oct 30 01:20:24.341: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:25.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:25.339: INFO: rc: 1 Oct 30 01:20:25.339: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:26.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:26.308: INFO: rc: 1 Oct 30 01:20:26.308: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:27.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:27.327: INFO: rc: 1 Oct 30 01:20:27.327: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:20:28.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:28.358: INFO: rc: 1 Oct 30 01:20:28.358: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:29.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:29.350: INFO: rc: 1 Oct 30 01:20:29.350: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:30.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:30.338: INFO: rc: 1 Oct 30 01:20:30.338: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31014 + echo hostName nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:31.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:31.485: INFO: rc: 1 Oct 30 01:20:31.485: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31014 + echo hostName nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:32.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:32.325: INFO: rc: 1 Oct 30 01:20:32.325: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:20:33.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:33.333: INFO: rc: 1 Oct 30 01:20:33.333: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:34.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:34.307: INFO: rc: 1 Oct 30 01:20:34.307: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:35.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:35.352: INFO: rc: 1 Oct 30 01:20:35.352: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:36.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:36.331: INFO: rc: 1 Oct 30 01:20:36.331: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:37.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:37.323: INFO: rc: 1 Oct 30 01:20:37.323: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:20:38.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:38.310: INFO: rc: 1 Oct 30 01:20:38.310: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:39.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:39.318: INFO: rc: 1 Oct 30 01:20:39.318: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:40.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:40.346: INFO: rc: 1 Oct 30 01:20:40.346: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:41.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:41.537: INFO: rc: 1 Oct 30 01:20:41.537: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:42.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:43.754: INFO: rc: 1 Oct 30 01:20:43.755: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:20:44.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:44.340: INFO: rc: 1 Oct 30 01:20:44.340: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:45.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:45.898: INFO: rc: 1 Oct 30 01:20:45.898: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:46.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:47.332: INFO: rc: 1 Oct 30 01:20:47.332: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:48.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:48.342: INFO: rc: 1 Oct 30 01:20:48.342: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:49.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:50.199: INFO: rc: 1 Oct 30 01:20:50.199: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:20:51.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:51.610: INFO: rc: 1 Oct 30 01:20:51.610: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:52.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:52.308: INFO: rc: 1 Oct 30 01:20:52.308: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:53.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:53.845: INFO: rc: 1 Oct 30 01:20:53.845: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:54.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:54.375: INFO: rc: 1 Oct 30 01:20:54.375: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:55.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:55.431: INFO: rc: 1 Oct 30 01:20:55.431: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:20:56.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:56.360: INFO: rc: 1 Oct 30 01:20:56.360: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:57.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:57.308: INFO: rc: 1 Oct 30 01:20:57.308: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:58.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:58.320: INFO: rc: 1 Oct 30 01:20:58.320: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:59.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:20:59.731: INFO: rc: 1 Oct 30 01:20:59.731: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:21:00.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:21:00.326: INFO: rc: 1 Oct 30 01:21:00.326: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:21:01.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:21:01.330: INFO: rc: 1 Oct 30 01:21:01.330: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:21:02.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:21:02.314: INFO: rc: 1 Oct 30 01:21:02.314: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:21:03.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:21:03.510: INFO: rc: 1 Oct 30 01:21:03.511: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:21:04.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:21:04.364: INFO: rc: 1 Oct 30 01:21:04.364: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:21:05.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014' Oct 30 01:21:05.619: INFO: rc: 1 Oct 30 01:21:05.619: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31014 nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
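The loop above is the suite's service-reachability probe: it shells out to kubectl, execs a short `echo | nc` pipeline inside the client pod, and retries about once per second until a two-minute deadline. A minimal standalone sketch of that pattern, assuming kubectl is on PATH and reusing the pod, namespace, and endpoint from this log (an illustration of the pattern, not the framework's own code):

// probe.go - retry an in-pod nc probe until a deadline, mirroring the log above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const (
		kubeconfig = "/root/.kube/config" // path the suite logs; adjust for your cluster
		namespace  = "services-3756"
		pod        = "execpod6mmlh"
		target     = "10.10.190.207 31014" // NodePort endpoint under test
		timeout    = 2 * time.Minute       // matches the 2m0s timeout reported in the failure below
	)

	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Same shell pipeline the framework logs: echo a payload into nc
		// with a 2-second connect timeout (-w 2).
		cmd := exec.Command("kubectl", "--kubeconfig="+kubeconfig, "--namespace="+namespace,
			"exec", pod, "--", "/bin/sh", "-x", "-c",
			"echo hostName | nc -v -t -w 2 "+target)
		out, err := cmd.CombinedOutput()
		if err == nil {
			fmt.Printf("service reachable:\n%s", out)
			return
		}
		fmt.Printf("probe failed (%v), retrying...\n", err)
		time.Sleep(1 * time.Second)
	}
	fmt.Println("service is not reachable within", timeout)
}

The last attempts and the resulting 2m0s timeout failure follow.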
Oct 30 01:21:06.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014'
Oct 30 01:21:06.387: INFO: rc: 1
Oct 30 01:21:06.387: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31014
nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 01:21:06.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014'
Oct 30 01:21:07.037: INFO: rc: 1
Oct 30 01:21:07.037: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3756 exec execpod6mmlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31014:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31014
nc: connect to 10.10.190.207 port 31014 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 01:21:07.038: FAIL: Unexpected error:
    <*errors.errorString | 0xc0043c25c0>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31014 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31014 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.15()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351 +0x358
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001555080)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001555080)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001555080, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
Oct 30 01:21:07.039: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
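Note that every attempt fails fast with "Connection refused" rather than hanging until nc's 2-second connect timeout: the node is actively resetting the connection. That typically points at the NodePort not being programmed by kube-proxy on 10.10.190.207 rather than at the backend pods, which the event and pod listings collected below show as scheduled, started, and Ready. A hedged diagnostic sketch that separates those two failure modes with a direct TCP dial (an assumption: it is run from a machine that can reach the node IP; the endpoint is taken from the failure message):

// dialcheck.go - distinguish "refused" (active RST, likely a missing
// NodePort rule or no listener) from "timeout" (packets silently dropped).
package main

import (
	"errors"
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	addr := "10.10.190.207:31014" // node IP and NodePort from the failure above
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err == nil {
		conn.Close()
		fmt.Println("connected: NodePort is programmed and a backend accepted")
		return
	}
	var nerr net.Error
	switch {
	case errors.As(err, &nerr) && nerr.Timeout():
		fmt.Println("timeout: packets likely dropped (firewall / routing)")
	default:
		// "connection refused" lands here: the node actively rejected the SYN.
		fmt.Println("refused or other error:", err)
	}
	os.Exit(1)
}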
Oct 30 01:21:07.054: INFO: At 2021-10-30 01:18:47 +0000 UTC - event for externalname-service: {replication-controller } SuccessfulCreate: Created pod: externalname-service-tp7ph
Oct 30 01:21:07.054: INFO: At 2021-10-30 01:18:47 +0000 UTC - event for externalname-service: {replication-controller } SuccessfulCreate: Created pod: externalname-service-hkx7c
Oct 30 01:21:07.054: INFO: At 2021-10-30 01:18:47 +0000 UTC - event for externalname-service-hkx7c: {default-scheduler } Scheduled: Successfully assigned services-3756/externalname-service-hkx7c to node2
Oct 30 01:21:07.054: INFO: At 2021-10-30 01:18:47 +0000 UTC - event for externalname-service-tp7ph: {default-scheduler } Scheduled: Successfully assigned services-3756/externalname-service-tp7ph to node2
Oct 30 01:21:07.054: INFO: At 2021-10-30 01:18:49 +0000 UTC - event for externalname-service-hkx7c: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 406.871002ms
Oct 30 01:21:07.054: INFO: At 2021-10-30 01:18:49 +0000 UTC - event for externalname-service-hkx7c: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 01:21:07.054: INFO: At 2021-10-30 01:18:49 +0000 UTC - event for externalname-service-tp7ph: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 01:21:07.054: INFO: At 2021-10-30 01:18:50 +0000 UTC - event for externalname-service-hkx7c: {kubelet node2} Created: Created container externalname-service
Oct 30 01:21:07.054: INFO: At 2021-10-30 01:18:50 +0000 UTC - event for externalname-service-tp7ph: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 645.774368ms
Oct 30 01:21:07.054: INFO: At 2021-10-30 01:18:50 +0000 UTC - event for externalname-service-tp7ph: {kubelet node2} Created: Created container externalname-service
Oct 30 01:21:07.054: INFO: At 2021-10-30 01:18:51 +0000 UTC - event for externalname-service-hkx7c: {kubelet node2} Started: Started container externalname-service
Oct 30 01:21:07.054: INFO: At 2021-10-30 01:18:51 +0000 UTC - event for externalname-service-tp7ph: {kubelet node2} Started: Started container externalname-service
Oct 30 01:21:07.054: INFO: At 2021-10-30 01:18:56 +0000 UTC - event for execpod6mmlh: {default-scheduler } Scheduled: Successfully assigned services-3756/execpod6mmlh to node1
Oct 30 01:21:07.054: INFO: At 2021-10-30 01:19:00 +0000 UTC - event for execpod6mmlh: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 310.603115ms
Oct 30 01:21:07.054: INFO: At 2021-10-30 01:19:00 +0000 UTC - event for execpod6mmlh: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 01:21:07.054: INFO: At 2021-10-30 01:19:01 +0000 UTC - event for execpod6mmlh: {kubelet node1} Started: Started container agnhost-container
Oct 30 01:21:07.054: INFO: At 2021-10-30 01:19:01 +0000 UTC - event for execpod6mmlh: {kubelet node1} Created: Created container agnhost-container
Oct 30 01:21:07.057: INFO: POD                         NODE   PHASE    GRACE  CONDITIONS
Oct 30 01:21:07.057: INFO: execpod6mmlh                node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:18:56 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:19:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:19:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:18:56 +0000 UTC }]
Oct 30 01:21:07.057: INFO: externalname-service-hkx7c  node2  Running         [{Initialized True
0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:18:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:18:53 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:18:53 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:18:47 +0000 UTC }] Oct 30 01:21:07.058: INFO: externalname-service-tp7ph node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:18:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:18:52 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:18:52 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:18:47 +0000 UTC }] Oct 30 01:21:07.058: INFO: Oct 30 01:21:07.061: INFO: Logging node info for node master1 Oct 30 01:21:07.065: INFO: Node Info: &Node{ObjectMeta:{master1 b47c04d5-47a7-4a95-8e97-481e6e60af54 96996 0 2021-10-29 21:05:34 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-29 21:05:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-10-29 21:05:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-10-29 21:08:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2021-10-29 21:13:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 
DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:27 +0000 UTC,LastTransitionTime:2021-10-29 21:11:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 01:21:03 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 01:21:03 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 01:21:03 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 01:21:03 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5d3ed60c561e427db72df14bd9006ed0,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:01b9d6bc-4126-4864-a1df-901a1bee4906,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 
k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:b7d2fa8154ac5d9cff45866e4d3d210a7d390f8576611c301a2eed2b57273227 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 01:21:07.066: INFO: Logging kubelet events for node master1 Oct 30 01:21:07.069: INFO: Logging pods the kubelet thinks is on node master1 Oct 30 01:21:07.098: INFO: node-exporter-fv84w started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 01:21:07.098: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:21:07.098: INFO: Container node-exporter ready: true, restart count 0 Oct 30 01:21:07.098: INFO: kube-scheduler-master1 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.098: INFO: Container kube-scheduler ready: true, restart count 0 Oct 30 01:21:07.098: INFO: kube-proxy-z5k8p started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.098: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 01:21:07.098: INFO: coredns-8474476ff8-lczbr started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.098: 
INFO: Container coredns ready: true, restart count 1 Oct 30 01:21:07.098: INFO: container-registry-65d7c44b96-zzkfl started at 2021-10-29 21:12:56 +0000 UTC (0+2 container statuses recorded) Oct 30 01:21:07.098: INFO: Container docker-registry ready: true, restart count 0 Oct 30 01:21:07.099: INFO: Container nginx ready: true, restart count 0 Oct 30 01:21:07.099: INFO: kube-apiserver-master1 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.099: INFO: Container kube-apiserver ready: true, restart count 0 Oct 30 01:21:07.099: INFO: kube-controller-manager-master1 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.099: INFO: Container kube-controller-manager ready: true, restart count 2 Oct 30 01:21:07.099: INFO: kube-flannel-d4pmt started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 01:21:07.099: INFO: Init container install-cni ready: true, restart count 0 Oct 30 01:21:07.099: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 01:21:07.099: INFO: kube-multus-ds-amd64-wgkfq started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.099: INFO: Container kube-multus ready: true, restart count 1 W1030 01:21:07.109933 39 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 01:21:07.188: INFO: Latency metrics for node master1 Oct 30 01:21:07.188: INFO: Logging node info for node master2 Oct 30 01:21:07.191: INFO: Node Info: &Node{ObjectMeta:{master2 208792d3-d365-4ddb-83d4-10e6e818079c 96897 0 2021-10-29 21:06:06 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-29 21:06:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-29 21:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-10-29 21:18:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:19 +0000 UTC,LastTransitionTime:2021-10-29 21:11:19 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 01:21:00 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 01:21:00 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 01:21:00 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 01:21:00 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:12290c1916d84ddda20431c28083da6a,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:314e82b8-9747-4131-b883-220496309995,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 01:21:07.192: INFO: Logging kubelet events for node master2 Oct 30 01:21:07.194: INFO: Logging pods the kubelet thinks is on node master2 Oct 30 01:21:07.216: INFO: kube-controller-manager-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.216: INFO: Container kube-controller-manager ready: true, restart count 3 Oct 30 01:21:07.216: INFO: kube-scheduler-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.216: INFO: Container kube-scheduler ready: true, restart count 2 Oct 30 01:21:07.216: INFO: kube-proxy-5gz4v started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.216: INFO: Container kube-proxy ready: true, restart count 2 Oct 30 
01:21:07.216: INFO: kube-flannel-qvqll started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 01:21:07.216: INFO: Init container install-cni ready: true, restart count 2 Oct 30 01:21:07.216: INFO: Container kube-flannel ready: true, restart count 1 Oct 30 01:21:07.216: INFO: kube-multus-ds-amd64-brkpk started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.216: INFO: Container kube-multus ready: true, restart count 1 Oct 30 01:21:07.216: INFO: node-exporter-lc9kk started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 01:21:07.216: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:21:07.216: INFO: Container node-exporter ready: true, restart count 0 Oct 30 01:21:07.216: INFO: kube-apiserver-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.216: INFO: Container kube-apiserver ready: true, restart count 0 W1030 01:21:07.229518 39 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 01:21:07.296: INFO: Latency metrics for node master2 Oct 30 01:21:07.296: INFO: Logging node info for node master3 Oct 30 01:21:07.299: INFO: Node Info: &Node{ObjectMeta:{master3 168f1589-e029-47ae-b194-10215fc22d6a 96869 0 2021-10-29 21:06:17 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-29 21:06:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-29 21:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-10-29 21:16:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-10-29 21:16:17 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:36 +0000 UTC,LastTransitionTime:2021-10-29 21:11:36 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 01:20:59 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 01:20:59 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 01:20:59 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 01:20:59 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:de18dcb6cb4c493e9f4d987da2c8b3fd,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:89235c4b-b1f5-4716-bbd7-18b41c0bde74,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 01:21:07.300: INFO: Logging kubelet events for node master3 Oct 30 01:21:07.302: INFO: Logging pods the kubelet thinks is on node master3 Oct 30 01:21:07.318: INFO: kube-multus-ds-amd64-bdwh9 started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.318: INFO: Container kube-multus ready: true, restart count 1 Oct 30 
01:21:07.318: INFO: coredns-8474476ff8-wrwwv started at 2021-10-29 21:09:00 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.318: INFO: Container coredns ready: true, restart count 1 Oct 30 01:21:07.318: INFO: prometheus-operator-585ccfb458-czbr2 started at 2021-10-29 21:21:06 +0000 UTC (0+2 container statuses recorded) Oct 30 01:21:07.318: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:21:07.318: INFO: Container prometheus-operator ready: true, restart count 0 Oct 30 01:21:07.318: INFO: node-exporter-bv946 started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 01:21:07.318: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:21:07.318: INFO: Container node-exporter ready: true, restart count 0 Oct 30 01:21:07.318: INFO: kube-controller-manager-master3 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.318: INFO: Container kube-controller-manager ready: true, restart count 1 Oct 30 01:21:07.318: INFO: kube-proxy-r6fpx started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.318: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 01:21:07.318: INFO: kube-flannel-rbdlt started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 01:21:07.318: INFO: Init container install-cni ready: true, restart count 2 Oct 30 01:21:07.318: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 01:21:07.318: INFO: node-feature-discovery-controller-cff799f9f-qq7g4 started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.318: INFO: Container nfd-controller ready: true, restart count 0 Oct 30 01:21:07.318: INFO: kube-apiserver-master3 started at 2021-10-29 21:11:10 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.318: INFO: Container kube-apiserver ready: true, restart count 0 Oct 30 01:21:07.318: INFO: kube-scheduler-master3 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.318: INFO: Container kube-scheduler ready: true, restart count 2 Oct 30 01:21:07.318: INFO: dns-autoscaler-7df78bfcfb-phsdx started at 2021-10-29 21:09:02 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.318: INFO: Container autoscaler ready: true, restart count 1 W1030 01:21:07.332459 39 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Oct 30 01:21:07.419: INFO: Latency metrics for node master3 Oct 30 01:21:07.419: INFO: Logging node info for node node1 Oct 30 01:21:07.422: INFO: Node Info: &Node{ObjectMeta:{node1 ddef9269-94c5-4165-81fb-a3b0c4ac5c75 97005 0 2021-10-29 21:07:27 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-29 21:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-29 21:19:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-29 21:19:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:38 +0000 UTC,LastTransitionTime:2021-10-29 21:11:38 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 01:21:04 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 01:21:04 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 01:21:04 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 01:21:04 +0000 UTC,LastTransitionTime:2021-10-29 21:08:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3bf4179125e4495c89c046ed0ae7baf7,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:ce868148-dc5e-4c7c-a555-42ee929547f7,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432289,},ContainerImage{Names:[localhost:30500/cmk@sha256:430843a71fa03faf488543c9f5b50d3efbef49988d6784f9f48b8077cc806f60 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2f1ff7ac170c0ac8079e232ea4ee89d23b7906d1b824d901927acb4e399c52c9 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 01:21:07.423: INFO: Logging kubelet events for node node1 Oct 30 01:21:07.426: INFO: Logging pods the kubelet thinks is on node node1 Oct 30 01:21:07.442: INFO: nginx-proxy-node1 started at 2021-10-29 21:07:28 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.442: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 01:21:07.442: INFO: cmk-init-discover-node1-n4mcc started at 2021-10-29 21:19:28 +0000 UTC (0+3 container statuses recorded) Oct 30 01:21:07.442: INFO: Container discover ready: false, restart count 0 Oct 30 01:21:07.442: INFO: Container init 
ready: false, restart count 0 Oct 30 01:21:07.442: INFO: Container install ready: false, restart count 0 Oct 30 01:21:07.442: INFO: cmk-89lqq started at 2021-10-29 21:20:10 +0000 UTC (0+2 container statuses recorded) Oct 30 01:21:07.442: INFO: Container nodereport ready: true, restart count 0 Oct 30 01:21:07.442: INFO: Container reconcile ready: true, restart count 0 Oct 30 01:21:07.442: INFO: var-expansion-a561c289-19d0-4595-8755-a2088ef58de3 started at 2021-10-30 01:20:28 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.442: INFO: Container dapi-container ready: false, restart count 0 Oct 30 01:21:07.442: INFO: kube-multus-ds-amd64-68wrz started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.442: INFO: Container kube-multus ready: true, restart count 1 Oct 30 01:21:07.442: INFO: test-cleanup-controller-6s2gs started at 2021-10-30 01:20:58 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.442: INFO: Container httpd ready: true, restart count 0 Oct 30 01:21:07.442: INFO: pod-handle-http-request started at 2021-10-30 01:21:03 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.442: INFO: Container agnhost-container ready: false, restart count 0 Oct 30 01:21:07.442: INFO: test-cleanup-deployment-5b4d99b59b-ztqqn started at 2021-10-30 01:21:03 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.442: INFO: Container agnhost ready: false, restart count 0 Oct 30 01:21:07.442: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-t789r started at 2021-10-29 21:17:10 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.442: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 01:21:07.442: INFO: execpod6mmlh started at 2021-10-30 01:18:56 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.442: INFO: Container agnhost-container ready: true, restart count 0 Oct 30 01:21:07.442: INFO: kubernetes-metrics-scraper-5558854cb-5rmjw started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.442: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 30 01:21:07.442: INFO: node-exporter-256wm started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 01:21:07.442: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:21:07.442: INFO: Container node-exporter ready: true, restart count 0 Oct 30 01:21:07.442: INFO: prometheus-k8s-0 started at 2021-10-29 21:21:17 +0000 UTC (0+4 container statuses recorded) Oct 30 01:21:07.442: INFO: Container config-reloader ready: true, restart count 0 Oct 30 01:21:07.442: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 30 01:21:07.442: INFO: Container grafana ready: true, restart count 0 Oct 30 01:21:07.442: INFO: Container prometheus ready: true, restart count 1 Oct 30 01:21:07.442: INFO: affinity-nodeport-timeout-stwzl started at 2021-10-30 01:19:08 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.442: INFO: Container affinity-nodeport-timeout ready: true, restart count 0 Oct 30 01:21:07.442: INFO: bin-falseac49585d-fbac-484d-ba6e-b0b0e93a2207 started at 2021-10-30 01:21:05 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.442: INFO: Container bin-falseac49585d-fbac-484d-ba6e-b0b0e93a2207 ready: false, restart count 0 Oct 30 01:21:07.442: INFO: node-feature-discovery-worker-w5vdb started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.442: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 
01:21:07.442: INFO: kube-flannel-phg88 started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 01:21:07.442: INFO: Init container install-cni ready: true, restart count 2 Oct 30 01:21:07.442: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 01:21:07.442: INFO: busybox-host-aliases7f1cce87-c259-4b01-9f2c-c26fb30cf73f started at 2021-10-30 01:20:40 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.442: INFO: Container busybox-host-aliases7f1cce87-c259-4b01-9f2c-c26fb30cf73f ready: true, restart count 0 Oct 30 01:21:07.442: INFO: kube-proxy-z5hqt started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.442: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 01:21:07.442: INFO: collectd-d45rv started at 2021-10-29 21:25:13 +0000 UTC (0+3 container statuses recorded) Oct 30 01:21:07.442: INFO: Container collectd ready: true, restart count 0 Oct 30 01:21:07.442: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 01:21:07.442: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 01:21:07.442: INFO: pod-projected-configmaps-38781d87-ab78-459b-90a5-f23fa39caa8e started at 2021-10-30 01:20:53 +0000 UTC (0+3 container statuses recorded) Oct 30 01:21:07.443: INFO: Container createcm-volume-test ready: true, restart count 0 Oct 30 01:21:07.443: INFO: Container delcm-volume-test ready: true, restart count 0 Oct 30 01:21:07.443: INFO: Container updcm-volume-test ready: true, restart count 0 W1030 01:21:07.456856 39 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 01:21:07.759: INFO: Latency metrics for node node1 Oct 30 01:21:07.759: INFO: Logging node info for node node2 Oct 30 01:21:07.761: INFO: Node Info: &Node{ObjectMeta:{node2 3b49ad19-ba56-4f4a-b1fa-eef102063de9 97009 0 2021-10-29 21:07:28 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true 
feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-29 21:08:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-29 21:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-29 21:19:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-29 21:19:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:34 +0000 UTC,LastTransitionTime:2021-10-29 21:11:34 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 01:21:04 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 01:21:04 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 01:21:04 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 01:21:04 +0000 UTC,LastTransitionTime:2021-10-29 21:08:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7283436dd9e34722a6e4df817add95ed,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:c219e7bd-582b-4d6c-b379-1161acc70676,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:430843a71fa03faf488543c9f5b50d3efbef49988d6784f9f48b8077cc806f60 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 
k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2f1ff7ac170c0ac8079e232ea4ee89d23b7906d1b824d901927acb4e399c52c9 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[localhost:30500/tasextender@sha256:b7d2fa8154ac5d9cff45866e4d3d210a7d390f8576611c301a2eed2b57273227 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 01:21:07.762: INFO: Logging kubelet events for node node2 Oct 30 01:21:07.764: INFO: Logging pods the kubelet thinks is on node node2 Oct 30 01:21:07.780: INFO: cmk-8bpbf started at 2021-10-29 21:20:11 +0000 UTC (0+2 container statuses recorded) Oct 30 01:21:07.780: INFO: Container nodereport ready: true, restart count 0 Oct 30 01:21:07.780: INFO: Container reconcile ready: true, restart count 0 Oct 30 01:21:07.780: INFO: node-exporter-r77s4 started at 
2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 01:21:07.780: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:21:07.780: INFO: Container node-exporter ready: true, restart count 0 Oct 30 01:21:07.780: INFO: tas-telemetry-aware-scheduling-84ff454dfb-989mh started at 2021-10-29 21:24:23 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.780: INFO: Container tas-extender ready: true, restart count 0 Oct 30 01:21:07.780: INFO: affinity-nodeport-timeout-sxkhs started at 2021-10-30 01:19:08 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.780: INFO: Container affinity-nodeport-timeout ready: true, restart count 0 Oct 30 01:21:07.780: INFO: collectd-flvhl started at 2021-10-29 21:25:13 +0000 UTC (0+3 container statuses recorded) Oct 30 01:21:07.780: INFO: Container collectd ready: true, restart count 0 Oct 30 01:21:07.780: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 01:21:07.780: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 01:21:07.780: INFO: kube-proxy-76285 started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.780: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 01:21:07.780: INFO: forbid-27259278-ws8bb started at 2021-10-30 01:18:00 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.780: INFO: Container c ready: true, restart count 0 Oct 30 01:21:07.780: INFO: pod-configmaps-a8a63f4a-3964-4d12-9342-6b06af7c74a6 started at 2021-10-30 01:21:01 +0000 UTC (0+2 container statuses recorded) Oct 30 01:21:07.780: INFO: Container agnhost-container ready: true, restart count 0 Oct 30 01:21:07.780: INFO: Container configmap-volume-binary-test ready: false, restart count 0 Oct 30 01:21:07.780: INFO: node-feature-discovery-worker-h6lcp started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.780: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 01:21:07.780: INFO: externalname-service-tp7ph started at 2021-10-30 01:18:47 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.780: INFO: Container externalname-service ready: true, restart count 0 Oct 30 01:21:07.780: INFO: busybox-839dfaab-f3ac-49e4-8560-aa70d63f7aa7 started at 2021-10-30 01:18:48 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.780: INFO: Container busybox ready: true, restart count 0 Oct 30 01:21:07.780: INFO: affinity-nodeport-timeout-kklwd started at 2021-10-30 01:19:08 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.780: INFO: Container affinity-nodeport-timeout ready: true, restart count 0 Oct 30 01:21:07.780: INFO: kube-flannel-f6s5v started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 01:21:07.780: INFO: Init container install-cni ready: true, restart count 2 Oct 30 01:21:07.780: INFO: Container kube-flannel ready: true, restart count 3 Oct 30 01:21:07.780: INFO: kube-multus-ds-amd64-7tvbl started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.780: INFO: Container kube-multus ready: true, restart count 1 Oct 30 01:21:07.780: INFO: cmk-webhook-6c9d5f8578-ffk66 started at 2021-10-29 21:20:11 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.780: INFO: Container cmk-webhook ready: true, restart count 0 Oct 30 01:21:07.780: INFO: execpod-affinityqt95s started at 2021-10-30 01:19:17 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.780: INFO: Container agnhost-container ready: true, restart count 0 Oct 30 
01:21:07.780: INFO: concurrent-27259280-tlflb started at 2021-10-30 01:20:00 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.780: INFO: Container c ready: true, restart count 0 Oct 30 01:21:07.780: INFO: nginx-proxy-node2 started at 2021-10-29 21:07:28 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.780: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 01:21:07.780: INFO: kubernetes-dashboard-785dcbb76d-pbjjt started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.780: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 30 01:21:07.780: INFO: cmk-init-discover-node2-2fmmt started at 2021-10-29 21:19:48 +0000 UTC (0+3 container statuses recorded) Oct 30 01:21:07.780: INFO: Container discover ready: false, restart count 0 Oct 30 01:21:07.780: INFO: Container init ready: false, restart count 0 Oct 30 01:21:07.780: INFO: Container install ready: false, restart count 0 Oct 30 01:21:07.780: INFO: externalname-service-hkx7c started at 2021-10-30 01:18:47 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.780: INFO: Container externalname-service ready: true, restart count 0 Oct 30 01:21:07.780: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg started at 2021-10-29 21:17:10 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:07.780: INFO: Container kube-sriovdp ready: true, restart count 0 W1030 01:21:07.799571 39 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 01:21:08.010: INFO: Latency metrics for node node2 Oct 30 01:21:08.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3756" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [140.787 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:21:07.038: Unexpected error: <*errors.errorString | 0xc0043c25c0>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31014 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31014 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351 ------------------------------ {"msg":"FAILED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":30,"skipped":414,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:21:05.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [BeforeEach] when 
scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:21:09.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2979" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":707,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:20:58.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:20:58.231: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Oct 30 01:21:03.234: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Oct 30 01:21:03.234: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Oct 30 01:21:09.254: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-8808 bc464b3a-de0b-44eb-960f-f58bc5ab5b3a 97092 1 2021-10-30 01:21:03 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2021-10-30 01:21:03 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-10-30 01:21:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00386a6f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-10-30 01:21:03 +0000 UTC,LastTransitionTime:2021-10-30 01:21:03 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-5b4d99b59b" has successfully progressed.,LastUpdateTime:2021-10-30 01:21:08 +0000 UTC,LastTransitionTime:2021-10-30 01:21:03 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Oct 30 01:21:09.257: INFO: New ReplicaSet "test-cleanup-deployment-5b4d99b59b" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-5b4d99b59b deployment-8808 b2015f4e-5580-4697-b71f-70a65f3a838b 97071 1 2021-10-30 01:21:03 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment bc464b3a-de0b-44eb-960f-f58bc5ab5b3a 0xc00386aa97 0xc00386aa98}] [] [{kube-controller-manager Update apps/v1 2021-10-30 01:21:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc464b3a-de0b-44eb-960f-f58bc5ab5b3a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 5b4d99b59b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00386ab28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Oct 30 01:21:09.259: INFO: Pod "test-cleanup-deployment-5b4d99b59b-ztqqn" is available: &Pod{ObjectMeta:{test-cleanup-deployment-5b4d99b59b-ztqqn test-cleanup-deployment-5b4d99b59b- deployment-8808 38dfdec4-6cd5-49ce-9c73-8521837feb52 97070 0 2021-10-30 01:21:03 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.184" ], "mac": "1a:50:e1:38:6c:49", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.184" ], "mac": "1a:50:e1:38:6c:49", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-cleanup-deployment-5b4d99b59b b2015f4e-5580-4697-b71f-70a65f3a838b 0xc00386ae7f 0xc00386ae90}] [] [{kube-controller-manager Update v1 2021-10-30 01:21:03 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b2015f4e-5580-4697-b71f-70a65f3a838b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-30 01:21:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-30 01:21:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.184\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mw9lb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mw9lb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Vo
lumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:21:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:21:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:21:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:21:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.184,StartTime:2021-10-30 01:21:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-30 01:21:06 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://89032d0a40f4225029dbf8b51f6fd6ff96f6760744d5ceaf84de11c918f52da9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.184,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:21:09.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8808" for this suite. 
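The cleanup asserted by this test is driven by the Deployment's spec.revisionHistoryLimit, shown as *0 in the dump above: with a zero history limit, the controller prunes every superseded ReplicaSet as soon as the rollout completes. A minimal client-go sketch that builds an equivalent Deployment (the name "cleanup-demo" and the "default" namespace are placeholders, not values from this run; the image matches the one logged above):

    package main

    import (
        "context"
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func int32Ptr(i int32) *int32 { return &i }

    func main() {
        // Build a client from the same kubeconfig path the suite uses.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        labels := map[string]string{"name": "cleanup-pod"}
        d := &appsv1.Deployment{
            ObjectMeta: metav1.ObjectMeta{Name: "cleanup-demo"}, // hypothetical name
            Spec: appsv1.DeploymentSpec{
                Replicas: int32Ptr(1),
                // RevisionHistoryLimit=0 tells the controller to delete all
                // superseded ReplicaSets once the new one is fully rolled
                // out, which is the condition the test polls for.
                RevisionHistoryLimit: int32Ptr(0),
                Selector:             &metav1.LabelSelector{MatchLabels: labels},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{Containers: []corev1.Container{{
                        Name:  "agnhost",
                        Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
                    }}},
                },
            },
        }
        if _, err := cs.AppsV1().Deployments("default").Create(context.TODO(), d, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("deployment created; old ReplicaSets will be pruned after each rollout")
    }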
• [SLOW TEST:11.060 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":50,"skipped":926,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:21:08.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:21:08.053: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Oct 30 01:21:16.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8921 --namespace=crd-publish-openapi-8921 create -f -' Oct 30 01:21:16.995: INFO: stderr: "" Oct 30 01:21:16.995: INFO: stdout: "e2e-test-crd-publish-openapi-909-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Oct 30 01:21:16.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8921 --namespace=crd-publish-openapi-8921 delete e2e-test-crd-publish-openapi-909-crds test-cr' Oct 30 01:21:17.146: INFO: stderr: "" Oct 30 01:21:17.146: INFO: stdout: "e2e-test-crd-publish-openapi-909-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Oct 30 01:21:17.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8921 --namespace=crd-publish-openapi-8921 apply -f -' Oct 30 01:21:17.467: INFO: stderr: "" Oct 30 01:21:17.467: INFO: stdout: "e2e-test-crd-publish-openapi-909-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Oct 30 01:21:17.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8921 --namespace=crd-publish-openapi-8921 delete e2e-test-crd-publish-openapi-909-crds test-cr' Oct 30 01:21:17.633: INFO: stderr: "" Oct 30 01:21:17.633: INFO: stdout: "e2e-test-crd-publish-openapi-909-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Oct 30 01:21:17.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8921 explain e2e-test-crd-publish-openapi-909-crds' Oct 30 01:21:17.958: INFO: stderr: "" Oct 30 01:21:17.958: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-909-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of
an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<map[string]>\n Specification of Waldo\n\n status\t<map[string]>\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:21:21.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8921" for this suite. • [SLOW TEST:13.443 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":31,"skipped":416,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:21:02.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request.
Oct 30 01:21:03.028: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:21:05.033: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:21:07.031: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:21:09.033: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Oct 30 01:21:09.047: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:21:11.053: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:21:13.050: INFO: The status of Pod pod-with-prestop-http-hook is Running (Ready = true) STEP: delete the pod with lifecycle hook Oct 30 01:21:13.056: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Oct 30 01:21:13.058: INFO: Pod pod-with-prestop-http-hook still exists Oct 30 01:21:15.059: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Oct 30 01:21:15.062: INFO: Pod pod-with-prestop-http-hook still exists Oct 30 01:21:17.059: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Oct 30 01:21:17.061: INFO: Pod pod-with-prestop-http-hook still exists Oct 30 01:21:19.060: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Oct 30 01:21:19.063: INFO: Pod pod-with-prestop-http-hook still exists Oct 30 01:21:21.059: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Oct 30 01:21:21.062: INFO: Pod pod-with-prestop-http-hook still exists Oct 30 01:21:23.059: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Oct 30 01:21:23.062: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:21:23.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3784" for this suite. 
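The pod-with-prestop-http-hook pod deleted above carries a preStop lifecycle hook aimed at the pod-handle-http-request handler: on deletion the kubelet fires an HTTP GET before stopping the container, and the test then checks the handler for that request. A minimal sketch of the hook wiring against the v1.21 client-go API (the host IP, port, and path are placeholders, not values confirmed by this log; note corev1.Handler was renamed LifecycleHandler in later Kubernetes releases):

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-http-hook"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "pod-with-prestop-http-hook",
                    Image: "k8s.gcr.io/pause:3.4.1",
                    Lifecycle: &corev1.Lifecycle{
                        // When the pod is deleted, the kubelet issues this GET
                        // before sending SIGTERM to the container.
                        PreStop: &corev1.Handler{ // LifecycleHandler in k8s >= 1.22
                            HTTPGet: &corev1.HTTPGetAction{
                                Path: "/echo?msg=prestop", // placeholder path
                                Host: "10.244.3.183",      // handler pod IP (placeholder)
                                Port: intstr.FromInt(8080),
                            },
                        },
                    },
                }},
            },
        }
        if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }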
• [SLOW TEST:20.086 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":193,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:21:09.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 30 01:21:09.603: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 30 01:21:11.611: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153669, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153669, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153669, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153669, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:21:13.615: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153669, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153669, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153669, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153669, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 30 01:21:16.620: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:21:28.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4148" for this suite. STEP: Destroying namespace "webhook-4148-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:19.395 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":37,"skipped":752,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSS ------------------------------ [BeforeEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:21:28.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename server-version STEP: Waiting for a default service account to be provisioned in namespace [It] should find the server version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Request ServerVersion STEP: Confirm major version Oct 30 01:21:28.770: INFO: Major version: 1 STEP: Confirm minor version Oct 30 01:21:28.770: INFO: cleanMinorVersion: 21 Oct 30 01:21:28.771: INFO: Minor version: 21 [AfterEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:21:28.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "server-version-849" for this suite. 
•
------------------------------
{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":38,"skipped":756,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:21:21.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Oct 30 01:21:28.037: INFO: Successfully updated pod "adopt-release-65drz"
STEP: Checking that the Job readopts the Pod
Oct 30 01:21:28.038: INFO: Waiting up to 15m0s for pod "adopt-release-65drz" in namespace "job-2695" to be "adopted"
Oct 30 01:21:28.040: INFO: Pod "adopt-release-65drz": Phase="Running", Reason="", readiness=true. Elapsed: 1.990129ms
Oct 30 01:21:30.043: INFO: Pod "adopt-release-65drz": Phase="Running", Reason="", readiness=true. Elapsed: 2.005234702s
Oct 30 01:21:30.043: INFO: Pod "adopt-release-65drz" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Oct 30 01:21:30.553: INFO: Successfully updated pod "adopt-release-65drz"
STEP: Checking that the Job releases the Pod
Oct 30 01:21:30.553: INFO: Waiting up to 15m0s for pod "adopt-release-65drz" in namespace "job-2695" to be "released"
Oct 30 01:21:30.557: INFO: Pod "adopt-release-65drz": Phase="Running", Reason="", readiness=true. Elapsed: 3.243865ms
Oct 30 01:21:32.561: INFO: Pod "adopt-release-65drz": Phase="Running", Reason="", readiness=true. Elapsed: 2.008029419s
Oct 30 01:21:32.561: INFO: Pod "adopt-release-65drz" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:21:32.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2695" for this suite.
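[editor's note] The adopt/release behaviour above is driven entirely by labels and ownerReferences: a pod whose labels match the Job's selector gets a controller ownerReference back after it is orphaned, and loses it again once the labels are removed. A sketch of watching the same mechanics by hand, using the pod and namespace from the log (the label keys are assumptions based on how Job pods are labelled in this release; check the Job's .spec.selector first):

    # Orphan the pod by dropping its controller ownerReference; the Job
    # controller re-adopts it because its labels still match the selector.
    kubectl -n job-2695 patch pod adopt-release-65drz --type=json \
      -p='[{"op":"remove","path":"/metadata/ownerReferences"}]'
    # Watch the ownerReference reappear:
    kubectl -n job-2695 get pod adopt-release-65drz \
      -o jsonpath='{.metadata.ownerReferences[*].name}'
    # Remove the matching labels and the controller releases the pod again:
    kubectl -n job-2695 label pod adopt-release-65drz job-name- controller-uid-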
• [SLOW TEST:11.075 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":32,"skipped":427,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]}
S
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:19:03.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating service in namespace services-2728
Oct 30 01:19:03.761: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Oct 30 01:19:05.765: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Oct 30 01:19:07.765: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true)
Oct 30 01:19:07.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
Oct 30 01:19:07.999: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n"
Oct 30 01:19:07.999: INFO: stdout: "iptables"
Oct 30 01:19:07.999: INFO: proxyMode: iptables
Oct 30 01:19:08.007: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Oct 30 01:19:08.010: INFO: Pod kube-proxy-mode-detector no longer exists
STEP: creating service affinity-nodeport-timeout in namespace services-2728
STEP: creating replication controller affinity-nodeport-timeout in namespace services-2728
I1030 01:19:08.022818      29 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-2728, replica count: 3
I1030 01:19:11.075746      29 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1030 01:19:14.075902      29 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1030 01:19:17.076078      29 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 30 01:19:17.085: INFO: Creating new exec pod
Oct 30 01:19:22.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
Oct 30 01:19:22.378: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n"
Oct 30 01:19:22.378: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Oct 30 01:19:22.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.58.253 80'
Oct 30 01:19:22.706: INFO: stderr: "+ nc -v -t -w 2 10.233.58.253 80\n+ echo hostName\nConnection to 10.233.58.253 80 port [tcp/http] succeeded!\n"
Oct 30 01:19:22.706: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Oct 30 01:19:22.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165'
Oct 30 01:19:22.951: INFO: rc: 1
Oct 30 01:19:22.952: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31165
nc: connect to 10.10.190.207 port 31165 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 01:19:23.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165'
Oct 30 01:19:24.180: INFO: rc: 1
Oct 30 01:19:24.181: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31165
nc: connect to 10.10.190.207 port 31165 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 01:19:24.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165'
Oct 30 01:19:25.188: INFO: rc: 1
Oct 30 01:19:25.188: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31165
nc: connect to 10.10.190.207 port 31165 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
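[editor's note] Two things in the probes above are easy to misread. First, the "400 Bad Request" replies are expected successes: "echo hostName" piped into nc is not valid HTTP, so an HTTP backend answers 400, and the framework only cares that the TCP connect ("Connection to ... succeeded!") worked. Second, the "Connection refused" on the node IP most often means the freshly allocated NodePort's iptables rules (the suite confirmed kube-proxy is in iptables mode above) have not yet been programmed on that node, which the framework treats as transient and retries. The same three reachability checks can be run by hand, reusing the exec pod and endpoints from the log:

    # Service DNS name, ClusterIP, and node IP:NodePort, in that order;
    # only the TCP connect result matters, the 400 body is expected.
    kubectl -n services-2728 exec execpod-affinityqt95s -- \
      /bin/sh -c 'echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    kubectl -n services-2728 exec execpod-affinityqt95s -- \
      /bin/sh -c 'echo hostName | nc -v -t -w 2 10.233.58.253 80'
    kubectl -n services-2728 exec execpod-affinityqt95s -- \
      /bin/sh -c 'echo hostName | nc -v -t -w 2 10.10.190.207 31165'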
[... 76 further identical retry attempts omitted, Oct 30 01:19:25.953 through Oct 30 01:20:44.208: each ran the same nc probe against 10.10.190.207 31165, failed with "nc: connect to 10.10.190.207 port 31165 (tcp) failed: Connection refused" (exit status 1), and logged "Retrying..." ...]
Oct 30 01:20:44.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165' Oct 30 01:20:45.194: INFO: rc: 1 Oct 30 01:20:45.194: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31165 nc: connect to 10.10.190.207 port 31165 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:45.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165' Oct 30 01:20:46.197: INFO: rc: 1 Oct 30 01:20:46.197: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31165 nc: connect to 10.10.190.207 port 31165 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:46.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165' Oct 30 01:20:47.224: INFO: rc: 1 Oct 30 01:20:47.224: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31165 nc: connect to 10.10.190.207 port 31165 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:47.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165' Oct 30 01:20:48.189: INFO: rc: 1 Oct 30 01:20:48.189: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31165 + echo hostName nc: connect to 10.10.190.207 port 31165 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:20:48.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165' Oct 30 01:20:49.197: INFO: rc: 1 Oct 30 01:20:49.197: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31165 nc: connect to 10.10.190.207 port 31165 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:49.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165' Oct 30 01:20:50.182: INFO: rc: 1 Oct 30 01:20:50.183: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31165 nc: connect to 10.10.190.207 port 31165 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:50.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165' Oct 30 01:20:51.196: INFO: rc: 1 Oct 30 01:20:51.196: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31165 nc: connect to 10.10.190.207 port 31165 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:51.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165' Oct 30 01:20:52.170: INFO: rc: 1 Oct 30 01:20:52.170: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31165 nc: connect to 10.10.190.207 port 31165 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:20:52.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165' Oct 30 01:20:53.191: INFO: rc: 1 Oct 30 01:20:53.191: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31165 nc: connect to 10.10.190.207 port 31165 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:53.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165' Oct 30 01:20:54.202: INFO: rc: 1 Oct 30 01:20:54.202: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31165 + echo hostName nc: connect to 10.10.190.207 port 31165 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:54.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165' Oct 30 01:20:55.188: INFO: rc: 1 Oct 30 01:20:55.188: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31165 nc: connect to 10.10.190.207 port 31165 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:55.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165' Oct 30 01:20:56.195: INFO: rc: 1 Oct 30 01:20:56.195: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31165 nc: connect to 10.10.190.207 port 31165 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:20:56.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165' Oct 30 01:20:57.181: INFO: rc: 1 Oct 30 01:20:57.181: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31165 nc: connect to 10.10.190.207 port 31165 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:57.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165' Oct 30 01:20:58.194: INFO: rc: 1 Oct 30 01:20:58.194: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31165 nc: connect to 10.10.190.207 port 31165 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:58.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165' Oct 30 01:20:59.201: INFO: rc: 1 Oct 30 01:20:59.201: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31165 nc: connect to 10.10.190.207 port 31165 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:20:59.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165' Oct 30 01:21:00.190: INFO: rc: 1 Oct 30 01:21:00.191: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31165 nc: connect to 10.10.190.207 port 31165 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:21:00.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165' Oct 30 01:21:01.583: INFO: rc: 1 Oct 30 01:21:01.583: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31165 nc: connect to 10.10.190.207 port 31165 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:21:01.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165' Oct 30 01:21:02.392: INFO: rc: 1 Oct 30 01:21:02.392: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31165 nc: connect to 10.10.190.207 port 31165 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:21:02.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165' Oct 30 01:21:03.315: INFO: rc: 1 Oct 30 01:21:03.316: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31165 nc: connect to 10.10.190.207 port 31165 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:21:03.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165' Oct 30 01:21:04.219: INFO: rc: 1 Oct 30 01:21:04.219: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31165 nc: connect to 10.10.190.207 port 31165 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:21:04.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165' Oct 30 01:21:05.200: INFO: rc: 1 Oct 30 01:21:05.200: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31165 nc: connect to 10.10.190.207 port 31165 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:21:05.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165' Oct 30 01:21:06.225: INFO: rc: 1 Oct 30 01:21:06.225: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31165 nc: connect to 10.10.190.207 port 31165 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:21:06.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165' Oct 30 01:21:07.203: INFO: rc: 1 Oct 30 01:21:07.203: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31165 nc: connect to 10.10.190.207 port 31165 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:21:07.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165' Oct 30 01:21:08.200: INFO: rc: 1 Oct 30 01:21:08.201: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31165 nc: connect to 10.10.190.207 port 31165 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:21:08.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165' Oct 30 01:21:09.209: INFO: rc: 1 Oct 30 01:21:09.209: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31165 nc: connect to 10.10.190.207 port 31165 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:21:09.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165' Oct 30 01:21:10.197: INFO: rc: 1 Oct 30 01:21:10.197: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31165 nc: connect to 10.10.190.207 port 31165 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:21:10.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165' Oct 30 01:21:12.560: INFO: rc: 1 Oct 30 01:21:12.560: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31165 nc: connect to 10.10.190.207 port 31165 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:21:12.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165' Oct 30 01:21:13.275: INFO: rc: 1 Oct 30 01:21:13.275: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31165 nc: connect to 10.10.190.207 port 31165 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:21:13.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165' Oct 30 01:21:14.271: INFO: rc: 1 Oct 30 01:21:14.271: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31165 nc: connect to 10.10.190.207 port 31165 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:21:14.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165' Oct 30 01:21:15.193: INFO: rc: 1 Oct 30 01:21:15.193: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31165 nc: connect to 10.10.190.207 port 31165 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:21:15.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165' Oct 30 01:21:16.220: INFO: rc: 1 Oct 30 01:21:16.220: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31165 nc: connect to 10.10.190.207 port 31165 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:21:16.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165' Oct 30 01:21:17.198: INFO: rc: 1 Oct 30 01:21:17.198: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31165 nc: connect to 10.10.190.207 port 31165 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:21:17.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165' Oct 30 01:21:18.198: INFO: rc: 1 Oct 30 01:21:18.198: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31165 nc: connect to 10.10.190.207 port 31165 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:21:18.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165' Oct 30 01:21:19.193: INFO: rc: 1 Oct 30 01:21:19.193: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31165 nc: connect to 10.10.190.207 port 31165 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:21:19.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165' Oct 30 01:21:20.191: INFO: rc: 1 Oct 30 01:21:20.191: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31165 nc: connect to 10.10.190.207 port 31165 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:21:20.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165' Oct 30 01:21:21.186: INFO: rc: 1 Oct 30 01:21:21.186: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31165 nc: connect to 10.10.190.207 port 31165 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 30 01:21:21.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165' Oct 30 01:21:22.273: INFO: rc: 1 Oct 30 01:21:22.273: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31165 nc: connect to 10.10.190.207 port 31165 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:21:22.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165' Oct 30 01:21:23.273: INFO: rc: 1 Oct 30 01:21:23.273: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31165 nc: connect to 10.10.190.207 port 31165 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 30 01:21:23.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165' Oct 30 01:21:23.723: INFO: rc: 1 Oct 30 01:21:23.723: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2728 exec execpod-affinityqt95s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31165: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31165 nc: connect to 10.10.190.207 port 31165 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
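The probe being retried above is just a TCP connect attempt (the exec pod's shell runs nc against the node IP and NodePort) wrapped in a poll-until-deadline loop. A minimal sketch of that retry pattern in Go, standard library only; the helper name pollTCP and its structure are illustrative, not the framework's actual code, though the ~1s interval, the 2-second dial timeout (-w 2), and the 2m0s overall deadline match the log:

package main

import (
	"fmt"
	"net"
	"time"
)

// pollTCP dials addr roughly once per interval until the connection
// succeeds or the overall timeout elapses, mirroring the retry loop
// visible in the log above.
func pollTCP(addr string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second) // like nc -w 2
		if err == nil {
			conn.Close()
			return nil // service reachable
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("service is not reachable within %v timeout on endpoint %s over TCP protocol", timeout, addr)
		}
		time.Sleep(interval) // matches the ~1s cadence between log entries
	}
}

func main() {
	// 10.10.190.207:31165 is the node IP and NodePort under test in this run.
	if err := pollTCP("10.10.190.207:31165", time.Second, 2*time.Minute); err != nil {
		fmt.Println("FAIL:", err)
	}
}

Note that the loop can only succeed once kube-proxy has programmed the NodePort on 10.10.190.207; "Connection refused" on every single attempt, as here, points at the port never being opened rather than at a slow backend.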
Oct 30 01:21:23.723: FAIL: Unexpected error:
    <*errors.errorString | 0xc001d56340>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31165 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31165 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForSessionAffinityTimeout(0xc001205b80, 0x779f8f8, 0xc0006309a0, 0xc0012a4a00)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2493 +0x751
k8s.io/kubernetes/test/e2e/network.glob..func24.26()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1846 +0x9c
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001958c00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001958c00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001958c00, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
Oct 30 01:21:23.725: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-2728, will wait for the garbage collector to delete the pods
Oct 30 01:21:23.790: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 3.952283ms
Oct 30 01:21:23.890: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 100.544072ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-2728".
STEP: Found 33 events.
Oct 30 01:21:33.008: INFO: At 2021-10-30 01:19:03 +0000 UTC - event for kube-proxy-mode-detector: {default-scheduler } Scheduled: Successfully assigned services-2728/kube-proxy-mode-detector to node1
Oct 30 01:21:33.008: INFO: At 2021-10-30 01:19:04 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 01:21:33.008: INFO: At 2021-10-30 01:19:05 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node1} Started: Started container agnhost-container
Oct 30 01:21:33.008: INFO: At 2021-10-30 01:19:05 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node1} Created: Created container agnhost-container
Oct 30 01:21:33.008: INFO: At 2021-10-30 01:19:05 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 267.064226ms
Oct 30 01:21:33.008: INFO: At 2021-10-30 01:19:08 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-sxkhs
Oct 30 01:21:33.008: INFO: At 2021-10-30 01:19:08 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-kklwd
Oct 30 01:21:33.008: INFO: At 2021-10-30 01:19:08 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-stwzl
Oct 30 01:21:33.008: INFO: At 2021-10-30 01:19:08 +0000 UTC - event for affinity-nodeport-timeout-kklwd: {default-scheduler } Scheduled: Successfully assigned services-2728/affinity-nodeport-timeout-kklwd to node2
Oct 30 01:21:33.008: INFO: At 2021-10-30 01:19:08 +0000 UTC - event for affinity-nodeport-timeout-stwzl: {default-scheduler } Scheduled: Successfully assigned services-2728/affinity-nodeport-timeout-stwzl to node1
Oct 30 01:21:33.008: INFO: At 2021-10-30 01:19:08 +0000 UTC - event for affinity-nodeport-timeout-sxkhs: {default-scheduler } Scheduled: Successfully assigned services-2728/affinity-nodeport-timeout-sxkhs to node2
Oct 30 01:21:33.008: INFO: At 2021-10-30 01:19:08 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node1} Killing: Stopping container agnhost-container
Oct 30 01:21:33.008: INFO: At 2021-10-30 01:19:10 +0000 UTC - event for affinity-nodeport-timeout-stwzl: {kubelet node1} Created: Created container affinity-nodeport-timeout
Oct 30 01:21:33.008: INFO: At 2021-10-30 01:19:10 +0000 UTC - event for affinity-nodeport-timeout-stwzl: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 306.626449ms
Oct 30 01:21:33.008: INFO: At 2021-10-30 01:19:10 +0000 UTC - event for affinity-nodeport-timeout-stwzl: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 01:21:33.008: INFO: At 2021-10-30 01:19:10 +0000 UTC - event for affinity-nodeport-timeout-sxkhs: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 01:21:33.008: INFO: At 2021-10-30 01:19:10 +0000 UTC - event for affinity-nodeport-timeout-sxkhs: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 473.620129ms
Oct 30 01:21:33.008: INFO: At 2021-10-30 01:19:11 +0000 UTC - event for affinity-nodeport-timeout-stwzl: {kubelet node1} Started: Started container affinity-nodeport-timeout
Oct 30 01:21:33.008: INFO: At 2021-10-30 01:19:12 +0000 UTC - event for affinity-nodeport-timeout-sxkhs: {kubelet node2} Created: Created container affinity-nodeport-timeout
Oct 30 01:21:33.008: INFO: At 2021-10-30 01:19:13 +0000 UTC - event for affinity-nodeport-timeout-sxkhs: {kubelet node2} Started: Started container affinity-nodeport-timeout
Oct 30 01:21:33.008: INFO: At 2021-10-30 01:19:14 +0000 UTC - event for affinity-nodeport-timeout-kklwd: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 01:21:33.008: INFO: At 2021-10-30 01:19:14 +0000 UTC - event for affinity-nodeport-timeout-kklwd: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 316.877138ms
Oct 30 01:21:33.008: INFO: At 2021-10-30 01:19:15 +0000 UTC - event for affinity-nodeport-timeout-kklwd: {kubelet node2} Started: Started container affinity-nodeport-timeout
Oct 30 01:21:33.008: INFO: At 2021-10-30 01:19:15 +0000 UTC - event for affinity-nodeport-timeout-kklwd: {kubelet node2} Created: Created container affinity-nodeport-timeout
Oct 30 01:21:33.008: INFO: At 2021-10-30 01:19:17 +0000 UTC - event for execpod-affinityqt95s: {default-scheduler } Scheduled: Successfully assigned services-2728/execpod-affinityqt95s to node2
Oct 30 01:21:33.008: INFO: At 2021-10-30 01:19:18 +0000 UTC - event for execpod-affinityqt95s: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 271.514759ms
Oct 30 01:21:33.008: INFO: At 2021-10-30 01:19:18 +0000 UTC - event for execpod-affinityqt95s: {kubelet node2} Created: Created container agnhost-container
Oct 30 01:21:33.008: INFO: At 2021-10-30 01:19:18 +0000 UTC - event for execpod-affinityqt95s: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 01:21:33.008: INFO: At 2021-10-30 01:19:19 +0000 UTC - event for execpod-affinityqt95s: {kubelet node2} Started: Started container agnhost-container
Oct 30 01:21:33.008: INFO: At 2021-10-30 01:21:23 +0000 UTC - event for affinity-nodeport-timeout-kklwd: {kubelet node2} Killing: Stopping container affinity-nodeport-timeout
Oct 30 01:21:33.008: INFO: At 2021-10-30 01:21:23 +0000 UTC - event for affinity-nodeport-timeout-stwzl: {kubelet node1} Killing: Stopping container affinity-nodeport-timeout
Oct 30 01:21:33.008: INFO: At 2021-10-30 01:21:23 +0000 UTC - event for affinity-nodeport-timeout-sxkhs: {kubelet node2} Killing: Stopping container affinity-nodeport-timeout
Oct 30 01:21:33.008: INFO: At 2021-10-30 01:21:23 +0000 UTC - event for execpod-affinityqt95s: {kubelet node2} Killing: Stopping container agnhost-container
Oct 30 01:21:33.010: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Oct 30 01:21:33.010: INFO: 
Oct 30 01:21:33.014: INFO: Logging node info for node master1
Oct 30 01:21:33.017: INFO: Node Info: &Node{ObjectMeta:{master1 b47c04d5-47a7-4a95-8e97-481e6e60af54 97377 0 2021-10-29 21:05:34 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-29 21:05:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-10-29 21:05:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-10-29 21:08:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2021-10-29 21:13:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:27 +0000 UTC,LastTransitionTime:2021-10-29 21:11:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 01:21:23 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 01:21:23 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 01:21:23 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 01:21:23 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5d3ed60c561e427db72df14bd9006ed0,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:01b9d6bc-4126-4864-a1df-901a1bee4906,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:b7d2fa8154ac5d9cff45866e4d3d210a7d390f8576611c301a2eed2b57273227 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 01:21:33.017: INFO: Logging kubelet events for node master1 Oct 30 01:21:33.020: INFO: Logging pods the kubelet 
thinks are on node master1
Oct 30 01:21:33.041: INFO: kube-scheduler-master1 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:21:33.041: INFO: Container kube-scheduler ready: true, restart count 0
Oct 30 01:21:33.041: INFO: kube-proxy-z5k8p started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:21:33.041: INFO: Container kube-proxy ready: true, restart count 1
Oct 30 01:21:33.041: INFO: coredns-8474476ff8-lczbr started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:21:33.041: INFO: Container coredns ready: true, restart count 1
Oct 30 01:21:33.041: INFO: container-registry-65d7c44b96-zzkfl started at 2021-10-29 21:12:56 +0000 UTC (0+2 container statuses recorded)
Oct 30 01:21:33.041: INFO: Container docker-registry ready: true, restart count 0
Oct 30 01:21:33.041: INFO: Container nginx ready: true, restart count 0
Oct 30 01:21:33.041: INFO: node-exporter-fv84w started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 01:21:33.041: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 30 01:21:33.041: INFO: Container node-exporter ready: true, restart count 0
Oct 30 01:21:33.041: INFO: kube-apiserver-master1 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:21:33.041: INFO: Container kube-apiserver ready: true, restart count 0
Oct 30 01:21:33.041: INFO: kube-controller-manager-master1 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:21:33.041: INFO: Container kube-controller-manager ready: true, restart count 2
Oct 30 01:21:33.041: INFO: kube-flannel-d4pmt started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 01:21:33.041: INFO: Init container install-cni ready: true, restart count 0
Oct 30 01:21:33.041: INFO: Container kube-flannel ready: true, restart count 2
Oct 30 01:21:33.041: INFO: kube-multus-ds-amd64-wgkfq started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 01:21:33.041: INFO: Container kube-multus ready: true, restart count 1
W1030 01:21:33.056161 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
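The node and pod dumps in this AfterEach come through the same API any client can use. A hedged client-go sketch (not the framework's actual dump code; the kubeconfig path and node name are simply the ones from this run, and error handling is trimmed to panics) reproducing the two queries, fetching the Node object and listing the pods scheduled onto it:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig the suite uses throughout this log.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Equivalent of "Logging node info for node master1".
	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "master1", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("node %s: kubelet %s, runtime %s\n", node.Name,
		node.Status.NodeInfo.KubeletVersion, node.Status.NodeInfo.ContainerRuntimeVersion)

	// Equivalent of "Logging pods the kubelet thinks are on node master1":
	// list pods across all namespaces whose spec.nodeName matches.
	pods, err := clientset.CoreV1().Pods("").List(context.TODO(),
		metav1.ListOptions{FieldSelector: "spec.nodeName=master1"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}

The field selector pushes the per-node filtering to the apiserver, which is why the framework can dump "pods the kubelet thinks are on" each node without listing the whole cluster client-side.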
Oct 30 01:21:33.125: INFO: Latency metrics for node master1 Oct 30 01:21:33.126: INFO: Logging node info for node master2 Oct 30 01:21:33.128: INFO: Node Info: &Node{ObjectMeta:{master2 208792d3-d365-4ddb-83d4-10e6e818079c 97522 0 2021-10-29 21:06:06 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-29 21:06:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-29 21:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-10-29 21:18:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:19 +0000 
UTC,LastTransitionTime:2021-10-29 21:11:19 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 01:21:30 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 01:21:30 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 01:21:30 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 01:21:30 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:12290c1916d84ddda20431c28083da6a,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:314e82b8-9747-4131-b883-220496309995,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 01:21:33.128: INFO: Logging kubelet events for node master2 Oct 30 01:21:33.131: INFO: Logging pods the kubelet thinks is on node master2 Oct 30 01:21:33.139: INFO: kube-flannel-qvqll started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 01:21:33.139: INFO: Init container install-cni ready: true, restart count 2 Oct 30 01:21:33.139: INFO: Container kube-flannel ready: true, restart count 1 Oct 30 01:21:33.139: INFO: kube-multus-ds-amd64-brkpk started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:33.139: INFO: Container kube-multus ready: true, restart count 1 Oct 30 01:21:33.139: INFO: node-exporter-lc9kk started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 01:21:33.139: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:21:33.139: INFO: Container node-exporter ready: true, restart count 0 Oct 30 01:21:33.139: INFO: kube-apiserver-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:33.139: INFO: Container kube-apiserver ready: true, restart count 0 Oct 30 01:21:33.139: INFO: kube-controller-manager-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:33.139: INFO: Container kube-controller-manager ready: true, restart count 3 Oct 30 01:21:33.139: INFO: kube-scheduler-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:33.139: INFO: Container kube-scheduler ready: true, restart count 2 Oct 30 01:21:33.139: INFO: kube-proxy-5gz4v started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:33.139: INFO: Container kube-proxy ready: true, restart count 2 W1030 01:21:33.155542 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Oct 30 01:21:33.216: INFO: Latency metrics for node master2 Oct 30 01:21:33.216: INFO: Logging node info for node master3 Oct 30 01:21:33.219: INFO: Node Info: &Node{ObjectMeta:{master3 168f1589-e029-47ae-b194-10215fc22d6a 97494 0 2021-10-29 21:06:17 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-29 21:06:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-29 21:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-10-29 21:16:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-10-29 21:16:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 
0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:36 +0000 UTC,LastTransitionTime:2021-10-29 21:11:36 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 01:21:29 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 01:21:29 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 01:21:29 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 01:21:29 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:de18dcb6cb4c493e9f4d987da2c8b3fd,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:89235c4b-b1f5-4716-bbd7-18b41c0bde74,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 01:21:33.219: INFO: Logging kubelet events for node master3 Oct 30 01:21:33.222: INFO: Logging pods the kubelet thinks is on node master3 Oct 30 01:21:33.234: INFO: kube-scheduler-master3 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:33.234: INFO: Container kube-scheduler ready: true, restart count 2 Oct 30 01:21:33.234: INFO: dns-autoscaler-7df78bfcfb-phsdx started at 2021-10-29 21:09:02 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:33.234: INFO: Container autoscaler ready: true, restart count 1 Oct 30 01:21:33.234: INFO: node-feature-discovery-controller-cff799f9f-qq7g4 started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:33.234: INFO: Container nfd-controller ready: true, restart count 0 Oct 30 01:21:33.234: INFO: kube-apiserver-master3 started at 2021-10-29 21:11:10 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:33.234: INFO: Container kube-apiserver ready: true, restart count 0 Oct 30 01:21:33.234: INFO: kube-proxy-r6fpx started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:33.234: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 01:21:33.234: INFO: kube-flannel-rbdlt started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 01:21:33.234: INFO: Init container install-cni ready: true, restart count 2 Oct 30 01:21:33.234: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 01:21:33.234: INFO: kube-multus-ds-amd64-bdwh9 started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:33.234: INFO: Container kube-multus ready: true, restart count 1 Oct 30 01:21:33.234: INFO: coredns-8474476ff8-wrwwv started at 2021-10-29 21:09:00 +0000 UTC (0+1 
container statuses recorded) Oct 30 01:21:33.234: INFO: Container coredns ready: true, restart count 1 Oct 30 01:21:33.234: INFO: prometheus-operator-585ccfb458-czbr2 started at 2021-10-29 21:21:06 +0000 UTC (0+2 container statuses recorded) Oct 30 01:21:33.234: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:21:33.234: INFO: Container prometheus-operator ready: true, restart count 0 Oct 30 01:21:33.234: INFO: node-exporter-bv946 started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 01:21:33.235: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:21:33.235: INFO: Container node-exporter ready: true, restart count 0 Oct 30 01:21:33.235: INFO: kube-controller-manager-master3 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:33.235: INFO: Container kube-controller-manager ready: true, restart count 1 W1030 01:21:33.249144 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 01:21:33.328: INFO: Latency metrics for node master3 Oct 30 01:21:33.328: INFO: Logging node info for node node1 Oct 30 01:21:33.331: INFO: Node Info: &Node{ObjectMeta:{node1 ddef9269-94c5-4165-81fb-a3b0c4ac5c75 97397 0 2021-10-29 21:07:27 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 
feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-29 21:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-29 21:19:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-29 21:19:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:38 +0000 UTC,LastTransitionTime:2021-10-29 21:11:38 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 01:21:24 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 01:21:24 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 01:21:24 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 01:21:24 +0000 UTC,LastTransitionTime:2021-10-29 21:08:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3bf4179125e4495c89c046ed0ae7baf7,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:ce868148-dc5e-4c7c-a555-42ee929547f7,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432289,},ContainerImage{Names:[localhost:30500/cmk@sha256:430843a71fa03faf488543c9f5b50d3efbef49988d6784f9f48b8077cc806f60 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2f1ff7ac170c0ac8079e232ea4ee89d23b7906d1b824d901927acb4e399c52c9 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 01:21:33.332: INFO: Logging kubelet events for node node1 Oct 30 01:21:33.334: INFO: Logging pods the kubelet thinks is on node node1 Oct 30 01:21:33.350: INFO: node-feature-discovery-worker-w5vdb started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:33.350: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 01:21:33.350: INFO: kube-flannel-phg88 started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 01:21:33.350: INFO: Init container install-cni ready: true, restart count 2 Oct 30 01:21:33.350: INFO: 
Container kube-flannel ready: true, restart count 2 Oct 30 01:21:33.350: INFO: kube-proxy-z5hqt started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:33.350: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 01:21:33.350: INFO: collectd-d45rv started at 2021-10-29 21:25:13 +0000 UTC (0+3 container statuses recorded) Oct 30 01:21:33.350: INFO: Container collectd ready: true, restart count 0 Oct 30 01:21:33.350: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 01:21:33.350: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 01:21:33.350: INFO: pod-projected-configmaps-38781d87-ab78-459b-90a5-f23fa39caa8e started at 2021-10-30 01:20:53 +0000 UTC (0+3 container statuses recorded) Oct 30 01:21:33.350: INFO: Container createcm-volume-test ready: true, restart count 0 Oct 30 01:21:33.350: INFO: Container delcm-volume-test ready: true, restart count 0 Oct 30 01:21:33.350: INFO: Container updcm-volume-test ready: true, restart count 0 Oct 30 01:21:33.350: INFO: sample-webhook-deployment-78988fc6cd-456tf started at 2021-10-30 01:21:29 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:33.350: INFO: Container sample-webhook ready: true, restart count 0 Oct 30 01:21:33.350: INFO: var-expansion-a561c289-19d0-4595-8755-a2088ef58de3 started at 2021-10-30 01:20:28 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:33.350: INFO: Container dapi-container ready: false, restart count 0 Oct 30 01:21:33.350: INFO: nginx-proxy-node1 started at 2021-10-29 21:07:28 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:33.350: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 01:21:33.350: INFO: cmk-init-discover-node1-n4mcc started at 2021-10-29 21:19:28 +0000 UTC (0+3 container statuses recorded) Oct 30 01:21:33.350: INFO: Container discover ready: false, restart count 0 Oct 30 01:21:33.350: INFO: Container init ready: false, restart count 0 Oct 30 01:21:33.350: INFO: Container install ready: false, restart count 0 Oct 30 01:21:33.350: INFO: cmk-89lqq started at 2021-10-29 21:20:10 +0000 UTC (0+2 container statuses recorded) Oct 30 01:21:33.350: INFO: Container nodereport ready: true, restart count 0 Oct 30 01:21:33.350: INFO: Container reconcile ready: true, restart count 0 Oct 30 01:21:33.350: INFO: kube-multus-ds-amd64-68wrz started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:33.350: INFO: Container kube-multus ready: true, restart count 1 Oct 30 01:21:33.350: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-t789r started at 2021-10-29 21:17:10 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:33.350: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 01:21:33.350: INFO: pod-handle-http-request started at 2021-10-30 01:21:23 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:33.350: INFO: Container agnhost-container ready: true, restart count 0 Oct 30 01:21:33.350: INFO: kubernetes-metrics-scraper-5558854cb-5rmjw started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:33.350: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 30 01:21:33.350: INFO: node-exporter-256wm started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 01:21:33.350: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:21:33.350: INFO: Container node-exporter ready: true, restart count 0 Oct 30 01:21:33.350: INFO: prometheus-k8s-0 started at 2021-10-29 21:21:17 +0000 UTC 
(0+4 container statuses recorded) Oct 30 01:21:33.350: INFO: Container config-reloader ready: true, restart count 0 Oct 30 01:21:33.350: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 30 01:21:33.350: INFO: Container grafana ready: true, restart count 0 Oct 30 01:21:33.350: INFO: Container prometheus ready: true, restart count 1 W1030 01:21:33.366174 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 01:21:33.598: INFO: Latency metrics for node node1 Oct 30 01:21:33.598: INFO: Logging node info for node node2 Oct 30 01:21:33.601: INFO: Node Info: &Node{ObjectMeta:{node2 3b49ad19-ba56-4f4a-b1fa-eef102063de9 97405 0 2021-10-29 21:07:28 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-29 21:08:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-29 21:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-29 21:19:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-29 21:19:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:34 +0000 UTC,LastTransitionTime:2021-10-29 21:11:34 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 01:21:25 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 01:21:25 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 01:21:25 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 01:21:25 +0000 UTC,LastTransitionTime:2021-10-29 21:08:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7283436dd9e34722a6e4df817add95ed,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:c219e7bd-582b-4d6c-b379-1161acc70676,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:430843a71fa03faf488543c9f5b50d3efbef49988d6784f9f48b8077cc806f60 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 
k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2f1ff7ac170c0ac8079e232ea4ee89d23b7906d1b824d901927acb4e399c52c9 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[localhost:30500/tasextender@sha256:b7d2fa8154ac5d9cff45866e4d3d210a7d390f8576611c301a2eed2b57273227 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 01:21:33.602: INFO: Logging kubelet events for node node2 Oct 30 01:21:33.605: INFO: Logging pods the kubelet thinks is on node node2 Oct 30 01:21:33.620: INFO: kube-flannel-f6s5v started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 01:21:33.620: INFO: Init container install-cni ready: true, restart count 2 Oct 30 01:21:33.620: INFO: Container kube-flannel ready: true, restart count 3 Oct 30 01:21:33.620: INFO: 
kube-multus-ds-amd64-7tvbl started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:33.620: INFO: Container kube-multus ready: true, restart count 1 Oct 30 01:21:33.620: INFO: cmk-webhook-6c9d5f8578-ffk66 started at 2021-10-29 21:20:11 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:33.620: INFO: Container cmk-webhook ready: true, restart count 0 Oct 30 01:21:33.620: INFO: pod-exec-websocket-9739953e-3271-4761-ab7c-6457f83de8c9 started at 2021-10-30 01:21:32 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:33.620: INFO: Container main ready: false, restart count 0 Oct 30 01:21:33.620: INFO: nginx-proxy-node2 started at 2021-10-29 21:07:28 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:33.620: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 01:21:33.620: INFO: kubernetes-dashboard-785dcbb76d-pbjjt started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:33.620: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 30 01:21:33.620: INFO: cmk-init-discover-node2-2fmmt started at 2021-10-29 21:19:48 +0000 UTC (0+3 container statuses recorded) Oct 30 01:21:33.620: INFO: Container discover ready: false, restart count 0 Oct 30 01:21:33.620: INFO: Container init ready: false, restart count 0 Oct 30 01:21:33.620: INFO: Container install ready: false, restart count 0 Oct 30 01:21:33.620: INFO: pod-with-prestop-exec-hook started at 2021-10-30 01:21:27 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:33.620: INFO: Container pod-with-prestop-exec-hook ready: true, restart count 0 Oct 30 01:21:33.620: INFO: oidc-discovery-validator started at 2021-10-30 01:21:09 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:33.620: INFO: Container oidc-discovery-validator ready: false, restart count 0 Oct 30 01:21:33.621: INFO: concurrent-27259280-tlflb started at 2021-10-30 01:20:00 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:33.621: INFO: Container c ready: false, restart count 0 Oct 30 01:21:33.621: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg started at 2021-10-29 21:17:10 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:33.621: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 01:21:33.622: INFO: cmk-8bpbf started at 2021-10-29 21:20:11 +0000 UTC (0+2 container statuses recorded) Oct 30 01:21:33.622: INFO: Container nodereport ready: true, restart count 0 Oct 30 01:21:33.622: INFO: Container reconcile ready: true, restart count 0 Oct 30 01:21:33.622: INFO: node-exporter-r77s4 started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 01:21:33.622: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:21:33.622: INFO: Container node-exporter ready: true, restart count 0 Oct 30 01:21:33.622: INFO: tas-telemetry-aware-scheduling-84ff454dfb-989mh started at 2021-10-29 21:24:23 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:33.622: INFO: Container tas-extender ready: true, restart count 0 Oct 30 01:21:33.622: INFO: adopt-release-65drz started at 2021-10-30 01:21:21 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:33.622: INFO: Container c ready: true, restart count 0 Oct 30 01:21:33.622: INFO: collectd-flvhl started at 2021-10-29 21:25:13 +0000 UTC (0+3 container statuses recorded) Oct 30 01:21:33.622: INFO: Container collectd ready: true, restart count 0 Oct 30 01:21:33.622: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 01:21:33.622: INFO: Container 
rbac-proxy ready: true, restart count 0 Oct 30 01:21:33.622: INFO: adopt-release-nnjww started at 2021-10-30 01:21:21 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:33.622: INFO: Container c ready: true, restart count 0 Oct 30 01:21:33.622: INFO: kube-proxy-76285 started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:33.622: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 01:21:33.622: INFO: forbid-27259278-ws8bb started at 2021-10-30 01:18:00 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:33.622: INFO: Container c ready: true, restart count 0 Oct 30 01:21:33.622: INFO: node-feature-discovery-worker-h6lcp started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:33.622: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 01:21:33.622: INFO: adopt-release-w5lt4 started at 2021-10-30 01:21:30 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:33.622: INFO: Container c ready: false, restart count 0 Oct 30 01:21:33.622: INFO: busybox-839dfaab-f3ac-49e4-8560-aa70d63f7aa7 started at 2021-10-30 01:18:48 +0000 UTC (0+1 container statuses recorded) Oct 30 01:21:33.622: INFO: Container busybox ready: true, restart count 0 W1030 01:21:33.635429 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 01:21:34.815: INFO: Latency metrics for node node2 Oct 30 01:21:34.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2728" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [151.094 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:21:23.723: Unexpected error: <*errors.errorString | 0xc001d56340>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31165 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31165 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2493 ------------------------------ {"msg":"FAILED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":34,"skipped":814,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:21:23.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
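------------------------------
For context on the failure above: the test drives a NodePort Service with ClientIP session affinity plus an explicit timeout, then polls the node endpoint (10.10.190.207:31165 in this run) until it answers. A minimal sketch of that Service shape using the k8s.io/api types; the name, selector, port, and timeout below are illustrative, not values from this run:

package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

// sessionAffinityService: repeated requests from one client IP stick to the
// same backend until TimeoutSeconds of inactivity elapse; the e2e test then
// verifies that stickiness window through the NodePort.
func sessionAffinityService() *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-nodeport-timeout"},
		Spec: corev1.ServiceSpec{
			Type:            corev1.ServiceTypeNodePort,
			Selector:        map[string]string{"app": "affinity-backend"},
			SessionAffinity: corev1.ServiceAffinityClientIP,
			SessionAffinityConfig: &corev1.SessionAffinityConfig{
				ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: int32Ptr(10)},
			},
			Ports: []corev1.ServicePort{{Port: 80}},
		},
	}
}

The error itself ("service is not reachable within 2m0s timeout") suggests the NodePort data path never answered at all, rather than an affinity-logic assertion failing.
------------------------------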
Oct 30 01:21:23.123: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:21:25.127: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:21:27.126: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Oct 30 01:21:27.140: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:21:29.145: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:21:31.143: INFO: The status of Pod pod-with-prestop-exec-hook is Running (Ready = true) STEP: delete the pod with lifecycle hook Oct 30 01:21:31.151: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Oct 30 01:21:31.153: INFO: Pod pod-with-prestop-exec-hook still exists Oct 30 01:21:33.154: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Oct 30 01:21:33.157: INFO: Pod pod-with-prestop-exec-hook still exists Oct 30 01:21:35.155: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Oct 30 01:21:35.157: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:21:35.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5736" for this suite. 
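------------------------------
The pod-with-prestop-exec-hook pod above follows the standard lifecycle-hook pattern: on deletion, the kubelet runs the preStop handler to completion before stopping the container, and the hook's request is what the pod-handle-http-request server records. A sketch of that shape (image and command are illustrative; in the v1.21 API used here the handler type is corev1.Handler, renamed LifecycleHandler in later releases):

package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// preStopPod is deleted by the test; the kubelet first executes the preStop
// command, which reports back to the hook-handler pod at targetIP.
func preStopPod(targetIP string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-prestop-exec-hook",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.Handler{
						Exec: &corev1.ExecAction{
							// Illustrative: ping the handler pod so the test can
							// observe that the hook ran before termination.
							Command: []string{"sh", "-c", "curl http://" + targetIP + ":8080/echo?msg=prestop"},
						},
					},
				},
			}},
		},
	}
}
------------------------------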
• [SLOW TEST:12.087 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":197,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:21:32.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:21:32.597: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes Oct 30 01:21:32.611: INFO: The status of Pod pod-exec-websocket-9739953e-3271-4761-ab7c-6457f83de8c9 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:21:34.613: INFO: The status of Pod pod-exec-websocket-9739953e-3271-4761-ab7c-6457f83de8c9 is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:21:36.615: INFO: The status of Pod pod-exec-websocket-9739953e-3271-4761-ab7c-6457f83de8c9 is Running (Ready = true) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:21:36.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7065" for this suite. 
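------------------------------
The websocket test above exercises the pods/exec subresource directly over a websocket connection. The same endpoint is more commonly driven through client-go's SPDY executor; a minimal sketch of that equivalent path (function and parameter names are mine, not the test's):

package sketches

import (
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	restclient "k8s.io/client-go/rest"
	"k8s.io/client-go/tools/remotecommand"
)

// execInPod builds the pods/<name>/exec URL and streams the command's output,
// which is what the e2e test does by hand over a raw websocket.
func execInPod(config *restclient.Config, client kubernetes.Interface, ns, pod string, cmd []string) error {
	req := client.CoreV1().RESTClient().Post().
		Resource("pods").Namespace(ns).Name(pod).SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Command: cmd,
			Stdout:  true,
			Stderr:  true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		return err
	}
	return exec.Stream(remotecommand.StreamOptions{Stdout: os.Stdout, Stderr: os.Stderr})
}
------------------------------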
• ------------------------------ {"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":428,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} S ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:21:35.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Oct 30 01:21:39.304: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:21:39.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5593" for this suite. • ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:21:37.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 30 01:21:37.032: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4a726e4f-746f-4b47-bd16-cfb91d505447" in namespace "projected-6261" to be "Succeeded or Failed" Oct 30 01:21:37.035: INFO: Pod "downwardapi-volume-4a726e4f-746f-4b47-bd16-cfb91d505447": Phase="Pending", Reason="", readiness=false. Elapsed: 2.642273ms Oct 30 01:21:39.040: INFO: Pod "downwardapi-volume-4a726e4f-746f-4b47-bd16-cfb91d505447": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007364121s Oct 30 01:21:41.044: INFO: Pod "downwardapi-volume-4a726e4f-746f-4b47-bd16-cfb91d505447": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011796455s STEP: Saw pod success Oct 30 01:21:41.044: INFO: Pod "downwardapi-volume-4a726e4f-746f-4b47-bd16-cfb91d505447" satisfied condition "Succeeded or Failed" Oct 30 01:21:41.048: INFO: Trying to get logs from node node2 pod downwardapi-volume-4a726e4f-746f-4b47-bd16-cfb91d505447 container client-container: STEP: delete the pod Oct 30 01:21:42.021: INFO: Waiting for pod downwardapi-volume-4a726e4f-746f-4b47-bd16-cfb91d505447 to disappear Oct 30 01:21:42.023: INFO: Pod downwardapi-volume-4a726e4f-746f-4b47-bd16-cfb91d505447 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:21:42.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6261" for this suite. • [SLOW TEST:5.030 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":429,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:21:28.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 30 01:21:29.345: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 30 01:21:31.354: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153689, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153689, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153689, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153689, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 
30 01:21:34.366: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:21:44.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8441" for this suite. STEP: Destroying namespace "webhook-8441-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.618 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":39,"skipped":806,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSS ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":233,"failed":0} [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:21:39.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-027da403-b3b5-4239-8f3e-2feb6714b0d0 STEP: Creating a pod to test consume configMaps Oct 30 01:21:39.356: INFO: Waiting up to 5m0s for pod "pod-configmaps-faabcb4b-5ed6-417a-8785-0abfd0c202b0" in namespace "configmap-7472" to be "Succeeded or Failed" Oct 30 01:21:39.358: INFO: Pod "pod-configmaps-faabcb4b-5ed6-417a-8785-0abfd0c202b0": 
Phase="Pending", Reason="", readiness=false. Elapsed: 2.369675ms Oct 30 01:21:41.364: INFO: Pod "pod-configmaps-faabcb4b-5ed6-417a-8785-0abfd0c202b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008666877s Oct 30 01:21:43.373: INFO: Pod "pod-configmaps-faabcb4b-5ed6-417a-8785-0abfd0c202b0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0169144s Oct 30 01:21:45.376: INFO: Pod "pod-configmaps-faabcb4b-5ed6-417a-8785-0abfd0c202b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.019804053s STEP: Saw pod success Oct 30 01:21:45.376: INFO: Pod "pod-configmaps-faabcb4b-5ed6-417a-8785-0abfd0c202b0" satisfied condition "Succeeded or Failed" Oct 30 01:21:45.378: INFO: Trying to get logs from node node2 pod pod-configmaps-faabcb4b-5ed6-417a-8785-0abfd0c202b0 container agnhost-container: STEP: delete the pod Oct 30 01:21:45.391: INFO: Waiting for pod pod-configmaps-faabcb4b-5ed6-417a-8785-0abfd0c202b0 to disappear Oct 30 01:21:45.393: INFO: Pod pod-configmaps-faabcb4b-5ed6-417a-8785-0abfd0c202b0 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:21:45.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7472" for this suite. • [SLOW TEST:6.078 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:21:09.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:21:09.369: INFO: created pod Oct 30 01:21:09.369: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-9516" to be "Succeeded or Failed" Oct 30 01:21:09.375: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030272ms Oct 30 01:21:11.378: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009205197s Oct 30 01:21:13.381: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01276791s Oct 30 01:21:15.385: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.016423738s STEP: Saw pod success Oct 30 01:21:15.385: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed" Oct 30 01:21:45.386: INFO: polling logs Oct 30 01:21:45.393: INFO: Pod logs: 2021/10/30 01:21:13 OK: Got token 2021/10/30 01:21:13 validating with in-cluster discovery 2021/10/30 01:21:13 OK: got issuer https://kubernetes.default.svc.cluster.local 2021/10/30 01:21:13 Full, not-validated claims: openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-9516:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1635557469, NotBefore:1635556869, IssuedAt:1635556869, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-9516", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"f924efa8-075c-4d6e-b1a9-2c6269d60b4d"}}} 2021/10/30 01:21:13 OK: Constructed OIDC provider for issuer https://kubernetes.default.svc.cluster.local 2021/10/30 01:21:13 OK: Validated signature on JWT 2021/10/30 01:21:13 OK: Got valid claims from token! 2021/10/30 01:21:13 Full, validated claims: &openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-9516:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1635557469, NotBefore:1635556869, IssuedAt:1635556869, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-9516", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"f924efa8-075c-4d6e-b1a9-2c6269d60b4d"}}} Oct 30 01:21:45.393: INFO: completed pod [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:21:45.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9516" for this suite. 
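------------------------------
The claims logged above come from a projected service-account token: the validator pod mounts a JWT minted for the audience "oidc-discovery-test" and checks it against the issuer's published OIDC discovery document. The volume shape, sketched with k8s.io/api types (the 600s expiry matches the Expiry minus NotBefore gap in the logged claims; the volume name is illustrative):

package sketches

import corev1 "k8s.io/api/core/v1"

// projectedTokenVolume mounts a short-lived, audience-scoped token at
// <mountPath>/token; the kubelet rotates it before ExpirationSeconds elapses.
func projectedTokenVolume() corev1.Volume {
	expiry := int64(600)
	return corev1.Volume{
		Name: "oidc-token",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						Audience:          "oidc-discovery-test",
						ExpirationSeconds: &expiry,
						Path:              "token",
					},
				}},
			},
		},
	}
}
------------------------------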
• [SLOW TEST:36.069 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":233,"failed":0} S ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":-1,"completed":51,"skipped":964,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:21:45.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Oct 30 01:21:45.435: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2138 3ea85235-0f55-446e-887c-2d163c255588 97859 0 2021-10-30 01:21:45 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-10-30 01:21:45 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Oct 30 01:21:45.435: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2138 3ea85235-0f55-446e-887c-2d163c255588 97862 0 2021-10-30 01:21:45 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-10-30 01:21:45 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Oct 30 01:21:45.448: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2138 3ea85235-0f55-446e-887c-2d163c255588 97864 0 2021-10-30 01:21:45 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-10-30 01:21:45 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 30 01:21:45.449: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2138 3ea85235-0f55-446e-887c-2d163c255588 97866 0 2021-10-30 01:21:45 +0000 UTC 
map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-10-30 01:21:45 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:21:45.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2138" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":21,"skipped":234,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:21:42.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-3fa37e70-1471-4a67-9895-598bb3f657e0 STEP: Creating a pod to test consume configMaps Oct 30 01:21:42.090: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b28fee4f-e0bb-473e-a1c2-a005298e9816" in namespace "projected-2221" to be "Succeeded or Failed" Oct 30 01:21:42.092: INFO: Pod "pod-projected-configmaps-b28fee4f-e0bb-473e-a1c2-a005298e9816": Phase="Pending", Reason="", readiness=false. Elapsed: 2.141009ms Oct 30 01:21:44.097: INFO: Pod "pod-projected-configmaps-b28fee4f-e0bb-473e-a1c2-a005298e9816": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006475871s Oct 30 01:21:46.100: INFO: Pod "pod-projected-configmaps-b28fee4f-e0bb-473e-a1c2-a005298e9816": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009414861s STEP: Saw pod success Oct 30 01:21:46.100: INFO: Pod "pod-projected-configmaps-b28fee4f-e0bb-473e-a1c2-a005298e9816" satisfied condition "Succeeded or Failed" Oct 30 01:21:46.102: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-b28fee4f-e0bb-473e-a1c2-a005298e9816 container agnhost-container: STEP: delete the pod Oct 30 01:21:46.115: INFO: Waiting for pod pod-projected-configmaps-b28fee4f-e0bb-473e-a1c2-a005298e9816 to disappear Oct 30 01:21:46.117: INFO: Pod pod-projected-configmaps-b28fee4f-e0bb-473e-a1c2-a005298e9816 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:21:46.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2221" for this suite. 
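------------------------------
The Watchers test above relies on a core API-machinery guarantee: a watch opened with the resourceVersion last seen by a closed watch replays every intervening event (here the second MODIFIED and the DELETED), so observers can survive gaps without missing changes. In client-go terms, a sketch (names are mine):

package sketches

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
)

// restartWatch resumes watching ConfigMaps from the last observed
// resourceVersion; events since that version are delivered first.
func restartWatch(client kubernetes.Interface, ns, lastRV string) (watch.Interface, error) {
	return client.CoreV1().ConfigMaps(ns).Watch(context.TODO(), metav1.ListOptions{
		ResourceVersion: lastRV,
	})
}
------------------------------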
• ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":440,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:21:44.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-9d69af73-46b4-4913-a36c-ffde79a64116 STEP: Creating a pod to test consume secrets Oct 30 01:21:44.542: INFO: Waiting up to 5m0s for pod "pod-secrets-6ba9ff33-6922-4873-84af-03229a677a4a" in namespace "secrets-9796" to be "Succeeded or Failed" Oct 30 01:21:44.544: INFO: Pod "pod-secrets-6ba9ff33-6922-4873-84af-03229a677a4a": Phase="Pending", Reason="", readiness=false. Elapsed: 1.828535ms Oct 30 01:21:46.548: INFO: Pod "pod-secrets-6ba9ff33-6922-4873-84af-03229a677a4a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005646521s Oct 30 01:21:48.552: INFO: Pod "pod-secrets-6ba9ff33-6922-4873-84af-03229a677a4a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009954353s STEP: Saw pod success Oct 30 01:21:48.552: INFO: Pod "pod-secrets-6ba9ff33-6922-4873-84af-03229a677a4a" satisfied condition "Succeeded or Failed" Oct 30 01:21:48.555: INFO: Trying to get logs from node node2 pod pod-secrets-6ba9ff33-6922-4873-84af-03229a677a4a container secret-volume-test: STEP: delete the pod Oct 30 01:21:48.568: INFO: Waiting for pod pod-secrets-6ba9ff33-6922-4873-84af-03229a677a4a to disappear Oct 30 01:21:48.570: INFO: Pod pod-secrets-6ba9ff33-6922-4873-84af-03229a677a4a no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:21:48.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9796" for this suite. 
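------------------------------
Both defaultMode tests above (ConfigMap earlier, Secret here) follow one pattern: mount the object as a volume with an explicit DefaultMode, then have the pod stat the projected files and assert the permission bits. A sketch of the Secret variant (the 0400 mode is illustrative, not the test's value):

package sketches

import corev1 "k8s.io/api/core/v1"

// secretVolumeWithMode: each key of the Secret becomes a file whose mode is
// DefaultMode unless an item-level Mode overrides it.
func secretVolumeWithMode(secretName string) corev1.Volume {
	mode := int32(0400)
	return corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName:  secretName,
				DefaultMode: &mode,
			},
		},
	}
}
------------------------------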
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":815,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:21:48.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of events Oct 30 01:21:48.610: INFO: created test-event-1 Oct 30 01:21:48.613: INFO: created test-event-2 Oct 30 01:21:48.616: INFO: created test-event-3 STEP: get a list of Events with a label in the current namespace STEP: delete collection of events Oct 30 01:21:48.618: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity Oct 30 01:21:48.630: INFO: requesting list of events to confirm quantity [AfterEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:21:48.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-216" for this suite. • ------------------------------ {"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":-1,"completed":41,"skipped":820,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:21:46.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Oct 30 01:21:50.185: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:21:50.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-441" for this suite. 
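------------------------------
The FallbackToLogsOnError case above works like this: when a container fails and its terminationMessagePath file is empty, the kubelet copies the tail of the container's log (capped at a few kilobytes) into the termination message, which is how "DONE" ends up matched. Container sketch (image and command are illustrative):

package sketches

import corev1 "k8s.io/api/core/v1"

// fallbackToLogsContainer exits non-zero without writing a termination-message
// file, so the log tail ("DONE") becomes the termination message.
func fallbackToLogsContainer() corev1.Container {
	return corev1.Container{
		Name:                     "termination-message-container",
		Image:                    "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
		Command:                  []string{"sh", "-c", "echo DONE; exit 1"},
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	}
}
------------------------------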
• ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":445,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:21:45.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a Pod with a 'name' label pod-adoption-release is created Oct 30 01:21:45.468: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:21:47.472: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:21:49.474: INFO: The status of Pod pod-adoption-release is Running (Ready = true) STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Oct 30 01:21:50.488: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:21:51.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-389" for this suite. 
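------------------------------
Adoption and release in the ReplicaSet test above are driven entirely by labels and ownerReferences: the controller adopts a live pod the moment its labels satisfy the ReplicaSet's selector, writing itself in as the managing controller, and orphans the pod again when a label edit breaks the match. A small sketch of how that ownership is read (helper name is mine):

package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// managingController returns the controller ownerReference, which appears on
// adoption and disappears again on release.
func managingController(pod *corev1.Pod) *metav1.OwnerReference {
	return metav1.GetControllerOf(pod)
}
------------------------------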
• [SLOW TEST:6.088 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":52,"skipped":976,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:21:50.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on node default medium Oct 30 01:21:50.262: INFO: Waiting up to 5m0s for pod "pod-38a4b5e8-a70a-4c16-805d-e1d6d0ad9eb0" in namespace "emptydir-5661" to be "Succeeded or Failed" Oct 30 01:21:50.264: INFO: Pod "pod-38a4b5e8-a70a-4c16-805d-e1d6d0ad9eb0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004922ms Oct 30 01:21:52.267: INFO: Pod "pod-38a4b5e8-a70a-4c16-805d-e1d6d0ad9eb0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005069776s Oct 30 01:21:54.271: INFO: Pod "pod-38a4b5e8-a70a-4c16-805d-e1d6d0ad9eb0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008395082s Oct 30 01:21:56.276: INFO: Pod "pod-38a4b5e8-a70a-4c16-805d-e1d6d0ad9eb0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014065546s STEP: Saw pod success Oct 30 01:21:56.276: INFO: Pod "pod-38a4b5e8-a70a-4c16-805d-e1d6d0ad9eb0" satisfied condition "Succeeded or Failed" Oct 30 01:21:56.281: INFO: Trying to get logs from node node1 pod pod-38a4b5e8-a70a-4c16-805d-e1d6d0ad9eb0 container test-container: STEP: delete the pod Oct 30 01:21:56.291: INFO: Waiting for pod pod-38a4b5e8-a70a-4c16-805d-e1d6d0ad9eb0 to disappear Oct 30 01:21:56.294: INFO: Pod pod-38a4b5e8-a70a-4c16-805d-e1d6d0ad9eb0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:21:56.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5661" for this suite. 
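------------------------------
The (non-root,0644,default) case above decodes as: run the test container as a non-root UID, write a file with mode 0644 into an emptyDir on the default (node-disk) medium, and verify the mode and content read back correctly. Pod sketch (UID, image, and command are illustrative):

package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirPod writes and stats a file in an emptyDir while running non-root.
func emptyDirPod() *corev1.Pod {
	uid := int64(1000)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-sketch"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "k8s.gcr.io/e2e-test-images/agnhost:2.32",
				Command:      []string{"sh", "-c", "echo data > /mnt/d/f && ls -l /mnt/d/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "d", MountPath: "/mnt/d"}},
			}},
			Volumes: []corev1.Volume{{
				Name:         "d",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
}
------------------------------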
• [SLOW TEST:6.067 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":463,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:21:45.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:21:45.498: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Oct 30 01:21:45.505: INFO: Pod name sample-pod: Found 0 pods out of 1 Oct 30 01:21:50.508: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Oct 30 01:21:50.508: INFO: Creating deployment "test-rolling-update-deployment" Oct 30 01:21:50.511: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Oct 30 01:21:50.519: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Oct 30 01:21:52.524: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Oct 30 01:21:52.526: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153710, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153710, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153710, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153710, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-585b757574\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:21:54.530: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153710, loc:(*time.Location)(0x9e12f00)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153710, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153710, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153710, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-585b757574\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:21:56.529: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153710, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153710, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153710, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771153710, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-585b757574\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 30 01:21:58.531: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Oct 30 01:21:58.538: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-6952 0aa32177-e8ea-43ee-8721-bc96c9d4df71 98204 1 2021-10-30 01:21:50 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2021-10-30 01:21:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-10-30 01:21:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc001b85fb8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-10-30 01:21:50 +0000 UTC,LastTransitionTime:2021-10-30 01:21:50 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-585b757574" has successfully progressed.,LastUpdateTime:2021-10-30 01:21:58 +0000 UTC,LastTransitionTime:2021-10-30 01:21:50 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Oct 30 01:21:58.541: INFO: New ReplicaSet "test-rolling-update-deployment-585b757574" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-585b757574 deployment-6952 0f33a089-dfad-4532-b2b0-52f7231e056c 98193 1 2021-10-30 01:21:50 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 0aa32177-e8ea-43ee-8721-bc96c9d4df71 0xc004ab2467 0xc004ab2468}] [] [{kube-controller-manager Update apps/v1 2021-10-30 01:21:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0aa32177-e8ea-43ee-8721-bc96c9d4df71\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod,pod-template-hash: 585b757574,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004ab24f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Oct 30 01:21:58.541: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Oct 30 01:21:58.541: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-6952 b94b26fc-3293-43db-a329-5dbb89a31074 98203 2 2021-10-30 01:21:45 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 0aa32177-e8ea-43ee-8721-bc96c9d4df71 0xc004ab2357 0xc004ab2358}] [] [{e2e.test Update apps/v1 2021-10-30 01:21:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-10-30 01:21:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0aa32177-e8ea-43ee-8721-bc96c9d4df71\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004ab23f8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 30 01:21:58.545: INFO: Pod "test-rolling-update-deployment-585b757574-s6kwx" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-585b757574-s6kwx test-rolling-update-deployment-585b757574- deployment-6952 48b9ea52-ffba-4e43-85f9-707729fa70df 98192 0 2021-10-30 01:21:50 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.194" ], "mac": "f6:fe:13:29:9b:69", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.194" ], "mac": "f6:fe:13:29:9b:69", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-rolling-update-deployment-585b757574 0f33a089-dfad-4532-b2b0-52f7231e056c 0xc004ab290f 0xc004ab2920}] [] [{kube-controller-manager Update v1 2021-10-30 01:21:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f33a089-dfad-4532-b2b0-52f7231e056c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-30 01:21:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-30 01:21:58 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.194\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hbbgv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hbbgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:21:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:21:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:21:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-30 01:21:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.194,StartTime:2021-10-30 01:21:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-30 01:21:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://00fa087e31629d67d443f7e3dfad59a2f443f031460657fe0622e08aaf807899,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.194,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:21:58.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6952" for this suite. 
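The RollingUpdateDeployment spec above replaces the old ReplicaSet's pods with new ones while keeping the Deployment available; the full pod dump shows the replacement pod (agnhost image, scheduled to node1, Ready=true) that the test verifies before teardown. As a rough illustration of the kind of manifest such a spec drives, a minimal RollingUpdate Deployment might look like the sketch below (the name and labels are hypothetical; only the agnhost image is taken from the dump above):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: rollingupdate-demo        # hypothetical; the e2e framework generates its own names
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rollingupdate-demo
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0           # keep the old pod serving until its replacement is Ready
      maxSurge: 1                 # allow one extra pod during the rollout
  template:
    metadata:
      labels:
        app: rollingupdate-demo
    spec:
      containers:
      - name: agnhost
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32   # image used by the pod in the dump above

With maxUnavailable: 0 and maxSurge: 1 the controller creates the new pod first and deletes the old one only after the replacement reports Ready, which is the "delete old pods and create new ones" behavior asserted here.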
• [SLOW TEST:13.075 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":22,"skipped":240,"failed":0} SSSSSS ------------------------------ Oct 30 01:21:58.568: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:21:51.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 30 01:21:51.572: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ac758187-2f34-4fff-81a4-afda72182010" in namespace "projected-4625" to be "Succeeded or Failed" Oct 30 01:21:51.575: INFO: Pod "downwardapi-volume-ac758187-2f34-4fff-81a4-afda72182010": Phase="Pending", Reason="", readiness=false. Elapsed: 2.376846ms Oct 30 01:21:53.577: INFO: Pod "downwardapi-volume-ac758187-2f34-4fff-81a4-afda72182010": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004702349s Oct 30 01:21:55.580: INFO: Pod "downwardapi-volume-ac758187-2f34-4fff-81a4-afda72182010": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008177479s Oct 30 01:21:57.584: INFO: Pod "downwardapi-volume-ac758187-2f34-4fff-81a4-afda72182010": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011902053s Oct 30 01:21:59.589: INFO: Pod "downwardapi-volume-ac758187-2f34-4fff-81a4-afda72182010": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.017094007s STEP: Saw pod success Oct 30 01:21:59.589: INFO: Pod "downwardapi-volume-ac758187-2f34-4fff-81a4-afda72182010" satisfied condition "Succeeded or Failed" Oct 30 01:21:59.592: INFO: Trying to get logs from node node1 pod downwardapi-volume-ac758187-2f34-4fff-81a4-afda72182010 container client-container: STEP: delete the pod Oct 30 01:22:00.256: INFO: Waiting for pod downwardapi-volume-ac758187-2f34-4fff-81a4-afda72182010 to disappear Oct 30 01:22:00.259: INFO: Pod downwardapi-volume-ac758187-2f34-4fff-81a4-afda72182010 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:22:00.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4625" for this suite. 
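The DefaultMode case above builds a projected downward API volume and then reads the mode bits of the projected file from inside the container (the "Trying to get logs from node node1 ... container client-container" step). The service-account projection in the earlier pod dump uses DefaultMode:*420, i.e. 0644 octal; a hand-written equivalent that pins a non-default mode might look like this sketch (pod name, busybox image, and the shell check are illustrative, not what the test itself runs):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.35                # illustrative; the e2e test uses an agnhost helper
    command: ["sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400                # applied to every projected file without an explicit per-item mode
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name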
• [SLOW TEST:8.722 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":53,"skipped":993,"failed":0} Oct 30 01:22:00.268: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:21:56.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 30 01:21:56.342: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4465e8a8-da4d-4091-b040-e6665fa1ca42" in namespace "projected-6018" to be "Succeeded or Failed" Oct 30 01:21:56.344: INFO: Pod "downwardapi-volume-4465e8a8-da4d-4091-b040-e6665fa1ca42": Phase="Pending", Reason="", readiness=false. Elapsed: 1.810344ms Oct 30 01:21:58.347: INFO: Pod "downwardapi-volume-4465e8a8-da4d-4091-b040-e6665fa1ca42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004829529s Oct 30 01:22:00.351: INFO: Pod "downwardapi-volume-4465e8a8-da4d-4091-b040-e6665fa1ca42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009018603s STEP: Saw pod success Oct 30 01:22:00.351: INFO: Pod "downwardapi-volume-4465e8a8-da4d-4091-b040-e6665fa1ca42" satisfied condition "Succeeded or Failed" Oct 30 01:22:00.353: INFO: Trying to get logs from node node2 pod downwardapi-volume-4465e8a8-da4d-4091-b040-e6665fa1ca42 container client-container: STEP: delete the pod Oct 30 01:22:00.370: INFO: Waiting for pod downwardapi-volume-4465e8a8-da4d-4091-b040-e6665fa1ca42 to disappear Oct 30 01:22:00.371: INFO: Pod downwardapi-volume-4465e8a8-da4d-4091-b040-e6665fa1ca42 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:22:00.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6018" for this suite. 
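The second downward API case relies on a documented fallback: when a container declares no CPU limit, limits.cpu exposed through the downward API resolves to the node's allocatable CPU instead of failing. A minimal sketch of that wiring, with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-demo     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.35          # illustrative; the e2e test uses its own helper image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # no resources.limits.cpu is set, so the projected file reports node allocatable CPU
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m        # report the value in millicores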
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":467,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} Oct 30 01:22:00.381: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:21:48.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:21:48.669: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Oct 30 01:21:57.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6176 --namespace=crd-publish-openapi-6176 create -f -' Oct 30 01:21:57.652: INFO: stderr: "" Oct 30 01:21:57.652: INFO: stdout: "e2e-test-crd-publish-openapi-9839-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Oct 30 01:21:57.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6176 --namespace=crd-publish-openapi-6176 delete e2e-test-crd-publish-openapi-9839-crds test-foo' Oct 30 01:21:57.800: INFO: stderr: "" Oct 30 01:21:57.800: INFO: stdout: "e2e-test-crd-publish-openapi-9839-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Oct 30 01:21:57.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6176 --namespace=crd-publish-openapi-6176 apply -f -' Oct 30 01:21:58.078: INFO: stderr: "" Oct 30 01:21:58.078: INFO: stdout: "e2e-test-crd-publish-openapi-9839-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Oct 30 01:21:58.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6176 --namespace=crd-publish-openapi-6176 delete e2e-test-crd-publish-openapi-9839-crds test-foo' Oct 30 01:21:58.235: INFO: stderr: "" Oct 30 01:21:58.235: INFO: stdout: "e2e-test-crd-publish-openapi-9839-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Oct 30 01:21:58.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6176 --namespace=crd-publish-openapi-6176 create -f -' Oct 30 01:21:58.544: INFO: rc: 1 Oct 30 01:21:58.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6176 --namespace=crd-publish-openapi-6176 apply -f -' Oct 30 01:21:58.842: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Oct 30 01:21:58.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6176 --namespace=crd-publish-openapi-6176 create -f -' Oct 30 
01:21:59.135: INFO: rc: 1 Oct 30 01:21:59.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6176 --namespace=crd-publish-openapi-6176 apply -f -' Oct 30 01:21:59.443: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Oct 30 01:21:59.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6176 explain e2e-test-crd-publish-openapi-9839-crds' Oct 30 01:21:59.766: INFO: stderr: "" Oct 30 01:21:59.766: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9839-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Oct 30 01:21:59.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6176 explain e2e-test-crd-publish-openapi-9839-crds.metadata' Oct 30 01:22:00.110: INFO: stderr: "" Oct 30 01:22:00.111: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9839-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. 
Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. 
Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. 
More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Oct 30 01:22:00.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6176 explain e2e-test-crd-publish-openapi-9839-crds.spec' Oct 30 01:22:00.416: INFO: stderr: "" Oct 30 01:22:00.416: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9839-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Oct 30 01:22:00.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6176 explain e2e-test-crd-publish-openapi-9839-crds.spec.bars' Oct 30 01:22:00.732: INFO: stderr: "" Oct 30 01:22:00.732: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9839-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Oct 30 01:22:00.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6176 explain e2e-test-crd-publish-openapi-9839-crds.spec.bars2' Oct 30 01:22:01.081: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:22:04.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6176" for this suite. • [SLOW TEST:15.969 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":42,"skipped":824,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} Oct 30 01:22:04.620: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:20:53.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name cm-test-opt-del-415eda54-08b8-4855-824b-f69e660a0d42 STEP: Creating configMap with name cm-test-opt-upd-1d9273c0-33bf-41d3-97cd-7aaba5d13a29 STEP: Creating the pod Oct 30 01:20:53.920: INFO: The status of Pod pod-projected-configmaps-38781d87-ab78-459b-90a5-f23fa39caa8e is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:20:55.923: INFO: The 
status of Pod pod-projected-configmaps-38781d87-ab78-459b-90a5-f23fa39caa8e is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:20:57.922: INFO: The status of Pod pod-projected-configmaps-38781d87-ab78-459b-90a5-f23fa39caa8e is Pending, waiting for it to be Running (with Ready = true) Oct 30 01:20:59.924: INFO: The status of Pod pod-projected-configmaps-38781d87-ab78-459b-90a5-f23fa39caa8e is Running (Ready = true) STEP: Deleting configmap cm-test-opt-del-415eda54-08b8-4855-824b-f69e660a0d42 STEP: Updating configmap cm-test-opt-upd-1d9273c0-33bf-41d3-97cd-7aaba5d13a29 STEP: Creating configMap with name cm-test-opt-create-2a42306a-0b74-45f6-a74e-c9ba56c4504f STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:22:08.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3886" for this suite. • [SLOW TEST:74.448 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":574,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} Oct 30 01:22:08.317: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:18:48.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod busybox-839dfaab-f3ac-49e4-8560-aa70d63f7aa7 in namespace container-probe-2974 Oct 30 01:19:00.237: INFO: Started pod busybox-839dfaab-f3ac-49e4-8560-aa70d63f7aa7 in namespace container-probe-2974 STEP: checking the pod's current state and verifying that restartCount is present Oct 30 01:19:00.240: INFO: Initial restart count of pod busybox-839dfaab-f3ac-49e4-8560-aa70d63f7aa7 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:23:00.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2974" for this suite. 
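The container-probe case above runs for the full observation window (roughly four minutes between the initial restartCount check and teardown) precisely because its liveness probe never fails. A sketch of a pod with that shape of exec probe (name, image, and timings are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo       # hypothetical name
spec:
  containers:
  - name: busybox
    image: busybox:1.35          # illustrative
    command: ["sh", "-c", "touch /tmp/health && sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # succeeds as long as the file exists
      initialDelaySeconds: 5
      periodSeconds: 5

Because /tmp/health is created at startup and never removed, every probe exits 0 and the kubelet has no reason to restart the container, so restartCount stays at its initial value of 0, which is what the spec asserts before deleting the pod.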
• [SLOW TEST:252.733 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:17:33.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W1030 01:17:33.653036 23 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ForbidConcurrent cronjob STEP: Ensuring a job is scheduled STEP: Ensuring exactly one is scheduled STEP: Ensuring exactly one running job exists by listing jobs explicitly STEP: Ensuring no more jobs are scheduled STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:23:01.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-3326" for this suite. 
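The deprecation warning in the CronJob setup above (batch/v1beta1 is deprecated in v1.21+) points at batch/v1 as the replacement, so a modern manifest with the ForbidConcurrent behavior under test might look like this (name, image, and sleep duration are illustrative):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: forbid-demo              # hypothetical name
spec:
  schedule: "*/1 * * * *"        # fire every minute
  concurrencyPolicy: Forbid      # skip a tick while the previous Job is still running
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: worker
            image: busybox:1.35  # illustrative
            command: ["sh", "-c", "sleep 300"]   # deliberately outlives the schedule interval

Because each Job sleeps past the next scheduled tick, the controller must suppress the follow-up runs; the STEP lines above ("Ensuring exactly one is scheduled", "Ensuring no more jobs are scheduled") check exactly that.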
• [SLOW TEST:328.066 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":-1,"completed":14,"skipped":224,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} Oct 30 01:23:01.699: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:20:27.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod with failed condition STEP: updating the pod Oct 30 01:22:28.529: INFO: Successfully updated pod "var-expansion-a561c289-19d0-4595-8755-a2088ef58de3" STEP: waiting for pod running STEP: deleting the pod gracefully Oct 30 01:22:56.534: INFO: Deleting pod "var-expansion-a561c289-19d0-4595-8755-a2088ef58de3" in namespace "var-expansion-6505" Oct 30 01:22:56.541: INFO: Wait up to 5m0s for pod "var-expansion-a561c289-19d0-4595-8755-a2088ef58de3" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:23:34.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6505" for this suite. 
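The variable-expansion case keeps its pod in a failed-mount condition for roughly two minutes before the "updating the pod" step unblocks it. The mechanism under test is subPathExpr, whose value expands an environment variable that is itself fed from pod metadata, so patching the metadata changes where the kubelet tries to mount. A rough sketch of that wiring (names and the annotation value are illustrative; the exact failing value the test uses is not shown in this log):

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo       # hypothetical name
  annotations:
    mysubpath: "ok-path"         # the test first uses a value the kubelet cannot mount, then patches it
spec:
  containers:
  - name: dapi-container
    image: busybox:1.35          # illustrative
    command: ["sh", "-c", "sleep 600"]
    env:
    - name: POD_SUBPATH
      valueFrom:
        fieldRef:
          fieldPath: metadata.annotations['mysubpath']
    volumeMounts:
    - name: workdir
      mountPath: /subpath_mount
      subPathExpr: $(POD_SUBPATH)   # re-evaluated when the container is (re)started
  volumes:
  - name: workdir
    emptyDir: {}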
• [SLOW TEST:186.584 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":-1,"completed":31,"skipped":532,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} Oct 30 01:23:34.562: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:19:45.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-5071 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating stateful set ss in namespace statefulset-5071 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5071 Oct 30 01:19:45.528: INFO: Found 0 stateful pods, waiting for 1 Oct 30 01:19:55.531: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Oct 30 01:19:55.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 30 01:19:55.800: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Oct 30 01:19:55.800: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 30 01:19:55.800: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 30 01:19:55.803: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Oct 30 01:20:05.806: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Oct 30 01:20:05.807: INFO: Waiting for statefulset status.replicas updated to 0 Oct 30 01:20:05.818: INFO: POD NODE PHASE GRACE CONDITIONS Oct 30 01:20:05.818: INFO: ss-0 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:19:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:19:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:19:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled 
True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:19:45 +0000 UTC }] Oct 30 01:20:05.818: INFO: Oct 30 01:20:05.818: INFO: StatefulSet ss has not reached scale 3, at 1 Oct 30 01:20:06.821: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.997362491s Oct 30 01:20:07.826: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.9936109s Oct 30 01:20:08.829: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.989347649s Oct 30 01:20:09.832: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.985832675s Oct 30 01:20:10.836: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.982154098s Oct 30 01:20:11.839: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.978999908s Oct 30 01:20:12.844: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.975023416s Oct 30 01:20:13.848: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.970950876s Oct 30 01:20:14.851: INFO: Verifying statefulset ss doesn't scale past 3 for another 967.33273ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5071 Oct 30 01:20:15.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:20:16.088: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Oct 30 01:20:16.088: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 30 01:20:16.088: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 30 01:20:16.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:20:16.361: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Oct 30 01:20:16.361: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 30 01:20:16.361: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 30 01:20:16.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:20:16.595: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Oct 30 01:20:16.595: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 30 01:20:16.595: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 30 01:20:16.598: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Oct 30 01:20:16.598: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Oct 30 01:20:16.598: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Oct 30 01:20:16.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v 
/usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 30 01:20:16.924: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Oct 30 01:20:16.924: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 30 01:20:16.924: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 30 01:20:16.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 30 01:20:17.157: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Oct 30 01:20:17.157: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 30 01:20:17.157: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 30 01:20:17.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 30 01:20:17.386: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Oct 30 01:20:17.386: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 30 01:20:17.386: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 30 01:20:17.386: INFO: Waiting for statefulset status.replicas updated to 0 Oct 30 01:20:17.389: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Oct 30 01:20:27.395: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Oct 30 01:20:27.396: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Oct 30 01:20:27.396: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Oct 30 01:20:27.404: INFO: POD NODE PHASE GRACE CONDITIONS Oct 30 01:20:27.405: INFO: ss-0 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:19:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:20:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:20:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:19:45 +0000 UTC }] Oct 30 01:20:27.405: INFO: ss-1 node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:20:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:20:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:20:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:20:05 +0000 UTC }] Oct 30 01:20:27.405: INFO: ss-2 node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:20:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:20:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:20:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 
01:20:05 +0000 UTC }] Oct 30 01:20:27.405: INFO: Oct 30 01:20:27.405: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 30 01:20:28.409: INFO: POD NODE PHASE GRACE CONDITIONS Oct 30 01:20:28.409: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:19:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:20:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:20:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:19:45 +0000 UTC }] Oct 30 01:20:28.409: INFO: ss-1 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:20:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:20:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:20:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:20:05 +0000 UTC }] Oct 30 01:20:28.409: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:20:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:20:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:20:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:20:05 +0000 UTC }] Oct 30 01:20:28.409: INFO: Oct 30 01:20:28.409: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 30 01:20:29.413: INFO: POD NODE PHASE GRACE CONDITIONS Oct 30 01:20:29.413: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:19:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:20:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:20:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:19:45 +0000 UTC }] Oct 30 01:20:29.413: INFO: ss-1 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:20:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:20:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:20:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:20:05 +0000 UTC }] Oct 30 01:20:29.413: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:20:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:20:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:20:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:20:05 +0000 UTC }] Oct 30 01:20:29.413: INFO: Oct 30 01:20:29.413: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 30 01:20:30.418: INFO: POD NODE PHASE GRACE CONDITIONS Oct 30 01:20:30.418: INFO: ss-0 node2 Running 30s [{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:19:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:20:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:20:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:19:45 +0000 UTC }] Oct 30 01:20:30.418: INFO: Oct 30 01:20:30.418: INFO: StatefulSet ss has not reached scale 0, at 1 Oct 30 01:20:31.421: INFO: POD NODE PHASE GRACE CONDITIONS Oct 30 01:20:31.421: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:19:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:20:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:20:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:19:45 +0000 UTC }] Oct 30 01:20:31.421: INFO: Oct 30 01:20:31.421: INFO: StatefulSet ss has not reached scale 0, at 1 Oct 30 01:20:32.424: INFO: POD NODE PHASE GRACE CONDITIONS Oct 30 01:20:32.424: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:19:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:20:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:20:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:19:45 +0000 UTC }] Oct 30 01:20:32.424: INFO: Oct 30 01:20:32.424: INFO: StatefulSet ss has not reached scale 0, at 1 Oct 30 01:20:33.427: INFO: POD NODE PHASE GRACE CONDITIONS Oct 30 01:20:33.427: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:19:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:20:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:20:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:19:45 +0000 UTC }] Oct 30 01:20:33.427: INFO: Oct 30 01:20:33.427: INFO: StatefulSet ss has not reached scale 0, at 1 Oct 30 01:20:34.430: INFO: POD NODE PHASE GRACE CONDITIONS Oct 30 01:20:34.430: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:19:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:20:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:20:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:19:45 +0000 UTC }] Oct 30 01:20:34.430: INFO: Oct 30 01:20:34.430: INFO: StatefulSet ss has not reached scale 0, at 1 Oct 30 01:20:35.434: INFO: POD NODE PHASE GRACE CONDITIONS Oct 30 01:20:35.434: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:19:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:20:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:20:17 +0000 
UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:19:45 +0000 UTC }] Oct 30 01:20:35.434: INFO: Oct 30 01:20:35.434: INFO: StatefulSet ss has not reached scale 0, at 1 Oct 30 01:20:36.438: INFO: POD NODE PHASE GRACE CONDITIONS Oct 30 01:20:36.438: INFO: ss-0 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:19:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:20:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:20:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 01:19:45 +0000 UTC }] Oct 30 01:20:36.438: INFO: Oct 30 01:20:36.438: INFO: StatefulSet ss has not reached scale 0, at 1 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-5071 Oct 30 01:20:37.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:20:37.646: INFO: rc: 1 Oct 30 01:20:37.646: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Oct 30 01:20:47.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:20:47.785: INFO: rc: 1 Oct 30 01:20:47.785: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 30 01:20:57.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:20:57.937: INFO: rc: 1 Oct 30 01:20:57.937: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 30 01:21:07.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:21:08.096: INFO: rc: 1 Oct 30 01:21:08.096: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 30 01:21:18.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html
/usr/local/apache2/htdocs/ || true' Oct 30 01:21:18.248: INFO: rc: 1 Oct 30 01:21:18.248: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 30 01:21:28.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:21:28.417: INFO: rc: 1 Oct 30 01:21:28.418: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 30 01:21:38.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:21:38.576: INFO: rc: 1 Oct 30 01:21:38.577: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 30 01:21:48.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:21:48.737: INFO: rc: 1 Oct 30 01:21:48.737: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 30 01:21:58.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:21:58.875: INFO: rc: 1 Oct 30 01:21:58.875: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 30 01:22:08.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:22:09.011: INFO: rc: 1 Oct 30 01:22:09.011: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 30 01:22:19.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 
01:22:19.151: INFO: rc: 1 Oct 30 01:22:19.151: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 30 01:22:29.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:22:29.308: INFO: rc: 1 Oct 30 01:22:29.308: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 30 01:22:39.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:22:39.474: INFO: rc: 1 Oct 30 01:22:39.474: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 30 01:22:49.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:22:49.626: INFO: rc: 1 Oct 30 01:22:49.626: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 30 01:22:59.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:22:59.784: INFO: rc: 1 Oct 30 01:22:59.784: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 30 01:23:09.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:23:09.944: INFO: rc: 1 Oct 30 01:23:09.944: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 30 01:23:19.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:23:20.101: INFO: rc: 1 Oct 30 
01:23:20.101: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 30 01:23:30.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:23:30.254: INFO: rc: 1 Oct 30 01:23:30.254: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 30 01:23:40.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:23:40.414: INFO: rc: 1 Oct 30 01:23:40.414: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 30 01:23:50.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:23:50.569: INFO: rc: 1 Oct 30 01:23:50.569: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 30 01:24:00.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:24:00.728: INFO: rc: 1 Oct 30 01:24:00.728: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 30 01:24:10.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:24:10.883: INFO: rc: 1 Oct 30 01:24:10.883: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 30 01:24:20.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:24:21.023: INFO: rc: 1 Oct 30 01:24:21.023: INFO: Waiting 10s to retry 
failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 30 01:24:31.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:24:31.184: INFO: rc: 1 Oct 30 01:24:31.184: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 30 01:24:41.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:24:41.334: INFO: rc: 1 Oct 30 01:24:41.334: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 30 01:24:51.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:24:51.485: INFO: rc: 1 Oct 30 01:24:51.485: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 30 01:25:01.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:25:01.628: INFO: rc: 1 Oct 30 01:25:01.628: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 30 01:25:11.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:25:11.781: INFO: rc: 1 Oct 30 01:25:11.781: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 30 01:25:21.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:25:21.942: INFO: rc: 1 Oct 30 01:25:21.942: INFO: Waiting 10s to retry failed RunHostCmd: error running 
/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 30 01:25:31.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:25:32.103: INFO: rc: 1 Oct 30 01:25:32.104: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 30 01:25:42.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5071 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 30 01:25:42.264: INFO: rc: 1 Oct 30 01:25:42.264: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: Oct 30 01:25:42.264: INFO: Scaling statefulset ss to 0 Oct 30 01:25:42.283: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Oct 30 01:25:42.286: INFO: Deleting all statefulset in ns statefulset-5071 Oct 30 01:25:42.288: INFO: Scaling statefulset ss to 0 Oct 30 01:25:42.296: INFO: Waiting for statefulset status.replicas updated to 0 Oct 30 01:25:42.298: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:25:42.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5071" for this suite. 
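[Editor's note] The five minutes of failed exec attempts above follow a simple shell-out-and-retry pattern: run kubectl exec, and on failure sleep a fixed interval and try again until a deadline passes. A minimal Go sketch of that pattern, assuming only that kubectl is on PATH; the function name runHostCmdWithRetries and its parameters are illustrative, not the framework's actual API:

// Sketch of the shell-out-and-retry pattern visible in the log above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runHostCmdWithRetries re-runs `kubectl exec` every interval until the
// command succeeds or the timeout window is exhausted, mirroring the
// "Waiting 10s to retry failed RunHostCmd" cadence in the log.
func runHostCmdWithRetries(ns, pod, cmd string, interval, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command(
			"kubectl", "--namespace="+ns, "exec", pod,
			"--", "/bin/sh", "-x", "-c", cmd,
		).CombinedOutput()
		if err == nil {
			return string(out), nil
		}
		if time.Now().After(deadline) {
			// Give up but still hand back whatever output arrived,
			// much as the log above ends by printing the final stdout.
			return string(out), fmt.Errorf("timed out after %v: %w", timeout, err)
		}
		fmt.Printf("waiting %v to retry failed command: %v\n", interval, err)
		time.Sleep(interval)
	}
}

func main() {
	out, err := runHostCmdWithRetries(
		"statefulset-5071", "ss-0",
		"mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true",
		10*time.Second, 5*time.Minute,
	)
	fmt.Println(out, err)
}

Read against the log, every attempt from 01:20:47 on failed with NotFound because ss-0 had already been deleted by the scale-down, which is the very state the test was driving toward; the helper exhausted its window, logged the empty stdout, and the spec went on to confirm status.replicas reached 0, so it still passed.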
• [SLOW TEST:356.819 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":-1,"completed":19,"skipped":361,"failed":0}
Oct 30 01:25:42.321: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:21:34.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63
W1030 01:21:34.882501      29 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
[It] should not schedule jobs when suspended [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a suspended cronjob
STEP: Ensuring no jobs are scheduled
STEP: Ensuring no job exists by listing jobs explicitly
STEP: Removing cronjob
[AfterEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:26:34.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-8336" for this suite.
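[Editor's note] The CronJob spec above exercises .spec.suspend: it creates a suspended CronJob and then watches for five minutes to confirm the controller schedules no Jobs. A minimal client-go sketch of creating such an object against batch/v1, the replacement API named in the deprecation warning; the object name suspended-cj, the default namespace, and the busybox container are illustrative:

// Sketch: creating a suspended CronJob with client-go against batch/v1.
package main

import (
	"context"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	suspend := true
	cj := &batchv1.CronJob{
		ObjectMeta: metav1.ObjectMeta{Name: "suspended-cj"},
		Spec: batchv1.CronJobSpec{
			Schedule: "*/1 * * * *",
			// With Suspend set, the controller records the schedule
			// but creates no Jobs -- the property the spec verifies.
			Suspend: &suspend,
			JobTemplate: batchv1.JobTemplateSpec{
				Spec: batchv1.JobSpec{
					Template: corev1.PodTemplateSpec{
						Spec: corev1.PodSpec{
							RestartPolicy: corev1.RestartPolicyOnFailure,
							Containers: []corev1.Container{{
								Name:    "c",
								Image:   "busybox",
								Command: []string{"sleep", "1"},
							}},
						},
					},
				},
			},
		},
	}
	if _, err := client.BatchV1().CronJobs("default").Create(
		context.Background(), cj, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

While Suspend is true the controller keeps tracking the schedule without creating Jobs; flipping the field back to false resumes scheduling from the next trigger time.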
• [SLOW TEST:300.054 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not schedule jobs when suspended [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":-1,"completed":35,"skipped":828,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
Oct 30 01:26:34.915: INFO: Running AfterSuite actions on all nodes
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":401,"failed":0}
Oct 30 01:23:00.882: INFO: Running AfterSuite actions on all nodes
Oct 30 01:26:34.988: INFO: Running AfterSuite actions on node 1
Oct 30 01:26:34.988: INFO: Skipping dumping logs from cluster

Summarizing 6 Failures:

[Fail] [sig-network] Services [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572

[Fail] [sig-network] Services [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572

[Fail] [sig-network] Services [It] should be able to create a functioning NodePort service [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169

[Fail] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] [It] Should recreate evicted statefulset [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

[Fail] [sig-network] Services [It] should be able to change the type from ExternalName to NodePort [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351

[Fail] [sig-network] Services [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2493

Ran 320 of 5770 Specs in 948.076 seconds
FAIL! -- 314 Passed | 6 Failed | 0 Pending | 5450 Skipped

Ginkgo ran 1 suite in 15m49.686555918s
Test Suite Failed
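[Editor's note] When triaging this run, note that five of the six failures are NodePort Service specs (the sixth is the evicted-StatefulSet spec), which suggests a single cluster-level networking problem rather than six independent regressions. A reasonable next step is to rerun only those specs in isolation, for example by passing a -ginkgo.focus regular expression matching "NodePort" to the e2e.test binary, and to inspect kube-proxy and node firewall state on the affected nodes if they fail again.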