I0130 23:59:25.175585 7 test_context.go:457] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready I0130 23:59:25.175786 7 e2e.go:129] Starting e2e run "79177e60-0ba8-472f-857f-d62460482b66" on Ginkgo node 1 {"msg":"Test Suite starting","total":311,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1612051163 - Will randomize all specs Will run 311 of 5703 specs Jan 30 23:59:25.239: INFO: >>> kubeConfig: /root/.kube/config Jan 30 23:59:25.241: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Jan 30 23:59:25.265: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Jan 30 23:59:25.300: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Jan 30 23:59:25.300: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Jan 30 23:59:25.300: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Jan 30 23:59:25.307: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) Jan 30 23:59:25.307: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Jan 30 23:59:25.307: INFO: e2e test version: v1.21.0-alpha.1 Jan 30 23:59:25.308: INFO: kube-apiserver version: v1.21.0-alpha.0 Jan 30 23:59:25.308: INFO: >>> kubeConfig: /root/.kube/config Jan 30 23:59:25.313: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 30 23:59:25.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl Jan 30 23:59:25.618: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should support proxy with --port 0 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: starting the proxy server Jan 30 23:59:25.622: INFO: Asynchronously running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-3906 proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 30 23:59:25.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3906" for this suite.
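The proxy spec above asks kubectl to bind an ephemeral port ('-p 0') and then curls /api/ through it. A minimal out-of-suite reproduction, sketched under the assumption of a reachable cluster in ~/.kube/config (the port-scraping sed line is illustrative, not the suite's code):

# Start a proxy on a random free port; kubectl prints "Starting to serve on 127.0.0.1:<port>"
kubectl proxy -p 0 >/tmp/proxy.out 2>&1 &
sleep 1
# Scrape the chosen port out of kubectl's startup line
PORT=$(sed -n 's/.*127\.0\.0\.1:\([0-9]*\).*/\1/p' /tmp/proxy.out)
# A successful GET of /api/ through the proxy is what the spec's curl step verifies
curl -s "http://127.0.0.1:${PORT}/api/"
kill $!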
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":311,"completed":1,"skipped":18,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 30 23:59:25.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1314 STEP: creating the pod Jan 30 23:59:25.830: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-3619 create -f -' Jan 30 23:59:29.444: INFO: stderr: "" Jan 30 23:59:29.444: INFO: stdout: "pod/pause created\n" Jan 30 23:59:29.444: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jan 30 23:59:29.444: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-3619" to be "running and ready" Jan 30 23:59:29.576: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 131.830439ms Jan 30 23:59:31.581: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137144584s Jan 30 23:59:33.587: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.142969246s Jan 30 23:59:33.587: INFO: Pod "pause" satisfied condition "running and ready" Jan 30 23:59:33.587: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: adding the label testing-label with value testing-label-value to a pod Jan 30 23:59:33.587: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-3619 label pods pause testing-label=testing-label-value' Jan 30 23:59:33.696: INFO: stderr: "" Jan 30 23:59:33.696: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jan 30 23:59:33.696: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-3619 get pod pause -L testing-label' Jan 30 23:59:33.796: INFO: stderr: "" Jan 30 23:59:33.796: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Jan 30 23:59:33.796: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-3619 label pods pause testing-label-' Jan 30 23:59:33.904: INFO: stderr: "" Jan 30 23:59:33.904: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jan 30 23:59:33.904: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-3619 get pod pause -L testing-label' Jan 30 23:59:34.012: INFO: stderr: "" Jan 30 23:59:34.013: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1320 STEP: using delete to clean up resources Jan 30 23:59:34.013: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-3619 delete --grace-period=0 --force -f -' Jan 30 23:59:34.126: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 30 23:59:34.126: INFO: stdout: "pod \"pause\" force deleted\n" Jan 30 23:59:34.126: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-3619 get rc,svc -l name=pause --no-headers' Jan 30 23:59:34.235: INFO: stderr: "No resources found in kubectl-3619 namespace.\n" Jan 30 23:59:34.235: INFO: stdout: "" Jan 30 23:59:34.235: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-3619 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 30 23:59:34.323: INFO: stderr: "" Jan 30 23:59:34.323: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 30 23:59:34.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3619" for this suite. 
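Distilled from the run above, the label lifecycle is three kubectl calls against an already-running pod (pod and label names as in the log):

kubectl label pods pause testing-label=testing-label-value   # add the label
kubectl get pod pause -L testing-label                       # -L prints the label as an extra column
kubectl label pods pause testing-label-                      # a trailing '-' removes the label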
• [SLOW TEST:8.682 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1312 should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":311,"completed":2,"skipped":64,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 30 23:59:34.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating a test event STEP: listing events in all namespaces STEP: listing events in test namespace STEP: listing events with field selection filtering on source STEP: listing events with field selection filtering on reportingController STEP: getting the test event STEP: patching the test event STEP: getting the test event STEP: updating the test event STEP: getting the test event STEP: deleting the test event STEP: listing events in all namespaces STEP: listing events in test namespace [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 30 23:59:34.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-6879" for this suite. 
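The Events API spec drives create/list/patch/delete against events.k8s.io/v1 through the Go client; a rough kubectl equivalent is sketched below. The event name, namespace, and field values are made up, and the required-field set shown (eventTime, type, reason, action, reportingController, reportingInstance) is a reading of that API's validation rather than anything printed in the log:

kubectl apply -f - <<'EOF'
apiVersion: events.k8s.io/v1
kind: Event
metadata:
  name: test-event                      # hypothetical name
  namespace: default
type: Normal
reason: Testing
action: Testing
reportingController: example.com/dummy-controller   # hypothetical controller name
reportingInstance: dummy-instance
note: created for demonstration
eventTime: "2021-01-31T00:00:00.000000Z"
EOF
kubectl -n default get events.events.k8s.io test-event
kubectl -n default patch events.events.k8s.io test-event --type=merge -p '{"note":"patched"}'
kubectl -n default delete events.events.k8s.io test-event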
•{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":311,"completed":3,"skipped":78,"failed":0} ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 30 23:59:34.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test substitution in container's args Jan 30 23:59:34.997: INFO: Waiting up to 5m0s for pod "var-expansion-cbab1961-7873-4d4f-a935-0d5291507b29" in namespace "var-expansion-162" to be "Succeeded or Failed" Jan 30 23:59:35.006: INFO: Pod "var-expansion-cbab1961-7873-4d4f-a935-0d5291507b29": Phase="Pending", Reason="", readiness=false. Elapsed: 8.505032ms Jan 30 23:59:37.010: INFO: Pod "var-expansion-cbab1961-7873-4d4f-a935-0d5291507b29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012944247s Jan 30 23:59:39.015: INFO: Pod "var-expansion-cbab1961-7873-4d4f-a935-0d5291507b29": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017709741s Jan 30 23:59:41.018: INFO: Pod "var-expansion-cbab1961-7873-4d4f-a935-0d5291507b29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021207788s STEP: Saw pod success Jan 30 23:59:41.018: INFO: Pod "var-expansion-cbab1961-7873-4d4f-a935-0d5291507b29" satisfied condition "Succeeded or Failed" Jan 30 23:59:41.021: INFO: Trying to get logs from node latest-worker pod var-expansion-cbab1961-7873-4d4f-a935-0d5291507b29 container dapi-container: STEP: delete the pod Jan 30 23:59:41.060: INFO: Waiting for pod var-expansion-cbab1961-7873-4d4f-a935-0d5291507b29 to disappear Jan 30 23:59:41.071: INFO: Pod var-expansion-cbab1961-7873-4d4f-a935-0d5291507b29 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 30 23:59:41.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-162" for this suite. 
• [SLOW TEST:6.368 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":311,"completed":4,"skipped":78,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 30 23:59:41.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test env composition Jan 30 23:59:41.127: INFO: Waiting up to 5m0s for pod "var-expansion-8a7ea6a4-6684-4c43-9d5d-a4961218e117" in namespace "var-expansion-8791" to be "Succeeded or Failed" Jan 30 23:59:41.175: INFO: Pod "var-expansion-8a7ea6a4-6684-4c43-9d5d-a4961218e117": Phase="Pending", Reason="", readiness=false. Elapsed: 47.510841ms Jan 30 23:59:43.277: INFO: Pod "var-expansion-8a7ea6a4-6684-4c43-9d5d-a4961218e117": Phase="Pending", Reason="", readiness=false. Elapsed: 2.149215973s Jan 30 23:59:45.280: INFO: Pod "var-expansion-8a7ea6a4-6684-4c43-9d5d-a4961218e117": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.152805034s STEP: Saw pod success Jan 30 23:59:45.280: INFO: Pod "var-expansion-8a7ea6a4-6684-4c43-9d5d-a4961218e117" satisfied condition "Succeeded or Failed" Jan 30 23:59:45.282: INFO: Trying to get logs from node latest-worker pod var-expansion-8a7ea6a4-6684-4c43-9d5d-a4961218e117 container dapi-container: STEP: delete the pod Jan 30 23:59:45.453: INFO: Waiting for pod var-expansion-8a7ea6a4-6684-4c43-9d5d-a4961218e117 to disappear Jan 30 23:59:45.486: INFO: Pod var-expansion-8a7ea6a4-6684-4c43-9d5d-a4961218e117 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 30 23:59:45.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8791" for this suite. 
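Env composition is the same expansion applied within the env list itself: a later entry may reference an earlier one with $(NAME). A minimal sketch with made-up names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: env-composition-demo            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.28
    command: ["sh", "-c", "env"]
    env:
    - name: FIRST
      value: foo
    - name: COMPOSED
      value: "$(FIRST)-bar"             # resolves to "foo-bar"; only earlier entries are visible
EOF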
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":311,"completed":5,"skipped":97,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should test the lifecycle of an Endpoint [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 30 23:59:45.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should test the lifecycle of an Endpoint [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating an Endpoint STEP: waiting for available Endpoint STEP: listing all Endpoints STEP: updating the Endpoint STEP: fetching the Endpoint STEP: patching the Endpoint STEP: fetching the Endpoint STEP: deleting the Endpoint by Collection STEP: waiting for Endpoint deletion STEP: fetching the Endpoint [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 30 23:59:45.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1362" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 •{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":311,"completed":6,"skipped":145,"failed":0} ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 30 23:59:45.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jan 30 23:59:55.987: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 30 23:59:56.006: INFO: Pod pod-with-poststart-http-hook still exists Jan 30 23:59:58.006: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 30 23:59:58.012: INFO: Pod pod-with-poststart-http-hook still exists Jan 31 00:00:00.006: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 31 00:00:00.012: INFO: Pod pod-with-poststart-http-hook still exists Jan 31 00:00:02.006: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 31 00:00:02.011: INFO: Pod pod-with-poststart-http-hook still exists Jan 31 00:00:04.006: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 31 00:00:04.049: INFO: Pod pod-with-poststart-http-hook still exists Jan 31 00:00:06.006: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 31 00:00:06.011: INFO: Pod pod-with-poststart-http-hook still exists Jan 31 00:00:08.006: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 31 00:00:08.011: INFO: Pod pod-with-poststart-http-hook still exists Jan 31 00:00:10.006: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 31 00:00:10.010: INFO: Pod pod-with-poststart-http-hook still exists Jan 31 00:00:12.006: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 31 00:00:12.011: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:00:12.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2332" for this suite. 
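The spec pairs two pods: a handler pod serving HTTP (the "container to handle the HTTPGet hook request" created in the BeforeEach) and the pod-with-poststart-http-hook whose postStart hook GETs it. Neither manifest is printed in the log; a sketch of the hooked pod's shape, with the handler address, path, and port made up:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook    # name as in the log
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: k8s.gcr.io/pause:3.2
    lifecycle:
      postStart:
        httpGet:                        # the kubelet issues this GET right after the container starts
          host: 10.244.0.10             # hypothetical; the real spec targets its handler pod's IP
          path: /echo?msg=poststart
          port: 8080
EOF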
• [SLOW TEST:26.257 seconds] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:43 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":311,"completed":7,"skipped":145,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:00:12.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating the pod Jan 31 00:00:12.093: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:00:22.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8967" for this suite. 
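On a RestartNever pod, init containers still run one at a time to completion before any app container starts; a non-zero exit leaves the pod Failed instead of restarting. A minimal sketch with made-up names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo                       # hypothetical name
spec:
  restartPolicy: Never
  initContainers:                       # each must exit 0 before the next starts
  - name: init-1
    image: busybox:1.28
    command: ["true"]
  - name: init-2
    image: busybox:1.28
    command: ["true"]
  containers:
  - name: run
    image: k8s.gcr.io/pause:3.2
EOF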
• [SLOW TEST:10.279 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":311,"completed":8,"skipped":173,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:00:22.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Jan 31 00:00:22.358: INFO: Creating deployment "test-recreate-deployment" Jan 31 00:00:22.362: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1 Jan 31 00:00:22.391: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Jan 31 00:00:24.413: INFO: Waiting for deployment "test-recreate-deployment" to complete Jan 31 00:00:24.421: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747648022, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747648022, loc:(*time.Location)(0x79bd420)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747648022, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747648022, loc:(*time.Location)(0x79bd420)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-786dd7c454\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 31 00:00:26.427: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jan 31 00:00:26.435: INFO: Updating deployment test-recreate-deployment Jan 31 00:00:26.436: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 Jan 31 00:00:27.016: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-9021 edd2dd72-65a6-486c-93d4-ddc8b51800fd 1107868 2 2021-01-31 00:00:22 +0000 UTC map[name:sample-pod-3]
map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-01-31 00:00:26 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-01-31 00:00:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0040e0338 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-01-31 00:00:26 +0000 UTC,LastTransitionTime:2021-01-31 00:00:26 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-f79dd4667" is progressing.,LastUpdateTime:2021-01-31 00:00:26 +0000 UTC,LastTransitionTime:2021-01-31 00:00:22 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Jan 31 00:00:27.020: INFO: New ReplicaSet "test-recreate-deployment-f79dd4667" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-f79dd4667 deployment-9021 f5d1dca3-a53e-4cfa-8f28-a66ecb6240d7 1107866 1 2021-01-31 00:00:26 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment edd2dd72-65a6-486c-93d4-ddc8b51800fd 
0xc0040e0790 0xc0040e0791}] [] [{kube-controller-manager Update apps/v1 2021-01-31 00:00:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"edd2dd72-65a6-486c-93d4-ddc8b51800fd\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: f79dd4667,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0040e0808 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 31 00:00:27.020: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jan 31 00:00:27.020: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-786dd7c454 deployment-9021 4bb6b04b-a3be-4e36-b4ac-34015f897697 1107857 2 2021-01-31 00:00:22 +0000 UTC map[name:sample-pod-3 pod-template-hash:786dd7c454] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment edd2dd72-65a6-486c-93d4-ddc8b51800fd 0xc0040e0697 0xc0040e0698}] [] [{kube-controller-manager Update apps/v1 2021-01-31 00:00:26 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"edd2dd72-65a6-486c-93d4-ddc8b51800fd\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 786dd7c454,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:786dd7c454] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0040e0728 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 31 00:00:27.023: INFO: Pod "test-recreate-deployment-f79dd4667-s8hhc" is not available: &Pod{ObjectMeta:{test-recreate-deployment-f79dd4667-s8hhc test-recreate-deployment-f79dd4667- deployment-9021 b3129a05-e307-44ee-992b-9cd240259583 1107869 0 2021-01-31 00:00:26 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [{apps/v1 ReplicaSet test-recreate-deployment-f79dd4667 f5d1dca3-a53e-4cfa-8f28-a66ecb6240d7 0xc0040e0c00 0xc0040e0c01}] [] [{kube-controller-manager Update v1 2021-01-31 00:00:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f5d1dca3-a53e-4cfa-8f28-a66ecb6240d7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-31 00:00:26 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wxvrc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wxvrc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wxvrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:00:26 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:00:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:00:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:00:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2021-01-31 00:00:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:00:27.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9021" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":311,"completed":9,"skipped":181,"failed":0} SSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:00:27.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Jan 31 00:00:27.243: INFO: Waiting up to 1m0s for all nodes to be ready Jan 31 00:01:27.272: INFO: Waiting for terminating namespaces to be deleted... [It] validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Create pods that use 2/3 of node resources. Jan 31 00:01:27.312: INFO: Created pod: pod0-sched-preemption-low-priority Jan 31 00:01:27.386: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a critical pod that use same resources as that of a lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:02:19.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-3026" for this suite. 
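The preemption spec depends on PriorityClass objects created in its BeforeEach plus filler pods whose requests consume about 2/3 of each node, so the later critical pod can only schedule by evicting a lower-priority victim. A sketch of those moving parts; the class name, value, and request size are made up (the built-in critical classes are system-cluster-critical and system-node-critical):

kubectl apply -f - <<'EOF'
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: low-priority                    # hypothetical
value: 1
---
apiVersion: v1
kind: Pod
metadata:
  name: low-prio-pod                    # hypothetical
spec:
  priorityClassName: low-priority
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.2
    resources:
      requests:
        memory: 1Gi                     # sized so a critical pod cannot fit without preempting
EOF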
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:112.519 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":311,"completed":10,"skipped":186,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:02:19.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test downward API volume plugin Jan 31 00:02:19.832: INFO: Waiting up to 5m0s for pod "downwardapi-volume-737ad9f6-f565-41db-988e-7f13d5234402" in namespace "downward-api-8860" to be "Succeeded or Failed" Jan 31 00:02:19.955: INFO: Pod "downwardapi-volume-737ad9f6-f565-41db-988e-7f13d5234402": Phase="Pending", Reason="", readiness=false. Elapsed: 123.338432ms Jan 31 00:02:21.959: INFO: Pod "downwardapi-volume-737ad9f6-f565-41db-988e-7f13d5234402": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127630036s Jan 31 00:02:23.963: INFO: Pod "downwardapi-volume-737ad9f6-f565-41db-988e-7f13d5234402": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.131641702s STEP: Saw pod success Jan 31 00:02:23.963: INFO: Pod "downwardapi-volume-737ad9f6-f565-41db-988e-7f13d5234402" satisfied condition "Succeeded or Failed" Jan 31 00:02:23.967: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-737ad9f6-f565-41db-988e-7f13d5234402 container client-container: STEP: delete the pod Jan 31 00:02:24.082: INFO: Waiting for pod downwardapi-volume-737ad9f6-f565-41db-988e-7f13d5234402 to disappear Jan 31 00:02:24.113: INFO: Pod downwardapi-volume-737ad9f6-f565-41db-988e-7f13d5234402 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:02:24.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8860" for this suite. 
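The downward-api pod mounts a downwardAPI volume exposing limits.cpu through a resourceFieldRef; because the container sets no CPU limit, the projected file reports the node's allocatable CPU instead, which is what the spec asserts on. A minimal sketch with a made-up name and mount path:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo         # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.28
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:               # with no limit set, this falls back to node allocatable CPU
          containerName: client-container
          resource: limits.cpu
EOF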
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":311,"completed":11,"skipped":190,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:02:24.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating projection with configMap that has name projected-configmap-test-upd-3058ea01-8c78-4752-a0a7-b355ca69964a STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-3058ea01-8c78-4752-a0a7-b355ca69964a STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:03:54.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8249" for this suite. • [SLOW TEST:90.596 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":311,"completed":12,"skipped":198,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:03:54.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Jan 31 00:03:54.832: INFO: Pod name pod-release: Found 0 pods out of 1 Jan 31 00:03:59.840: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:03:59.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3345" for this suite. • [SLOW TEST:5.682 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":311,"completed":13,"skipped":210,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:04:00.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Jan 31 00:04:00.601: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jan 31 00:04:00.737: INFO: Pod name sample-pod: Found 0 pods out of 1 Jan 31 00:04:05.836: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 31 00:04:05.836: INFO: Creating deployment "test-rolling-update-deployment" Jan 31 00:04:05.998: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jan 31 00:04:06.011: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jan 31 00:04:08.019: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jan 31 00:04:08.023: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747648246, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747648246, loc:(*time.Location)(0x79bd420)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747648246, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747648246, loc:(*time.Location)(0x79bd420)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-6b6bf9df46\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 31 00:04:10.027: INFO: Ensuring deployment 
"test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 Jan 31 00:04:10.039: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-2273 74944714-2891-45d7-bb38-24609bc26281 1108791 1 2021-01-31 00:04:05 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2021-01-31 00:04:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-01-31 00:04:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004d8ccb8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-01-31 00:04:06 +0000 UTC,LastTransitionTime:2021-01-31 00:04:06 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-6b6bf9df46" has successfully 
progressed.,LastUpdateTime:2021-01-31 00:04:09 +0000 UTC,LastTransitionTime:2021-01-31 00:04:06 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jan 31 00:04:10.041: INFO: New ReplicaSet "test-rolling-update-deployment-6b6bf9df46" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-6b6bf9df46 deployment-2273 4031018b-ffdb-46b7-93ac-44a2a3a231a2 1108779 1 2021-01-31 00:04:06 +0000 UTC map[name:sample-pod pod-template-hash:6b6bf9df46] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 74944714-2891-45d7-bb38-24609bc26281 0xc003cb6707 0xc003cb6708}] [] [{kube-controller-manager Update apps/v1 2021-01-31 00:04:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74944714-2891-45d7-bb38-24609bc26281\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 6b6bf9df46,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:6b6bf9df46] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003cb6798 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 31 00:04:10.041: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jan 31 00:04:10.041: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-2273 6ed51e0d-67a7-4c85-a516-89cedb4dd710 1108790 2 2021-01-31 00:04:00 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 
74944714-2891-45d7-bb38-24609bc26281 0xc003cb65ef 0xc003cb6600}] [] [{e2e.test Update apps/v1 2021-01-31 00:04:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-01-31 00:04:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74944714-2891-45d7-bb38-24609bc26281\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003cb6698 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 31 00:04:10.043: INFO: Pod "test-rolling-update-deployment-6b6bf9df46-fqkzz" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-6b6bf9df46-fqkzz test-rolling-update-deployment-6b6bf9df46- deployment-2273 2e65c314-bb85-4c3e-88ce-8523563ebdb4 1108778 0 2021-01-31 00:04:06 +0000 UTC map[name:sample-pod pod-template-hash:6b6bf9df46] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-6b6bf9df46 4031018b-ffdb-46b7-93ac-44a2a3a231a2 0xc003cb6b87 0xc003cb6b88}] [] [{kube-controller-manager Update v1 2021-01-31 00:04:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4031018b-ffdb-46b7-93ac-44a2a3a231a2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-31 00:04:09 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.252\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qhfzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qhfzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qhfzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:04:06 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:04:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:04:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:04:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.2.252,StartTime:2021-01-31 00:04:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-31 00:04:09 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a,ContainerID:containerd://5735275db2ff486f643dc89b912001cb804e5cb4b2282c0aa030e44ea570a4e2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.252,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:04:10.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2273" for this suite. • [SLOW TEST:9.649 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":311,"completed":14,"skipped":219,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:04:10.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 31 00:04:14.611: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the 
container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:04:14.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3784" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":311,"completed":15,"skipped":287,"failed":0} SSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:04:14.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating service endpoint-test2 in namespace services-6644 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6644 to expose endpoints map[] Jan 31 00:04:14.876: INFO: Failed to get Endpoints object: endpoints "endpoint-test2" not found Jan 31 00:04:16.178: INFO: successfully validated that service endpoint-test2 in namespace services-6644 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-6644 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6644 to expose endpoints map[pod1:[80]] Jan 31 00:04:20.253: INFO: successfully validated that service endpoint-test2 in namespace services-6644 exposes endpoints map[pod1:[80]] STEP: Creating pod pod2 in namespace services-6644 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6644 to expose endpoints map[pod1:[80] pod2:[80]] Jan 31 00:04:24.324: INFO: successfully validated that service endpoint-test2 in namespace services-6644 exposes endpoints map[pod1:[80] pod2:[80]] STEP: Deleting pod pod1 in namespace services-6644 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6644 to expose endpoints map[pod2:[80]] Jan 31 00:04:24.422: INFO: successfully validated that service endpoint-test2 in namespace services-6644 exposes endpoints map[pod2:[80]] STEP: Deleting pod pod2 in namespace services-6644 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6644 to expose endpoints map[] Jan 31 00:04:24.632: INFO: successfully validated that service endpoint-test2 in namespace services-6644 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:04:24.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6644" for this suite.
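------------------------------
Note: the endpoint bookkeeping exercised above can be reproduced by hand with kubectl alone. A minimal sketch, using hypothetical names (endpoint-demo, pod1) rather than the test's generated objects:

  # Create a ClusterIP service; its selector (app=endpoint-demo) matches no pods yet,
  # so its Endpoints object starts out empty -- the map[] state validated above.
  kubectl create service clusterip endpoint-demo --tcp=80:80

  # Start a pod carrying the matching label and serving on port 80; once it is ready,
  # its IP should appear in the endpoint set (the map[pod1:[80]] state).
  kubectl run pod1 --image=k8s.gcr.io/e2e-test-images/agnhost:2.21 \
    --labels=app=endpoint-demo --port=80 -- serve-hostname --port=80

  # Watch the endpoint set grow and shrink as matching pods come and go.
  kubectl get endpoints endpoint-demo -w

  # Deleting the pod should empty the endpoint set again (back to map[]).
  kubectl delete pod pod1
------------------------------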
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:10.367 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":311,"completed":16,"skipped":295,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:04:25.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:04:25.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1421" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":311,"completed":17,"skipped":312,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:04:25.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should support --unix-socket=/path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Starting the proxy Jan 31 00:04:26.354: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-5954 proxy --unix-socket=/tmp/kubectl-proxy-unix099401780/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:04:26.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5954" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":311,"completed":18,"skipped":340,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:04:26.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Jan 31 00:04:26.717: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 31 00:04:26.746: INFO: Waiting for terminating namespaces to be deleted... 
Jan 31 00:04:26.749: INFO: Logging pods the apiserver thinks are on node latest-worker before test Jan 31 00:04:26.758: INFO: rally-20bad60a-x9qhaa1g from c-rally-20bad60a-lccsx42h started at 2021-01-31 00:04:20 +0000 UTC (1 container statuses recorded) Jan 31 00:04:26.758: INFO: Container rally-20bad60a-x9qhaa1g ready: true, restart count 0 Jan 31 00:04:26.758: INFO: chaos-controller-manager-69c479c674-tdrls from default started at 2021-01-26 11:59:56 +0000 UTC (1 container statuses recorded) Jan 31 00:04:26.758: INFO: Container chaos-mesh ready: true, restart count 0 Jan 31 00:04:26.758: INFO: chaos-daemon-vkxzr from default started at 2021-01-26 11:59:56 +0000 UTC (1 container statuses recorded) Jan 31 00:04:26.758: INFO: Container chaos-daemon ready: true, restart count 0 Jan 31 00:04:26.759: INFO: kindnet-5bf5g from kube-system started at 2021-01-26 08:08:41 +0000 UTC (1 container statuses recorded) Jan 31 00:04:26.759: INFO: Container kindnet-cni ready: true, restart count 0 Jan 31 00:04:26.759: INFO: kube-proxy-f59c8 from kube-system started at 2021-01-26 08:08:41 +0000 UTC (1 container statuses recorded) Jan 31 00:04:26.759: INFO: Container kube-proxy ready: true, restart count 0 Jan 31 00:04:26.759: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test Jan 31 00:04:26.764: INFO: chaos-daemon-g67vf from default started at 2021-01-26 11:59:56 +0000 UTC (1 container statuses recorded) Jan 31 00:04:26.764: INFO: Container chaos-daemon ready: true, restart count 0 Jan 31 00:04:26.765: INFO: coredns-74ff55c5b-ngxdm from kube-system started at 2021-01-27 12:43:36 +0000 UTC (1 container statuses recorded) Jan 31 00:04:26.765: INFO: Container coredns ready: true, restart count 0 Jan 31 00:04:26.765: INFO: coredns-74ff55c5b-ntztq from kube-system started at 2021-01-27 12:43:35 +0000 UTC (1 container statuses recorded) Jan 31 00:04:26.765: INFO: Container coredns ready: true, restart count 0 Jan 31 00:04:26.765: INFO: kindnet-98jtw from kube-system started at 2021-01-26 08:08:41 +0000 UTC (1 container statuses recorded) Jan 31 00:04:26.765: INFO: Container kindnet-cni ready: true, restart count 0 Jan 31 00:04:26.765: INFO: kube-proxy-skm7x from kube-system started at 2021-01-26 08:08:41 +0000 UTC (1 container statuses recorded) Jan 31 00:04:26.765: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node.
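------------------------------
Note: the predicate under test treats a hostPort claim as the tuple (hostIP, protocol, hostPort), so two pods may share port 54321 on one node as long as one element of the tuple differs. A hypothetical manifest in the shape of the pods created below (pod name invented; the nodeSelector reuses the random label the test applies, whose value is logged next):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: hostport-pod1
  spec:
    nodeSelector:
      kubernetes.io/e2e-a5f137e4-6708-42e0-800c-6c0857f4b921: "90"  # pin to the labeled node
    containers:
    - name: agnhost
      image: k8s.gcr.io/e2e-test-images/agnhost:2.21
      args: ["netexec", "--http-port=8080"]
      ports:
      - containerPort: 8080
        hostPort: 54321
        hostIP: 127.0.0.1   # a second pod binding hostIP 172.18.0.14, or a third using UDP,
        protocol: TCP       # still schedules: a different tuple means no port conflict
  EOF
------------------------------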
STEP: verifying the node has the label kubernetes.io/e2e-a5f137e4-6708-42e0-800c-6c0857f4b921 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 172.18.0.14 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 172.18.0.14 but use UDP protocol on the node which pod2 resides STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 Jan 31 00:04:47.022: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.14 http://127.0.0.1:54321/hostname] Namespace:sched-pred-408 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 31 00:04:47.022: INFO: >>> kubeConfig: /root/.kube/config I0131 00:04:47.058933 7 log.go:181] (0xc003758a50) (0xc004caba40) Create stream I0131 00:04:47.058962 7 log.go:181] (0xc003758a50) (0xc004caba40) Stream added, broadcasting: 1 I0131 00:04:47.061537 7 log.go:181] (0xc003758a50) Reply frame received for 1 I0131 00:04:47.061575 7 log.go:181] (0xc003758a50) (0xc0003e7860) Create stream I0131 00:04:47.061592 7 log.go:181] (0xc003758a50) (0xc0003e7860) Stream added, broadcasting: 3 I0131 00:04:47.062422 7 log.go:181] (0xc003758a50) Reply frame received for 3 I0131 00:04:47.062451 7 log.go:181] (0xc003758a50) (0xc004cabae0) Create stream I0131 00:04:47.062461 7 log.go:181] (0xc003758a50) (0xc004cabae0) Stream added, broadcasting: 5 I0131 00:04:47.063295 7 log.go:181] (0xc003758a50) Reply frame received for 5 I0131 00:04:47.159232 7 log.go:181] (0xc003758a50) Data frame received for 5 I0131 00:04:47.159265 7 log.go:181] (0xc004cabae0) (5) Data frame handling I0131 00:04:47.159284 7 log.go:181] (0xc004cabae0) (5) Data frame sent I0131 00:04:47.159298 7 log.go:181] (0xc003758a50) Data frame received for 5 I0131 00:04:47.159309 7 log.go:181] (0xc004cabae0) (5) Data frame handling I0131 00:04:47.159332 7 log.go:181] (0xc003758a50) Data frame received for 3 I0131 00:04:47.159359 7 log.go:181] (0xc004cabae0) (5) Data frame sent I0131 00:04:47.159384 7 log.go:181] (0xc003758a50) Data frame received for 5 I0131 00:04:47.159395 7 log.go:181] (0xc004cabae0) (5) Data frame handling I0131 00:04:47.159401 7 log.go:181] (0xc004cabae0) (5) Data frame sent I0131 00:04:47.159406 7 log.go:181] (0xc003758a50) Data frame received for 5 I0131 00:04:47.159412 7 log.go:181] (0xc004cabae0) (5) Data frame handling I0131 00:04:47.159421 7 log.go:181] (0xc004cabae0) (5) Data frame sent I0131 00:04:47.159428 7 log.go:181] (0xc003758a50) Data frame received for 5 I0131 00:04:47.159433 7 log.go:181] (0xc004cabae0) (5) Data frame handling I0131 00:04:47.159446 7 log.go:181] (0xc004cabae0) (5) Data frame sent I0131 00:04:47.159458 7 log.go:181] (0xc003758a50) Data frame received for 5 I0131 00:04:47.159465 7 log.go:181] (0xc004cabae0) (5) Data frame handling I0131 00:04:47.159473 7 log.go:181] (0xc004cabae0) (5) Data frame sent I0131 00:04:47.159487 7 log.go:181] (0xc0003e7860) (3) Data frame handling I0131 00:04:47.159500 7 log.go:181] (0xc0003e7860) (3) Data frame sent I0131 00:04:47.159789 7 log.go:181] (0xc003758a50) Data frame received for 5 I0131 00:04:47.159809 7 log.go:181] (0xc004cabae0) (5) Data frame handling I0131 00:04:47.160084 7 log.go:181] (0xc003758a50) Data frame received for 3 I0131 00:04:47.160100 7 log.go:181] (0xc0003e7860) (3) Data frame handling I0131 
00:04:47.161441 7 log.go:181] (0xc003758a50) Data frame received for 1 I0131 00:04:47.161462 7 log.go:181] (0xc004caba40) (1) Data frame handling I0131 00:04:47.161469 7 log.go:181] (0xc004caba40) (1) Data frame sent I0131 00:04:47.161476 7 log.go:181] (0xc003758a50) (0xc004caba40) Stream removed, broadcasting: 1 I0131 00:04:47.161498 7 log.go:181] (0xc003758a50) Go away received I0131 00:04:47.161719 7 log.go:181] (0xc003758a50) (0xc004caba40) Stream removed, broadcasting: 1 I0131 00:04:47.161728 7 log.go:181] (0xc003758a50) (0xc0003e7860) Stream removed, broadcasting: 3 I0131 00:04:47.161735 7 log.go:181] (0xc003758a50) (0xc004cabae0) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.14, port: 54321 Jan 31 00:04:47.161: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.14:54321/hostname] Namespace:sched-pred-408 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 31 00:04:47.161: INFO: >>> kubeConfig: /root/.kube/config I0131 00:04:47.189638 7 log.go:181] (0xc003759130) (0xc004cabd60) Create stream I0131 00:04:47.189670 7 log.go:181] (0xc003759130) (0xc004cabd60) Stream added, broadcasting: 1 I0131 00:04:47.192192 7 log.go:181] (0xc003759130) Reply frame received for 1 I0131 00:04:47.192246 7 log.go:181] (0xc003759130) (0xc004cabe00) Create stream I0131 00:04:47.192259 7 log.go:181] (0xc003759130) (0xc004cabe00) Stream added, broadcasting: 3 I0131 00:04:47.193559 7 log.go:181] (0xc003759130) Reply frame received for 3 I0131 00:04:47.193601 7 log.go:181] (0xc003759130) (0xc000533e00) Create stream I0131 00:04:47.193615 7 log.go:181] (0xc003759130) (0xc000533e00) Stream added, broadcasting: 5 I0131 00:04:47.194530 7 log.go:181] (0xc003759130) Reply frame received for 5 I0131 00:04:47.253941 7 log.go:181] (0xc003759130) Data frame received for 5 I0131 00:04:47.253967 7 log.go:181] (0xc000533e00) (5) Data frame handling I0131 00:04:47.253986 7 log.go:181] (0xc000533e00) (5) Data frame sent I0131 00:04:47.253993 7 log.go:181] (0xc003759130) Data frame received for 5 I0131 00:04:47.253998 7 log.go:181] (0xc000533e00) (5) Data frame handling I0131 00:04:47.254017 7 log.go:181] (0xc000533e00) (5) Data frame sent I0131 00:04:47.254040 7 log.go:181] (0xc003759130) Data frame received for 5 I0131 00:04:47.254048 7 log.go:181] (0xc000533e00) (5) Data frame handling I0131 00:04:47.254058 7 log.go:181] (0xc000533e00) (5) Data frame sent I0131 00:04:47.254170 7 log.go:181] (0xc003759130) Data frame received for 5 I0131 00:04:47.254215 7 log.go:181] (0xc000533e00) (5) Data frame handling I0131 00:04:47.254235 7 log.go:181] (0xc000533e00) (5) Data frame sent I0131 00:04:47.254247 7 log.go:181] (0xc003759130) Data frame received for 5 I0131 00:04:47.254255 7 log.go:181] (0xc000533e00) (5) Data frame handling I0131 00:04:47.254269 7 log.go:181] (0xc000533e00) (5) Data frame sent I0131 00:04:47.254297 7 log.go:181] (0xc003759130) Data frame received for 5 I0131 00:04:47.254307 7 log.go:181] (0xc000533e00) (5) Data frame handling I0131 00:04:47.254313 7 log.go:181] (0xc000533e00) (5) Data frame sent I0131 00:04:47.254320 7 log.go:181] (0xc003759130) Data frame received for 5 I0131 00:04:47.254330 7 log.go:181] (0xc000533e00) (5) Data frame handling I0131 00:04:47.254355 7 log.go:181] (0xc000533e00) (5) Data frame sent I0131 00:04:47.255000 7 log.go:181] (0xc003759130) Data frame received for 5 I0131 00:04:47.255022 7 log.go:181] 
(0xc000533e00) (5) Data frame handling I0131 00:04:47.255048 7 log.go:181] (0xc000533e00) (5) Data frame sent I0131 00:04:47.255060 7 log.go:181] (0xc003759130) Data frame received for 5 I0131 00:04:47.255074 7 log.go:181] (0xc000533e00) (5) Data frame handling I0131 00:04:47.255092 7 log.go:181] (0xc000533e00) (5) Data frame sent I0131 00:04:47.255108 7 log.go:181] (0xc003759130) Data frame received for 5 I0131 00:04:47.255124 7 log.go:181] (0xc000533e00) (5) Data frame handling I0131 00:04:47.255140 7 log.go:181] (0xc000533e00) (5) Data frame sent I0131 00:04:47.255206 7 log.go:181] (0xc003759130) Data frame received for 3 I0131 00:04:47.255221 7 log.go:181] (0xc004cabe00) (3) Data frame handling I0131 00:04:47.255234 7 log.go:181] (0xc004cabe00) (3) Data frame sent I0131 00:04:47.256255 7 log.go:181] (0xc003759130) Data frame received for 5 I0131 00:04:47.256285 7 log.go:181] (0xc003759130) Data frame received for 3 I0131 00:04:47.256338 7 log.go:181] (0xc004cabe00) (3) Data frame handling I0131 00:04:47.256410 7 log.go:181] (0xc000533e00) (5) Data frame handling I0131 00:04:47.258357 7 log.go:181] (0xc003759130) Data frame received for 1 I0131 00:04:47.258375 7 log.go:181] (0xc004cabd60) (1) Data frame handling I0131 00:04:47.258385 7 log.go:181] (0xc004cabd60) (1) Data frame sent I0131 00:04:47.258395 7 log.go:181] (0xc003759130) (0xc004cabd60) Stream removed, broadcasting: 1 I0131 00:04:47.258414 7 log.go:181] (0xc003759130) Go away received I0131 00:04:47.258496 7 log.go:181] (0xc003759130) (0xc004cabd60) Stream removed, broadcasting: 1 I0131 00:04:47.258550 7 log.go:181] (0xc003759130) (0xc004cabe00) Stream removed, broadcasting: 3 I0131 00:04:47.258577 7 log.go:181] (0xc003759130) (0xc000533e00) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.14, port: 54321 UDP Jan 31 00:04:47.258: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.14 54321] Namespace:sched-pred-408 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 31 00:04:47.258: INFO: >>> kubeConfig: /root/.kube/config I0131 00:04:47.287451 7 log.go:181] (0xc0000e5e40) (0xc000e0c280) Create stream I0131 00:04:47.287478 7 log.go:181] (0xc0000e5e40) (0xc000e0c280) Stream added, broadcasting: 1 I0131 00:04:47.289627 7 log.go:181] (0xc0000e5e40) Reply frame received for 1 I0131 00:04:47.289658 7 log.go:181] (0xc0000e5e40) (0xc004cabea0) Create stream I0131 00:04:47.289670 7 log.go:181] (0xc0000e5e40) (0xc004cabea0) Stream added, broadcasting: 3 I0131 00:04:47.290416 7 log.go:181] (0xc0000e5e40) Reply frame received for 3 I0131 00:04:47.290445 7 log.go:181] (0xc0000e5e40) (0xc0004ff400) Create stream I0131 00:04:47.290458 7 log.go:181] (0xc0000e5e40) (0xc0004ff400) Stream added, broadcasting: 5 I0131 00:04:47.291167 7 log.go:181] (0xc0000e5e40) Reply frame received for 5 I0131 00:04:52.357607 7 log.go:181] (0xc0000e5e40) Data frame received for 5 I0131 00:04:52.357747 7 log.go:181] (0xc0004ff400) (5) Data frame handling I0131 00:04:52.357945 7 log.go:181] (0xc0004ff400) (5) Data frame sent I0131 00:04:52.358088 7 log.go:181] (0xc0000e5e40) Data frame received for 5 I0131 00:04:52.358121 7 log.go:181] (0xc0004ff400) (5) Data frame handling I0131 00:04:52.358313 7 log.go:181] (0xc0000e5e40) Data frame received for 3 I0131 00:04:52.358343 7 log.go:181] (0xc004cabea0) (3) Data frame handling I0131 00:04:52.361546 7 log.go:181] (0xc0000e5e40) Data frame received for 1 I0131 
00:04:52.361585 7 log.go:181] (0xc000e0c280) (1) Data frame handling I0131 00:04:52.361622 7 log.go:181] (0xc000e0c280) (1) Data frame sent I0131 00:04:52.361651 7 log.go:181] (0xc0000e5e40) (0xc000e0c280) Stream removed, broadcasting: 1 I0131 00:04:52.361706 7 log.go:181] (0xc0000e5e40) Go away received I0131 00:04:52.361862 7 log.go:181] (0xc0000e5e40) (0xc000e0c280) Stream removed, broadcasting: 1 I0131 00:04:52.361891 7 log.go:181] (0xc0000e5e40) (0xc004cabea0) Stream removed, broadcasting: 3 I0131 00:04:52.361905 7 log.go:181] (0xc0000e5e40) (0xc0004ff400) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 Jan 31 00:04:52.361: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.14 http://127.0.0.1:54321/hostname] Namespace:sched-pred-408 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 31 00:04:52.361: INFO: >>> kubeConfig: /root/.kube/config I0131 00:04:52.400201 7 log.go:181] (0xc0004fd130) (0xc000612a00) Create stream I0131 00:04:52.400232 7 log.go:181] (0xc0004fd130) (0xc000612a00) Stream added, broadcasting: 1 I0131 00:04:52.402896 7 log.go:181] (0xc0004fd130) Reply frame received for 1 I0131 00:04:52.402940 7 log.go:181] (0xc0004fd130) (0xc004cabf40) Create stream I0131 00:04:52.402958 7 log.go:181] (0xc0004fd130) (0xc004cabf40) Stream added, broadcasting: 3 I0131 00:04:52.404033 7 log.go:181] (0xc0004fd130) Reply frame received for 3 I0131 00:04:52.404086 7 log.go:181] (0xc0004fd130) (0xc000e0c320) Create stream I0131 00:04:52.404101 7 log.go:181] (0xc0004fd130) (0xc000e0c320) Stream added, broadcasting: 5 I0131 00:04:52.405234 7 log.go:181] (0xc0004fd130) Reply frame received for 5 I0131 00:04:52.461656 7 log.go:181] (0xc0004fd130) Data frame received for 5 I0131 00:04:52.461711 7 log.go:181] (0xc000e0c320) (5) Data frame handling I0131 00:04:52.461749 7 log.go:181] (0xc000e0c320) (5) Data frame sent I0131 00:04:52.461788 7 log.go:181] (0xc0004fd130) Data frame received for 5 I0131 00:04:52.461817 7 log.go:181] (0xc000e0c320) (5) Data frame handling I0131 00:04:52.461834 7 log.go:181] (0xc000e0c320) (5) Data frame sent I0131 00:04:52.461851 7 log.go:181] (0xc0004fd130) Data frame received for 5 I0131 00:04:52.461860 7 log.go:181] (0xc000e0c320) (5) Data frame handling I0131 00:04:52.461943 7 log.go:181] (0xc000e0c320) (5) Data frame sent I0131 00:04:52.461968 7 log.go:181] (0xc0004fd130) Data frame received for 5 I0131 00:04:52.461990 7 log.go:181] (0xc000e0c320) (5) Data frame handling I0131 00:04:52.462015 7 log.go:181] (0xc000e0c320) (5) Data frame sent I0131 00:04:52.462028 7 log.go:181] (0xc0004fd130) Data frame received for 5 I0131 00:04:52.462040 7 log.go:181] (0xc000e0c320) (5) Data frame handling I0131 00:04:52.462055 7 log.go:181] (0xc000e0c320) (5) Data frame sent I0131 00:04:52.462067 7 log.go:181] (0xc0004fd130) Data frame received for 5 I0131 00:04:52.462097 7 log.go:181] (0xc000e0c320) (5) Data frame handling I0131 00:04:52.462119 7 log.go:181] (0xc000e0c320) (5) Data frame sent I0131 00:04:52.462134 7 log.go:181] (0xc0004fd130) Data frame received for 5 I0131 00:04:52.462145 7 log.go:181] (0xc000e0c320) (5) Data frame handling I0131 00:04:52.462158 7 log.go:181] (0xc000e0c320) (5) Data frame sent I0131 00:04:52.462175 7 log.go:181] (0xc0004fd130) Data frame received for 5 I0131 00:04:52.462192 7 log.go:181] (0xc000e0c320) (5) Data frame handling I0131 00:04:52.462214 7 
log.go:181] (0xc000e0c320) (5) Data frame sent I0131 00:04:52.462236 7 log.go:181] (0xc0004fd130) Data frame received for 5 I0131 00:04:52.462251 7 log.go:181] (0xc000e0c320) (5) Data frame handling I0131 00:04:52.462271 7 log.go:181] (0xc000e0c320) (5) Data frame sent I0131 00:04:52.462285 7 log.go:181] (0xc0004fd130) Data frame received for 5 I0131 00:04:52.462296 7 log.go:181] (0xc000e0c320) (5) Data frame handling I0131 00:04:52.462368 7 log.go:181] (0xc000e0c320) (5) Data frame sent I0131 00:04:52.462420 7 log.go:181] (0xc0004fd130) Data frame received for 5 I0131 00:04:52.462440 7 log.go:181] (0xc000e0c320) (5) Data frame handling I0131 00:04:52.462457 7 log.go:181] (0xc000e0c320) (5) Data frame sent I0131 00:04:52.462477 7 log.go:181] (0xc0004fd130) Data frame received for 5 I0131 00:04:52.462492 7 log.go:181] (0xc000e0c320) (5) Data frame handling I0131 00:04:52.462504 7 log.go:181] (0xc000e0c320) (5) Data frame sent I0131 00:04:52.462516 7 log.go:181] (0xc0004fd130) Data frame received for 5 I0131 00:04:52.462527 7 log.go:181] (0xc000e0c320) (5) Data frame handling I0131 00:04:52.462554 7 log.go:181] (0xc0004fd130) Data frame received for 3 I0131 00:04:52.462591 7 log.go:181] (0xc004cabf40) (3) Data frame handling I0131 00:04:52.462609 7 log.go:181] (0xc004cabf40) (3) Data frame sent I0131 00:04:52.462668 7 log.go:181] (0xc000e0c320) (5) Data frame sent I0131 00:04:52.463159 7 log.go:181] (0xc0004fd130) Data frame received for 3 I0131 00:04:52.463179 7 log.go:181] (0xc004cabf40) (3) Data frame handling I0131 00:04:52.463219 7 log.go:181] (0xc0004fd130) Data frame received for 5 I0131 00:04:52.463236 7 log.go:181] (0xc000e0c320) (5) Data frame handling I0131 00:04:52.464909 7 log.go:181] (0xc0004fd130) Data frame received for 1 I0131 00:04:52.464946 7 log.go:181] (0xc000612a00) (1) Data frame handling I0131 00:04:52.464955 7 log.go:181] (0xc000612a00) (1) Data frame sent I0131 00:04:52.464965 7 log.go:181] (0xc0004fd130) (0xc000612a00) Stream removed, broadcasting: 1 I0131 00:04:52.464980 7 log.go:181] (0xc0004fd130) Go away received I0131 00:04:52.465053 7 log.go:181] (0xc0004fd130) (0xc000612a00) Stream removed, broadcasting: 1 I0131 00:04:52.465078 7 log.go:181] (0xc0004fd130) (0xc004cabf40) Stream removed, broadcasting: 3 I0131 00:04:52.465097 7 log.go:181] (0xc0004fd130) (0xc000e0c320) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.14, port: 54321 Jan 31 00:04:52.465: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.14:54321/hostname] Namespace:sched-pred-408 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 31 00:04:52.465: INFO: >>> kubeConfig: /root/.kube/config I0131 00:04:52.519304 7 log.go:181] (0xc0004fd810) (0xc000612f00) Create stream I0131 00:04:52.519333 7 log.go:181] (0xc0004fd810) (0xc000612f00) Stream added, broadcasting: 1 I0131 00:04:52.522179 7 log.go:181] (0xc0004fd810) Reply frame received for 1 I0131 00:04:52.522241 7 log.go:181] (0xc0004fd810) (0xc000613040) Create stream I0131 00:04:52.522258 7 log.go:181] (0xc0004fd810) (0xc000613040) Stream added, broadcasting: 3 I0131 00:04:52.523294 7 log.go:181] (0xc0004fd810) Reply frame received for 3 I0131 00:04:52.523330 7 log.go:181] (0xc0004fd810) (0xc000e0c3c0) Create stream I0131 00:04:52.523342 7 log.go:181] (0xc0004fd810) (0xc000e0c3c0) Stream added, broadcasting: 5 I0131 00:04:52.524374 7 log.go:181] (0xc0004fd810) Reply frame 
received for 5 I0131 00:04:52.592515 7 log.go:181] (0xc0004fd810) Data frame received for 5 I0131 00:04:52.592563 7 log.go:181] (0xc000e0c3c0) (5) Data frame handling I0131 00:04:52.592579 7 log.go:181] (0xc000e0c3c0) (5) Data frame sent I0131 00:04:52.592625 7 log.go:181] (0xc0004fd810) Data frame received for 5 I0131 00:04:52.592639 7 log.go:181] (0xc000e0c3c0) (5) Data frame handling I0131 00:04:52.592691 7 log.go:181] (0xc000e0c3c0) (5) Data frame sent I0131 00:04:52.592706 7 log.go:181] (0xc0004fd810) Data frame received for 5 I0131 00:04:52.592720 7 log.go:181] (0xc000e0c3c0) (5) Data frame handling I0131 00:04:52.592737 7 log.go:181] (0xc000e0c3c0) (5) Data frame sent I0131 00:04:52.592759 7 log.go:181] (0xc0004fd810) Data frame received for 5 I0131 00:04:52.592781 7 log.go:181] (0xc000e0c3c0) (5) Data frame handling I0131 00:04:52.592803 7 log.go:181] (0xc000e0c3c0) (5) Data frame sent I0131 00:04:52.592818 7 log.go:181] (0xc0004fd810) Data frame received for 5 I0131 00:04:52.592962 7 log.go:181] (0xc000e0c3c0) (5) Data frame handling I0131 00:04:52.593059 7 log.go:181] (0xc000e0c3c0) (5) Data frame sent I0131 00:04:52.593134 7 log.go:181] (0xc0004fd810) Data frame received for 5 I0131 00:04:52.593204 7 log.go:181] (0xc000e0c3c0) (5) Data frame handling I0131 00:04:52.593250 7 log.go:181] (0xc000e0c3c0) (5) Data frame sent I0131 00:04:52.593277 7 log.go:181] (0xc0004fd810) Data frame received for 5 I0131 00:04:52.593303 7 log.go:181] (0xc000e0c3c0) (5) Data frame handling I0131 00:04:52.593324 7 log.go:181] (0xc0004fd810) Data frame received for 3 I0131 00:04:52.593341 7 log.go:181] (0xc000613040) (3) Data frame handling I0131 00:04:52.593356 7 log.go:181] (0xc000613040) (3) Data frame sent I0131 00:04:52.593385 7 log.go:181] (0xc000e0c3c0) (5) Data frame sent I0131 00:04:52.593399 7 log.go:181] (0xc0004fd810) Data frame received for 5 I0131 00:04:52.593413 7 log.go:181] (0xc000e0c3c0) (5) Data frame handling I0131 00:04:52.593435 7 log.go:181] (0xc000e0c3c0) (5) Data frame sent I0131 00:04:52.593451 7 log.go:181] (0xc0004fd810) Data frame received for 5 I0131 00:04:52.593465 7 log.go:181] (0xc000e0c3c0) (5) Data frame handling I0131 00:04:52.593484 7 log.go:181] (0xc000e0c3c0) (5) Data frame sent I0131 00:04:52.593498 7 log.go:181] (0xc0004fd810) Data frame received for 5 I0131 00:04:52.593514 7 log.go:181] (0xc000e0c3c0) (5) Data frame handling I0131 00:04:52.593534 7 log.go:181] (0xc000e0c3c0) (5) Data frame sent I0131 00:04:52.593548 7 log.go:181] (0xc0004fd810) Data frame received for 5 I0131 00:04:52.593563 7 log.go:181] (0xc000e0c3c0) (5) Data frame handling I0131 00:04:52.593595 7 log.go:181] (0xc000e0c3c0) (5) Data frame sent I0131 00:04:52.593618 7 log.go:181] (0xc0004fd810) Data frame received for 5 I0131 00:04:52.593644 7 log.go:181] (0xc000e0c3c0) (5) Data frame handling I0131 00:04:52.593672 7 log.go:181] (0xc000e0c3c0) (5) Data frame sent I0131 00:04:52.593692 7 log.go:181] (0xc0004fd810) Data frame received for 5 I0131 00:04:52.593710 7 log.go:181] (0xc000e0c3c0) (5) Data frame handling I0131 00:04:52.593740 7 log.go:181] (0xc000e0c3c0) (5) Data frame sent I0131 00:04:52.593758 7 log.go:181] (0xc0004fd810) Data frame received for 5 I0131 00:04:52.593775 7 log.go:181] (0xc000e0c3c0) (5) Data frame handling I0131 00:04:52.593803 7 log.go:181] (0xc000e0c3c0) (5) Data frame sent I0131 00:04:52.593821 7 log.go:181] (0xc0004fd810) Data frame received for 5 I0131 00:04:52.593838 7 log.go:181] (0xc000e0c3c0) (5) Data frame handling I0131 00:04:52.593868 7 log.go:181] 
(0xc000e0c3c0) (5) Data frame sent I0131 00:04:52.593892 7 log.go:181] (0xc0004fd810) Data frame received for 5 I0131 00:04:52.593912 7 log.go:181] (0xc000e0c3c0) (5) Data frame handling I0131 00:04:52.593955 7 log.go:181] (0xc000e0c3c0) (5) Data frame sent I0131 00:04:52.594504 7 log.go:181] (0xc0004fd810) Data frame received for 5 I0131 00:04:52.594544 7 log.go:181] (0xc000e0c3c0) (5) Data frame handling I0131 00:04:52.594577 7 log.go:181] (0xc0004fd810) Data frame received for 3 I0131 00:04:52.594590 7 log.go:181] (0xc000613040) (3) Data frame handling I0131 00:04:52.596285 7 log.go:181] (0xc0004fd810) Data frame received for 1 I0131 00:04:52.596319 7 log.go:181] (0xc000612f00) (1) Data frame handling I0131 00:04:52.596353 7 log.go:181] (0xc000612f00) (1) Data frame sent I0131 00:04:52.596382 7 log.go:181] (0xc0004fd810) (0xc000612f00) Stream removed, broadcasting: 1 I0131 00:04:52.596413 7 log.go:181] (0xc0004fd810) Go away received I0131 00:04:52.596512 7 log.go:181] (0xc0004fd810) (0xc000612f00) Stream removed, broadcasting: 1 I0131 00:04:52.596540 7 log.go:181] (0xc0004fd810) (0xc000613040) Stream removed, broadcasting: 3 I0131 00:04:52.596553 7 log.go:181] (0xc0004fd810) (0xc000e0c3c0) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.14, port: 54321 UDP Jan 31 00:04:52.596: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.14 54321] Namespace:sched-pred-408 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 31 00:04:52.596: INFO: >>> kubeConfig: /root/.kube/config I0131 00:04:52.626399 7 log.go:181] (0xc0004fdef0) (0xc0006132c0) Create stream I0131 00:04:52.626433 7 log.go:181] (0xc0004fdef0) (0xc0006132c0) Stream added, broadcasting: 1 I0131 00:04:52.628595 7 log.go:181] (0xc0004fdef0) Reply frame received for 1 I0131 00:04:52.628653 7 log.go:181] (0xc0004fdef0) (0xc0004ff680) Create stream I0131 00:04:52.628667 7 log.go:181] (0xc0004fdef0) (0xc0004ff680) Stream added, broadcasting: 3 I0131 00:04:52.629707 7 log.go:181] (0xc0004fdef0) Reply frame received for 3 I0131 00:04:52.629752 7 log.go:181] (0xc0004fdef0) (0xc000e0c460) Create stream I0131 00:04:52.629772 7 log.go:181] (0xc0004fdef0) (0xc000e0c460) Stream added, broadcasting: 5 I0131 00:04:52.630724 7 log.go:181] (0xc0004fdef0) Reply frame received for 5 I0131 00:04:57.707978 7 log.go:181] (0xc0004fdef0) Data frame received for 5 I0131 00:04:57.708014 7 log.go:181] (0xc000e0c460) (5) Data frame handling I0131 00:04:57.708029 7 log.go:181] (0xc000e0c460) (5) Data frame sent I0131 00:04:57.708198 7 log.go:181] (0xc0004fdef0) Data frame received for 5 I0131 00:04:57.708235 7 log.go:181] (0xc000e0c460) (5) Data frame handling I0131 00:04:57.708383 7 log.go:181] (0xc0004fdef0) Data frame received for 3 I0131 00:04:57.708432 7 log.go:181] (0xc0004ff680) (3) Data frame handling I0131 00:04:57.710579 7 log.go:181] (0xc0004fdef0) Data frame received for 1 I0131 00:04:57.710598 7 log.go:181] (0xc0006132c0) (1) Data frame handling I0131 00:04:57.710617 7 log.go:181] (0xc0006132c0) (1) Data frame sent I0131 00:04:57.710631 7 log.go:181] (0xc0004fdef0) (0xc0006132c0) Stream removed, broadcasting: 1 I0131 00:04:57.710673 7 log.go:181] (0xc0004fdef0) Go away received I0131 00:04:57.710689 7 log.go:181] (0xc0004fdef0) (0xc0006132c0) Stream removed, broadcasting: 1 I0131 00:04:57.710701 7 log.go:181] (0xc0004fdef0) (0xc0004ff680) Stream removed, broadcasting: 3 I0131 00:04:57.710727 7 
log.go:181] (0xc0004fdef0) (0xc000e0c460) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 Jan 31 00:04:57.710: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.14 http://127.0.0.1:54321/hostname] Namespace:sched-pred-408 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 31 00:04:57.710: INFO: >>> kubeConfig: /root/.kube/config I0131 00:04:57.739334 7 log.go:181] (0xc003759810) (0xc0010d43c0) Create stream I0131 00:04:57.739383 7 log.go:181] (0xc003759810) (0xc0010d43c0) Stream added, broadcasting: 1 I0131 00:04:57.741498 7 log.go:181] (0xc003759810) Reply frame received for 1 I0131 00:04:57.741544 7 log.go:181] (0xc003759810) (0xc0003e7e00) Create stream I0131 00:04:57.741557 7 log.go:181] (0xc003759810) (0xc0003e7e00) Stream added, broadcasting: 3 I0131 00:04:57.742561 7 log.go:181] (0xc003759810) Reply frame received for 3 I0131 00:04:57.742599 7 log.go:181] (0xc003759810) (0xc0004ffb80) Create stream I0131 00:04:57.742621 7 log.go:181] (0xc003759810) (0xc0004ffb80) Stream added, broadcasting: 5 I0131 00:04:57.743679 7 log.go:181] (0xc003759810) Reply frame received for 5 I0131 00:04:57.811902 7 log.go:181] (0xc003759810) Data frame received for 5 I0131 00:04:57.811951 7 log.go:181] (0xc0004ffb80) (5) Data frame handling I0131 00:04:57.811998 7 log.go:181] (0xc0004ffb80) (5) Data frame sent I0131 00:04:57.812082 7 log.go:181] (0xc003759810) Data frame received for 5 I0131 00:04:57.812122 7 log.go:181] (0xc0004ffb80) (5) Data frame handling I0131 00:04:57.812204 7 log.go:181] (0xc0004ffb80) (5) Data frame sent I0131 00:04:57.812272 7 log.go:181] (0xc003759810) Data frame received for 5 I0131 00:04:57.812306 7 log.go:181] (0xc0004ffb80) (5) Data frame handling I0131 00:04:57.812338 7 log.go:181] (0xc0004ffb80) (5) Data frame sent I0131 00:04:57.812362 7 log.go:181] (0xc003759810) Data frame received for 5 I0131 00:04:57.812454 7 log.go:181] (0xc0004ffb80) (5) Data frame handling I0131 00:04:57.812491 7 log.go:181] (0xc0004ffb80) (5) Data frame sent I0131 00:04:57.812525 7 log.go:181] (0xc003759810) Data frame received for 3 I0131 00:04:57.812581 7 log.go:181] (0xc0003e7e00) (3) Data frame handling I0131 00:04:57.812630 7 log.go:181] (0xc0003e7e00) (3) Data frame sent I0131 00:04:57.812661 7 log.go:181] (0xc003759810) Data frame received for 5 I0131 00:04:57.812687 7 log.go:181] (0xc0004ffb80) (5) Data frame handling I0131 00:04:57.812718 7 log.go:181] (0xc0004ffb80) (5) Data frame sent I0131 00:04:57.812730 7 log.go:181] (0xc003759810) Data frame received for 5 I0131 00:04:57.812735 7 log.go:181] (0xc0004ffb80) (5) Data frame handling I0131 00:04:57.812748 7 log.go:181] (0xc0004ffb80) (5) Data frame sent I0131 00:04:57.812828 7 log.go:181] (0xc003759810) Data frame received for 5 I0131 00:04:57.812995 7 log.go:181] (0xc0004ffb80) (5) Data frame handling I0131 00:04:57.813037 7 log.go:181] (0xc003759810) Data frame received for 3 I0131 00:04:57.813059 7 log.go:181] (0xc0003e7e00) (3) Data frame handling I0131 00:04:57.815445 7 log.go:181] (0xc003759810) Data frame received for 1 I0131 00:04:57.815482 7 log.go:181] (0xc0010d43c0) (1) Data frame handling I0131 00:04:57.815510 7 log.go:181] (0xc0010d43c0) (1) Data frame sent I0131 00:04:57.815533 7 log.go:181] (0xc003759810) (0xc0010d43c0) Stream removed, broadcasting: 1 I0131 00:04:57.815554 7 log.go:181] (0xc003759810) Go away received I0131 
00:04:57.815677 7 log.go:181] (0xc003759810) (0xc0010d43c0) Stream removed, broadcasting: 1 I0131 00:04:57.815744 7 log.go:181] (0xc003759810) (0xc0003e7e00) Stream removed, broadcasting: 3 I0131 00:04:57.815768 7 log.go:181] (0xc003759810) (0xc0004ffb80) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.14, port: 54321 Jan 31 00:04:57.815: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.14:54321/hostname] Namespace:sched-pred-408 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 31 00:04:57.815: INFO: >>> kubeConfig: /root/.kube/config I0131 00:04:57.849877 7 log.go:181] (0xc003a78420) (0xc000f88280) Create stream I0131 00:04:57.849911 7 log.go:181] (0xc003a78420) (0xc000f88280) Stream added, broadcasting: 1 I0131 00:04:57.851879 7 log.go:181] (0xc003a78420) Reply frame received for 1 I0131 00:04:57.851917 7 log.go:181] (0xc003a78420) (0xc000613360) Create stream I0131 00:04:57.851929 7 log.go:181] (0xc003a78420) (0xc000613360) Stream added, broadcasting: 3 I0131 00:04:57.852945 7 log.go:181] (0xc003a78420) Reply frame received for 3 I0131 00:04:57.852973 7 log.go:181] (0xc003a78420) (0xc0004ffe00) Create stream I0131 00:04:57.852984 7 log.go:181] (0xc003a78420) (0xc0004ffe00) Stream added, broadcasting: 5 I0131 00:04:57.854073 7 log.go:181] (0xc003a78420) Reply frame received for 5 I0131 00:04:57.914783 7 log.go:181] (0xc003a78420) Data frame received for 5 I0131 00:04:57.914813 7 log.go:181] (0xc0004ffe00) (5) Data frame handling I0131 00:04:57.914828 7 log.go:181] (0xc0004ffe00) (5) Data frame sent I0131 00:04:57.914838 7 log.go:181] (0xc003a78420) Data frame received for 5 I0131 00:04:57.914843 7 log.go:181] (0xc0004ffe00) (5) Data frame handling I0131 00:04:57.914874 7 log.go:181] (0xc0004ffe00) (5) Data frame sent I0131 00:04:57.914885 7 log.go:181] (0xc003a78420) Data frame received for 5 I0131 00:04:57.914892 7 log.go:181] (0xc0004ffe00) (5) Data frame handling I0131 00:04:57.914901 7 log.go:181] (0xc0004ffe00) (5) Data frame sent I0131 00:04:57.915192 7 log.go:181] (0xc003a78420) Data frame received for 5 I0131 00:04:57.915211 7 log.go:181] (0xc0004ffe00) (5) Data frame handling I0131 00:04:57.915222 7 log.go:181] (0xc0004ffe00) (5) Data frame sent I0131 00:04:57.915229 7 log.go:181] (0xc003a78420) Data frame received for 5 I0131 00:04:57.915236 7 log.go:181] (0xc0004ffe00) (5) Data frame handling I0131 00:04:57.915246 7 log.go:181] (0xc0004ffe00) (5) Data frame sent I0131 00:04:57.915253 7 log.go:181] (0xc003a78420) Data frame received for 5 I0131 00:04:57.915259 7 log.go:181] (0xc0004ffe00) (5) Data frame handling I0131 00:04:57.915266 7 log.go:181] (0xc0004ffe00) (5) Data frame sent I0131 00:04:57.915272 7 log.go:181] (0xc003a78420) Data frame received for 5 I0131 00:04:57.915278 7 log.go:181] (0xc0004ffe00) (5) Data frame handling I0131 00:04:57.915321 7 log.go:181] (0xc0004ffe00) (5) Data frame sent I0131 00:04:57.915365 7 log.go:181] (0xc003a78420) Data frame received for 3 I0131 00:04:57.915376 7 log.go:181] (0xc000613360) (3) Data frame handling I0131 00:04:57.915393 7 log.go:181] (0xc000613360) (3) Data frame sent I0131 00:04:57.916269 7 log.go:181] (0xc003a78420) Data frame received for 5 I0131 00:04:57.916294 7 log.go:181] (0xc0004ffe00) (5) Data frame handling I0131 00:04:57.916392 7 log.go:181] (0xc003a78420) Data frame received for 3 I0131 00:04:57.916416 7 log.go:181] (0xc000613360) (3) 
Data frame handling I0131 00:04:57.917992 7 log.go:181] (0xc003a78420) Data frame received for 1 I0131 00:04:57.918019 7 log.go:181] (0xc000f88280) (1) Data frame handling I0131 00:04:57.918034 7 log.go:181] (0xc000f88280) (1) Data frame sent I0131 00:04:57.918114 7 log.go:181] (0xc003a78420) (0xc000f88280) Stream removed, broadcasting: 1 I0131 00:04:57.918141 7 log.go:181] (0xc003a78420) Go away received I0131 00:04:57.918271 7 log.go:181] (0xc003a78420) (0xc000f88280) Stream removed, broadcasting: 1 I0131 00:04:57.918305 7 log.go:181] (0xc003a78420) (0xc000613360) Stream removed, broadcasting: 3 I0131 00:04:57.918330 7 log.go:181] (0xc003a78420) (0xc0004ffe00) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.14, port: 54321 UDP Jan 31 00:04:57.918: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.14 54321] Namespace:sched-pred-408 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 31 00:04:57.918: INFO: >>> kubeConfig: /root/.kube/config I0131 00:04:57.944367 7 log.go:181] (0xc002fa84d0) (0xc0006135e0) Create stream I0131 00:04:57.944398 7 log.go:181] (0xc002fa84d0) (0xc0006135e0) Stream added, broadcasting: 1 I0131 00:04:57.946416 7 log.go:181] (0xc002fa84d0) Reply frame received for 1 I0131 00:04:57.946486 7 log.go:181] (0xc002fa84d0) (0xc000613680) Create stream I0131 00:04:57.946508 7 log.go:181] (0xc002fa84d0) (0xc000613680) Stream added, broadcasting: 3 I0131 00:04:57.947303 7 log.go:181] (0xc002fa84d0) Reply frame received for 3 I0131 00:04:57.947361 7 log.go:181] (0xc002fa84d0) (0xc000e0c500) Create stream I0131 00:04:57.947389 7 log.go:181] (0xc002fa84d0) (0xc000e0c500) Stream added, broadcasting: 5 I0131 00:04:57.948151 7 log.go:181] (0xc002fa84d0) Reply frame received for 5 I0131 00:05:03.007807 7 log.go:181] (0xc002fa84d0) Data frame received for 5 I0131 00:05:03.007844 7 log.go:181] (0xc000e0c500) (5) Data frame handling I0131 00:05:03.007897 7 log.go:181] (0xc000e0c500) (5) Data frame sent I0131 00:05:03.008045 7 log.go:181] (0xc002fa84d0) Data frame received for 3 I0131 00:05:03.008060 7 log.go:181] (0xc000613680) (3) Data frame handling I0131 00:05:03.008087 7 log.go:181] (0xc002fa84d0) Data frame received for 5 I0131 00:05:03.008114 7 log.go:181] (0xc000e0c500) (5) Data frame handling I0131 00:05:03.010757 7 log.go:181] (0xc002fa84d0) Data frame received for 1 I0131 00:05:03.010794 7 log.go:181] (0xc0006135e0) (1) Data frame handling I0131 00:05:03.010839 7 log.go:181] (0xc0006135e0) (1) Data frame sent I0131 00:05:03.010881 7 log.go:181] (0xc002fa84d0) (0xc0006135e0) Stream removed, broadcasting: 1 I0131 00:05:03.010918 7 log.go:181] (0xc002fa84d0) Go away received I0131 00:05:03.011027 7 log.go:181] (0xc002fa84d0) (0xc0006135e0) Stream removed, broadcasting: 1 I0131 00:05:03.011062 7 log.go:181] (0xc002fa84d0) (0xc000613680) Stream removed, broadcasting: 3 I0131 00:05:03.011094 7 log.go:181] (0xc002fa84d0) (0xc000e0c500) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 Jan 31 00:05:03.011: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.14 http://127.0.0.1:54321/hostname] Namespace:sched-pred-408 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 31 00:05:03.011: INFO: >>> kubeConfig: /root/.kube/config I0131 
00:05:03.047502 7 log.go:181] (0xc000927080) (0xc00118e460) Create stream I0131 00:05:03.047541 7 log.go:181] (0xc000927080) (0xc00118e460) Stream added, broadcasting: 1 I0131 00:05:03.050535 7 log.go:181] (0xc000927080) Reply frame received for 1 I0131 00:05:03.050614 7 log.go:181] (0xc000927080) (0xc000613720) Create stream I0131 00:05:03.050630 7 log.go:181] (0xc000927080) (0xc000613720) Stream added, broadcasting: 3 I0131 00:05:03.051773 7 log.go:181] (0xc000927080) Reply frame received for 3 I0131 00:05:03.051829 7 log.go:181] (0xc000927080) (0xc0010d4820) Create stream I0131 00:05:03.051843 7 log.go:181] (0xc000927080) (0xc0010d4820) Stream added, broadcasting: 5 I0131 00:05:03.052829 7 log.go:181] (0xc000927080) Reply frame received for 5 I0131 00:05:03.142638 7 log.go:181] (0xc000927080) Data frame received for 5 I0131 00:05:03.142684 7 log.go:181] (0xc0010d4820) (5) Data frame handling I0131 00:05:03.142707 7 log.go:181] (0xc0010d4820) (5) Data frame sent I0131 00:05:03.142719 7 log.go:181] (0xc000927080) Data frame received for 5 I0131 00:05:03.142739 7 log.go:181] (0xc0010d4820) (5) Data frame handling I0131 00:05:03.142768 7 log.go:181] (0xc0010d4820) (5) Data frame sent I0131 00:05:03.142784 7 log.go:181] (0xc000927080) Data frame received for 5 I0131 00:05:03.142797 7 log.go:181] (0xc0010d4820) (5) Data frame handling I0131 00:05:03.142822 7 log.go:181] (0xc0010d4820) (5) Data frame sent I0131 00:05:03.142850 7 log.go:181] (0xc000927080) Data frame received for 5 I0131 00:05:03.142869 7 log.go:181] (0xc0010d4820) (5) Data frame handling I0131 00:05:03.142886 7 log.go:181] (0xc0010d4820) (5) Data frame sent I0131 00:05:03.142900 7 log.go:181] (0xc000927080) Data frame received for 5 I0131 00:05:03.142916 7 log.go:181] (0xc0010d4820) (5) Data frame handling I0131 00:05:03.142936 7 log.go:181] (0xc0010d4820) (5) Data frame sent I0131 00:05:03.142949 7 log.go:181] (0xc000927080) Data frame received for 5 I0131 00:05:03.142963 7 log.go:181] (0xc0010d4820) (5) Data frame handling I0131 00:05:03.142982 7 log.go:181] (0xc0010d4820) (5) Data frame sent I0131 00:05:03.143441 7 log.go:181] (0xc000927080) Data frame received for 5 I0131 00:05:03.143460 7 log.go:181] (0xc0010d4820) (5) Data frame handling I0131 00:05:03.143479 7 log.go:181] (0xc0010d4820) (5) Data frame sent I0131 00:05:03.143508 7 log.go:181] (0xc000927080) Data frame received for 3 I0131 00:05:03.143546 7 log.go:181] (0xc000613720) (3) Data frame handling I0131 00:05:03.143569 7 log.go:181] (0xc000613720) (3) Data frame sent I0131 00:05:03.143592 7 log.go:181] (0xc000927080) Data frame received for 5 I0131 00:05:03.143613 7 log.go:181] (0xc0010d4820) (5) Data frame handling I0131 00:05:03.143636 7 log.go:181] (0xc0010d4820) (5) Data frame sent I0131 00:05:03.143981 7 log.go:181] (0xc000927080) Data frame received for 5 I0131 00:05:03.144005 7 log.go:181] (0xc0010d4820) (5) Data frame handling I0131 00:05:03.144047 7 log.go:181] (0xc000927080) Data frame received for 3 I0131 00:05:03.144088 7 log.go:181] (0xc000613720) (3) Data frame handling I0131 00:05:03.146037 7 log.go:181] (0xc000927080) Data frame received for 1 I0131 00:05:03.146069 7 log.go:181] (0xc00118e460) (1) Data frame handling I0131 00:05:03.146090 7 log.go:181] (0xc00118e460) (1) Data frame sent I0131 00:05:03.146106 7 log.go:181] (0xc000927080) (0xc00118e460) Stream removed, broadcasting: 1 I0131 00:05:03.146123 7 log.go:181] (0xc000927080) Go away received I0131 00:05:03.146332 7 log.go:181] (0xc000927080) (0xc00118e460) Stream removed, broadcasting: 1 
I0131 00:05:03.146371 7 log.go:181] (0xc000927080) (0xc000613720) Stream removed, broadcasting: 3 I0131 00:05:03.146385 7 log.go:181] (0xc000927080) (0xc0010d4820) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.14, port: 54321 Jan 31 00:05:03.146: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.14:54321/hostname] Namespace:sched-pred-408 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 31 00:05:03.146: INFO: >>> kubeConfig: /root/.kube/config I0131 00:05:03.181761 7 log.go:181] (0xc003a78b00) (0xc000f88500) Create stream I0131 00:05:03.181788 7 log.go:181] (0xc003a78b00) (0xc000f88500) Stream added, broadcasting: 1 I0131 00:05:03.184096 7 log.go:181] (0xc003a78b00) Reply frame received for 1 I0131 00:05:03.184134 7 log.go:181] (0xc003a78b00) (0xc000f885a0) Create stream I0131 00:05:03.184152 7 log.go:181] (0xc003a78b00) (0xc000f885a0) Stream added, broadcasting: 3 I0131 00:05:03.185393 7 log.go:181] (0xc003a78b00) Reply frame received for 3 I0131 00:05:03.185455 7 log.go:181] (0xc003a78b00) (0xc000f88640) Create stream I0131 00:05:03.185491 7 log.go:181] (0xc003a78b00) (0xc000f88640) Stream added, broadcasting: 5 I0131 00:05:03.186502 7 log.go:181] (0xc003a78b00) Reply frame received for 5 I0131 00:05:03.238362 7 log.go:181] (0xc003a78b00) Data frame received for 5 I0131 00:05:03.238395 7 log.go:181] (0xc000f88640) (5) Data frame handling I0131 00:05:03.238420 7 log.go:181] (0xc000f88640) (5) Data frame sent I0131 00:05:03.238440 7 log.go:181] (0xc003a78b00) Data frame received for 5 I0131 00:05:03.238455 7 log.go:181] (0xc000f88640) (5) Data frame handling I0131 00:05:03.238488 7 log.go:181] (0xc000f88640) (5) Data frame sent I0131 00:05:03.238814 7 log.go:181] (0xc003a78b00) Data frame received for 5 I0131 00:05:03.238853 7 log.go:181] (0xc000f88640) (5) Data frame handling I0131 00:05:03.238888 7 log.go:181] (0xc000f88640) (5) Data frame sent I0131 00:05:03.238952 7 log.go:181] (0xc003a78b00) Data frame received for 3 I0131 00:05:03.238988 7 log.go:181] (0xc000f885a0) (3) Data frame handling I0131 00:05:03.239022 7 log.go:181] (0xc000f885a0) (3) Data frame sent I0131 00:05:03.239225 7 log.go:181] (0xc003a78b00) Data frame received for 3 I0131 00:05:03.239253 7 log.go:181] (0xc000f885a0) (3) Data frame handling I0131 00:05:03.239420 7 log.go:181] (0xc003a78b00) Data frame received for 5 I0131 00:05:03.239457 7 log.go:181] (0xc000f88640) (5) Data frame handling I0131 00:05:03.241412 7 log.go:181] (0xc003a78b00) Data frame received for 1 I0131 00:05:03.241440 7 log.go:181] (0xc000f88500) (1) Data frame handling I0131 00:05:03.241453 7 log.go:181] (0xc000f88500) (1) Data frame sent I0131 00:05:03.241468 7 log.go:181] (0xc003a78b00) (0xc000f88500) Stream removed, broadcasting: 1 I0131 00:05:03.241493 7 log.go:181] (0xc003a78b00) Go away received I0131 00:05:03.241584 7 log.go:181] (0xc003a78b00) (0xc000f88500) Stream removed, broadcasting: 1 I0131 00:05:03.241609 7 log.go:181] (0xc003a78b00) (0xc000f885a0) Stream removed, broadcasting: 3 I0131 00:05:03.241628 7 log.go:181] (0xc003a78b00) (0xc000f88640) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.14, port: 54321 UDP Jan 31 00:05:03.241: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.14 54321] Namespace:sched-pred-408 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 31 00:05:03.241: INFO: >>> kubeConfig: /root/.kube/config I0131 00:05:03.277884 7 log.go:181] (0xc00002f810) (0xc000e0c780) Create stream I0131 00:05:03.277910 7 log.go:181] (0xc00002f810) (0xc000e0c780) Stream added, broadcasting: 1 I0131 00:05:03.286404 7 log.go:181] (0xc00002f810) Reply frame received for 1 I0131 00:05:03.286472 7 log.go:181] (0xc00002f810) (0xc0006137c0) Create stream I0131 00:05:03.286502 7 log.go:181] (0xc00002f810) (0xc0006137c0) Stream added, broadcasting: 3 I0131 00:05:03.288576 7 log.go:181] (0xc00002f810) Reply frame received for 3 I0131 00:05:03.288685 7 log.go:181] (0xc00002f810) (0xc00118e5a0) Create stream I0131 00:05:03.288754 7 log.go:181] (0xc00002f810) (0xc00118e5a0) Stream added, broadcasting: 5 I0131 00:05:03.290277 7 log.go:181] (0xc00002f810) Reply frame received for 5 I0131 00:05:08.336082 7 log.go:181] (0xc00002f810) Data frame received for 5 I0131 00:05:08.336112 7 log.go:181] (0xc00118e5a0) (5) Data frame handling I0131 00:05:08.336121 7 log.go:181] (0xc00118e5a0) (5) Data frame sent I0131 00:05:08.336128 7 log.go:181] (0xc00002f810) Data frame received for 5 I0131 00:05:08.336134 7 log.go:181] (0xc00118e5a0) (5) Data frame handling I0131 00:05:08.336220 7 log.go:181] (0xc00002f810) Data frame received for 3 I0131 00:05:08.336270 7 log.go:181] (0xc0006137c0) (3) Data frame handling I0131 00:05:08.339384 7 log.go:181] (0xc00002f810) Data frame received for 1 I0131 00:05:08.339427 7 log.go:181] (0xc000e0c780) (1) Data frame handling I0131 00:05:08.339459 7 log.go:181] (0xc000e0c780) (1) Data frame sent I0131 00:05:08.339485 7 log.go:181] (0xc00002f810) (0xc000e0c780) Stream removed, broadcasting: 1 I0131 00:05:08.339510 7 log.go:181] (0xc00002f810) Go away received I0131 00:05:08.339577 7 log.go:181] (0xc00002f810) (0xc000e0c780) Stream removed, broadcasting: 1 I0131 00:05:08.339668 7 log.go:181] (0xc00002f810) (0xc0006137c0) Stream removed, broadcasting: 3 I0131 00:05:08.339699 7 log.go:181] (0xc00002f810) (0xc00118e5a0) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 Jan 31 00:05:08.339: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.14 http://127.0.0.1:54321/hostname] Namespace:sched-pred-408 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 31 00:05:08.339: INFO: >>> kubeConfig: /root/.kube/config I0131 00:05:08.394754 7 log.go:181] (0xc000927340) (0xc00118e960) Create stream I0131 00:05:08.394777 7 log.go:181] (0xc000927340) (0xc00118e960) Stream added, broadcasting: 1 I0131 00:05:08.397159 7 log.go:181] (0xc000927340) Reply frame received for 1 I0131 00:05:08.397206 7 log.go:181] (0xc000927340) (0xc0010d4960) Create stream I0131 00:05:08.397231 7 log.go:181] (0xc000927340) (0xc0010d4960) Stream added, broadcasting: 3 I0131 00:05:08.398529 7 log.go:181] (0xc000927340) Reply frame received for 3 I0131 00:05:08.398549 7 log.go:181] (0xc000927340) (0xc000613900) Create stream I0131 00:05:08.398561 7 log.go:181] (0xc000927340) (0xc000613900) Stream added, broadcasting: 5 I0131 00:05:08.399431 7 log.go:181] (0xc000927340) Reply frame received for 5 I0131 00:05:08.469846 7 log.go:181] (0xc000927340) Data frame received for 5 I0131 00:05:08.469921 7 log.go:181] (0xc000613900) (5) Data frame handling I0131 00:05:08.469958 7 log.go:181] (0xc000613900) (5) Data frame sent 
I0131 00:05:08.469978 7 log.go:181] (0xc000927340) Data frame received for 5 I0131 00:05:08.469987 7 log.go:181] (0xc000613900) (5) Data frame handling I0131 00:05:08.470042 7 log.go:181] (0xc000613900) (5) Data frame sent I0131 00:05:08.470055 7 log.go:181] (0xc000927340) Data frame received for 5 I0131 00:05:08.470074 7 log.go:181] (0xc000613900) (5) Data frame handling I0131 00:05:08.470091 7 log.go:181] (0xc000613900) (5) Data frame sent I0131 00:05:08.470231 7 log.go:181] (0xc000927340) Data frame received for 5 I0131 00:05:08.470270 7 log.go:181] (0xc000613900) (5) Data frame handling I0131 00:05:08.470299 7 log.go:181] (0xc000613900) (5) Data frame sent I0131 00:05:08.470313 7 log.go:181] (0xc000927340) Data frame received for 5 I0131 00:05:08.470323 7 log.go:181] (0xc000613900) (5) Data frame handling I0131 00:05:08.470338 7 log.go:181] (0xc000613900) (5) Data frame sent I0131 00:05:08.470352 7 log.go:181] (0xc000927340) Data frame received for 3 I0131 00:05:08.470361 7 log.go:181] (0xc0010d4960) (3) Data frame handling I0131 00:05:08.470377 7 log.go:181] (0xc0010d4960) (3) Data frame sent I0131 00:05:08.470837 7 log.go:181] (0xc000927340) Data frame received for 3 I0131 00:05:08.470855 7 log.go:181] (0xc0010d4960) (3) Data frame handling I0131 00:05:08.470889 7 log.go:181] (0xc000927340) Data frame received for 5 I0131 00:05:08.470916 7 log.go:181] (0xc000613900) (5) Data frame handling I0131 00:05:08.472142 7 log.go:181] (0xc000927340) Data frame received for 1 I0131 00:05:08.472161 7 log.go:181] (0xc00118e960) (1) Data frame handling I0131 00:05:08.472168 7 log.go:181] (0xc00118e960) (1) Data frame sent I0131 00:05:08.472177 7 log.go:181] (0xc000927340) (0xc00118e960) Stream removed, broadcasting: 1 I0131 00:05:08.472192 7 log.go:181] (0xc000927340) Go away received I0131 00:05:08.472287 7 log.go:181] (0xc000927340) (0xc00118e960) Stream removed, broadcasting: 1 I0131 00:05:08.472311 7 log.go:181] (0xc000927340) (0xc0010d4960) Stream removed, broadcasting: 3 I0131 00:05:08.472329 7 log.go:181] (0xc000927340) (0xc000613900) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.14, port: 54321 Jan 31 00:05:08.472: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.14:54321/hostname] Namespace:sched-pred-408 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 31 00:05:08.472: INFO: >>> kubeConfig: /root/.kube/config I0131 00:05:08.506305 7 log.go:181] (0xc0024b6370) (0xc0010d4c80) Create stream I0131 00:05:08.506332 7 log.go:181] (0xc0024b6370) (0xc0010d4c80) Stream added, broadcasting: 1 I0131 00:05:08.508562 7 log.go:181] (0xc0024b6370) Reply frame received for 1 I0131 00:05:08.508606 7 log.go:181] (0xc0024b6370) (0xc0010d4d20) Create stream I0131 00:05:08.508628 7 log.go:181] (0xc0024b6370) (0xc0010d4d20) Stream added, broadcasting: 3 I0131 00:05:08.509543 7 log.go:181] (0xc0024b6370) Reply frame received for 3 I0131 00:05:08.509587 7 log.go:181] (0xc0024b6370) (0xc000e0c820) Create stream I0131 00:05:08.509601 7 log.go:181] (0xc0024b6370) (0xc000e0c820) Stream added, broadcasting: 5 I0131 00:05:08.510404 7 log.go:181] (0xc0024b6370) Reply frame received for 5 I0131 00:05:08.574624 7 log.go:181] (0xc0024b6370) Data frame received for 5 I0131 00:05:08.574678 7 log.go:181] (0xc000e0c820) (5) Data frame handling I0131 00:05:08.574703 7 log.go:181] (0xc000e0c820) (5) Data frame sent I0131 00:05:08.574716 7 
log.go:181] (0xc0024b6370) Data frame received for 5 I0131 00:05:08.574726 7 log.go:181] (0xc000e0c820) (5) Data frame handling I0131 00:05:08.574803 7 log.go:181] (0xc000e0c820) (5) Data frame sent I0131 00:05:08.574834 7 log.go:181] (0xc0024b6370) Data frame received for 5 I0131 00:05:08.574854 7 log.go:181] (0xc000e0c820) (5) Data frame handling I0131 00:05:08.574872 7 log.go:181] (0xc000e0c820) (5) Data frame sent I0131 00:05:08.574883 7 log.go:181] (0xc0024b6370) Data frame received for 5 I0131 00:05:08.574900 7 log.go:181] (0xc000e0c820) (5) Data frame handling I0131 00:05:08.574933 7 log.go:181] (0xc000e0c820) (5) Data frame sent I0131 00:05:08.575080 7 log.go:181] (0xc0024b6370) Data frame received for 5 I0131 00:05:08.575118 7 log.go:181] (0xc000e0c820) (5) Data frame handling I0131 00:05:08.575144 7 log.go:181] (0xc000e0c820) (5) Data frame sent I0131 00:05:08.575165 7 log.go:181] (0xc0024b6370) Data frame received for 5 I0131 00:05:08.575179 7 log.go:181] (0xc000e0c820) (5) Data frame handling I0131 00:05:08.575195 7 log.go:181] (0xc000e0c820) (5) Data frame sent I0131 00:05:08.575218 7 log.go:181] (0xc0024b6370) Data frame received for 3 I0131 00:05:08.575228 7 log.go:181] (0xc0010d4d20) (3) Data frame handling I0131 00:05:08.575242 7 log.go:181] (0xc0010d4d20) (3) Data frame sent I0131 00:05:08.576003 7 log.go:181] (0xc0024b6370) Data frame received for 5 I0131 00:05:08.576034 7 log.go:181] (0xc000e0c820) (5) Data frame handling I0131 00:05:08.576284 7 log.go:181] (0xc0024b6370) Data frame received for 3 I0131 00:05:08.576321 7 log.go:181] (0xc0010d4d20) (3) Data frame handling I0131 00:05:08.578312 7 log.go:181] (0xc0024b6370) Data frame received for 1 I0131 00:05:08.578340 7 log.go:181] (0xc0010d4c80) (1) Data frame handling I0131 00:05:08.578364 7 log.go:181] (0xc0010d4c80) (1) Data frame sent I0131 00:05:08.578383 7 log.go:181] (0xc0024b6370) (0xc0010d4c80) Stream removed, broadcasting: 1 I0131 00:05:08.578409 7 log.go:181] (0xc0024b6370) Go away received I0131 00:05:08.578483 7 log.go:181] (0xc0024b6370) (0xc0010d4c80) Stream removed, broadcasting: 1 I0131 00:05:08.578503 7 log.go:181] (0xc0024b6370) (0xc0010d4d20) Stream removed, broadcasting: 3 I0131 00:05:08.578513 7 log.go:181] (0xc0024b6370) (0xc000e0c820) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.14, port: 54321 UDP Jan 31 00:05:08.578: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.14 54321] Namespace:sched-pred-408 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 31 00:05:08.578: INFO: >>> kubeConfig: /root/.kube/config I0131 00:05:08.632473 7 log.go:181] (0xc0024b6a50) (0xc0010d52c0) Create stream I0131 00:05:08.632505 7 log.go:181] (0xc0024b6a50) (0xc0010d52c0) Stream added, broadcasting: 1 I0131 00:05:08.635153 7 log.go:181] (0xc0024b6a50) Reply frame received for 1 I0131 00:05:08.635203 7 log.go:181] (0xc0024b6a50) (0xc000e0c8c0) Create stream I0131 00:05:08.635226 7 log.go:181] (0xc0024b6a50) (0xc000e0c8c0) Stream added, broadcasting: 3 I0131 00:05:08.636431 7 log.go:181] (0xc0024b6a50) Reply frame received for 3 I0131 00:05:08.636467 7 log.go:181] (0xc0024b6a50) (0xc000e0c960) Create stream I0131 00:05:08.636483 7 log.go:181] (0xc0024b6a50) (0xc000e0c960) Stream added, broadcasting: 5 I0131 00:05:08.637753 7 log.go:181] (0xc0024b6a50) Reply frame received for 5 I0131 00:05:13.707231 7 log.go:181] (0xc0024b6a50) Data frame received for 5 I0131 
00:05:13.707271 7 log.go:181] (0xc000e0c960) (5) Data frame handling I0131 00:05:13.707305 7 log.go:181] (0xc000e0c960) (5) Data frame sent I0131 00:05:13.707329 7 log.go:181] (0xc0024b6a50) Data frame received for 5 I0131 00:05:13.707342 7 log.go:181] (0xc000e0c960) (5) Data frame handling I0131 00:05:13.707535 7 log.go:181] (0xc0024b6a50) Data frame received for 3 I0131 00:05:13.707552 7 log.go:181] (0xc000e0c8c0) (3) Data frame handling I0131 00:05:13.709315 7 log.go:181] (0xc0024b6a50) Data frame received for 1 I0131 00:05:13.709350 7 log.go:181] (0xc0010d52c0) (1) Data frame handling I0131 00:05:13.709403 7 log.go:181] (0xc0010d52c0) (1) Data frame sent I0131 00:05:13.709429 7 log.go:181] (0xc0024b6a50) (0xc0010d52c0) Stream removed, broadcasting: 1 I0131 00:05:13.709545 7 log.go:181] (0xc0024b6a50) (0xc0010d52c0) Stream removed, broadcasting: 1 I0131 00:05:13.709579 7 log.go:181] (0xc0024b6a50) (0xc000e0c8c0) Stream removed, broadcasting: 3 I0131 00:05:13.709608 7 log.go:181] (0xc0024b6a50) (0xc000e0c960) Stream removed, broadcasting: 5 STEP: removing the label kubernetes.io/e2e-a5f137e4-6708-42e0-800c-6c0857f4b921 off the node latest-worker I0131 00:05:13.709695 7 log.go:181] (0xc0024b6a50) Go away received STEP: verifying the node doesn't have the label kubernetes.io/e2e-a5f137e4-6708-42e0-800c-6c0857f4b921 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:05:13.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-408" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 • [SLOW TEST:47.215 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":311,"completed":19,"skipped":383,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:05:13.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1520 [It] should create a pod from an image when restart is Never [Conformance] 
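For context on the SchedulerPredicates case that just passed: the suite runs pods that all claim hostPort 54321 but bind different hostIP/protocol combinations (172.18.0.14 TCP, 172.18.0.14 UDP, and 127.0.0.1 TCP), then probes every binding from the e2e-host-exec pod. The probe commands are exactly the ones recorded above; a roughly equivalent manual replay with kubectl exec would look like the sketch below (the suite drives the exec through the API via ExecWithOptions, and the IP, port, and namespace are specific to this run):

  # TCP listener bound to the node IP
  kubectl exec -n sched-pred-408 e2e-host-exec -- /bin/sh -c 'curl -g --connect-timeout 5 http://172.18.0.14:54321/hostname'
  # UDP listener on the same IP and port
  kubectl exec -n sched-pred-408 e2e-host-exec -- /bin/sh -c 'nc -vuz -w 5 172.18.0.14 54321'
  # TCP listener bound to 127.0.0.1, reached through the node interface
  kubectl exec -n sched-pred-408 e2e-host-exec -- /bin/sh -c 'curl -g --connect-timeout 5 --interface 172.18.0.14 http://127.0.0.1:54321/hostname'

All probes answering is what lets the test conclude that pods sharing a hostPort do not conflict as long as the hostIP or protocol differs.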
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 31 00:05:13.918: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-4592 run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine' Jan 31 00:05:14.022: INFO: stderr: "" Jan 31 00:05:14.022: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1524 Jan 31 00:05:14.040: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-4592 delete pods e2e-test-httpd-pod' Jan 31 00:05:21.323: INFO: stderr: "" Jan 31 00:05:21.323: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:05:21.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4592" for this suite. • [SLOW TEST:7.589 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1517 should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":311,"completed":20,"skipped":416,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:05:21.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Jan 31 00:05:22.290: INFO: Waiting up to 1m0s for all nodes to be ready Jan 31 00:06:22.314: INFO: Waiting for terminating namespaces to be deleted... 
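The Kubectl run pod case above exercises kubectl run with --restart=Never, which creates a bare Pod (no managing controller) whose spec.restartPolicy is Never. The invocation and cleanup are taken from the log; the middle line is one illustrative way to check the policy by hand (the suite itself verifies the pod through the API instead):

  kubectl run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine
  kubectl get pod e2e-test-httpd-pod -o jsonpath='{.spec.restartPolicy}'   # prints: Never
  kubectl delete pod e2e-test-httpd-pod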
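Separately, the two "Forbidden" messages in the PriorityClass section that follows are the expected outcome, not a failure: a PriorityClass's value field is immutable once created, so the test's update requests against p1 and p2 must be rejected. The rejection is straightforward to reproduce outside the suite (class name and values here are illustrative):

  kubectl create priorityclass demo --value=1000
  kubectl patch priorityclass demo --type=merge -p '{"value": 2000}'
  # fails with an error like: PriorityClass.scheduling.k8s.io "demo" is invalid:
  # Value: Forbidden: may not be changed in an update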
[BeforeEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:06:22.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:679 [It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Jan 31 00:06:22.428: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden: may not be changed in an update. Jan 31 00:06:22.431: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: Value: Forbidden: may not be changed in an update. [AfterEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:06:22.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-7656" for this suite. [AfterEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:693 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:06:22.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-3717" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:61.208 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:673 verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":311,"completed":21,"skipped":492,"failed":0} SSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:06:22.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: starting an echo server on multiple 
ports STEP: creating replication controller proxy-service-vwnbl in namespace proxy-3862 I0131 00:06:22.758516 7 runners.go:190] Created replication controller with name: proxy-service-vwnbl, namespace: proxy-3862, replica count: 1 I0131 00:06:23.809100 7 runners.go:190] proxy-service-vwnbl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0131 00:06:24.809419 7 runners.go:190] proxy-service-vwnbl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0131 00:06:25.809653 7 runners.go:190] proxy-service-vwnbl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0131 00:06:26.809909 7 runners.go:190] proxy-service-vwnbl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0131 00:06:27.810068 7 runners.go:190] proxy-service-vwnbl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0131 00:06:28.810270 7 runners.go:190] proxy-service-vwnbl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0131 00:06:29.810511 7 runners.go:190] proxy-service-vwnbl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 31 00:06:29.813: INFO: setup took 7.118822397s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Jan 31 00:06:29.819: INFO: (0) /api/v1/namespaces/proxy-3862/pods/http:proxy-service-vwnbl-sxr8p:160/proxy/: foo (200; 5.613966ms) Jan 31 00:06:29.819: INFO: (0) /api/v1/namespaces/proxy-3862/pods/http:proxy-service-vwnbl-sxr8p:1080/proxy/: ... (200; 5.690934ms) Jan 31 00:06:29.820: INFO: (0) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p:1080/proxy/: test<... 
(200; 6.68665ms) Jan 31 00:06:29.820: INFO: (0) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p:162/proxy/: bar (200; 6.960508ms) Jan 31 00:06:29.821: INFO: (0) /api/v1/namespaces/proxy-3862/pods/http:proxy-service-vwnbl-sxr8p:162/proxy/: bar (200; 8.099354ms) Jan 31 00:06:29.821: INFO: (0) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p/proxy/: test (200; 7.990162ms) Jan 31 00:06:29.821: INFO: (0) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p:160/proxy/: foo (200; 8.147875ms) Jan 31 00:06:29.821: INFO: (0) /api/v1/namespaces/proxy-3862/services/proxy-service-vwnbl:portname2/proxy/: bar (200; 8.106931ms) Jan 31 00:06:29.822: INFO: (0) /api/v1/namespaces/proxy-3862/services/http:proxy-service-vwnbl:portname1/proxy/: foo (200; 9.174397ms) Jan 31 00:06:29.822: INFO: (0) /api/v1/namespaces/proxy-3862/services/proxy-service-vwnbl:portname1/proxy/: foo (200; 9.129383ms) Jan 31 00:06:29.822: INFO: (0) /api/v1/namespaces/proxy-3862/services/http:proxy-service-vwnbl:portname2/proxy/: bar (200; 9.259219ms) Jan 31 00:06:29.854: INFO: (0) /api/v1/namespaces/proxy-3862/pods/https:proxy-service-vwnbl-sxr8p:462/proxy/: tls qux (200; 40.947138ms) Jan 31 00:06:29.854: INFO: (0) /api/v1/namespaces/proxy-3862/services/https:proxy-service-vwnbl:tlsportname2/proxy/: tls qux (200; 40.875424ms) Jan 31 00:06:29.854: INFO: (0) /api/v1/namespaces/proxy-3862/pods/https:proxy-service-vwnbl-sxr8p:460/proxy/: tls baz (200; 40.884738ms) Jan 31 00:06:29.854: INFO: (0) /api/v1/namespaces/proxy-3862/services/https:proxy-service-vwnbl:tlsportname1/proxy/: tls baz (200; 40.920513ms) Jan 31 00:06:29.854: INFO: (0) /api/v1/namespaces/proxy-3862/pods/https:proxy-service-vwnbl-sxr8p:443/proxy/: ... (200; 4.900185ms) Jan 31 00:06:29.860: INFO: (1) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p:162/proxy/: bar (200; 6.00688ms) Jan 31 00:06:29.861: INFO: (1) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p:160/proxy/: foo (200; 7.059677ms) Jan 31 00:06:29.861: INFO: (1) /api/v1/namespaces/proxy-3862/pods/https:proxy-service-vwnbl-sxr8p:462/proxy/: tls qux (200; 7.160555ms) Jan 31 00:06:29.861: INFO: (1) /api/v1/namespaces/proxy-3862/services/proxy-service-vwnbl:portname2/proxy/: bar (200; 7.141979ms) Jan 31 00:06:29.863: INFO: (1) /api/v1/namespaces/proxy-3862/services/http:proxy-service-vwnbl:portname1/proxy/: foo (200; 8.919085ms) Jan 31 00:06:29.863: INFO: (1) /api/v1/namespaces/proxy-3862/pods/http:proxy-service-vwnbl-sxr8p:160/proxy/: foo (200; 8.976082ms) Jan 31 00:06:29.864: INFO: (1) /api/v1/namespaces/proxy-3862/pods/https:proxy-service-vwnbl-sxr8p:460/proxy/: tls baz (200; 9.80169ms) Jan 31 00:06:29.864: INFO: (1) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p:1080/proxy/: test<... 
(200; 9.884093ms) Jan 31 00:06:29.864: INFO: (1) /api/v1/namespaces/proxy-3862/services/https:proxy-service-vwnbl:tlsportname1/proxy/: tls baz (200; 9.979678ms) Jan 31 00:06:29.864: INFO: (1) /api/v1/namespaces/proxy-3862/services/http:proxy-service-vwnbl:portname2/proxy/: bar (200; 10.077317ms) Jan 31 00:06:29.864: INFO: (1) /api/v1/namespaces/proxy-3862/services/proxy-service-vwnbl:portname1/proxy/: foo (200; 10.073995ms) Jan 31 00:06:29.864: INFO: (1) /api/v1/namespaces/proxy-3862/services/https:proxy-service-vwnbl:tlsportname2/proxy/: tls qux (200; 10.028848ms) Jan 31 00:06:29.864: INFO: (1) /api/v1/namespaces/proxy-3862/pods/http:proxy-service-vwnbl-sxr8p:162/proxy/: bar (200; 10.082253ms) Jan 31 00:06:29.864: INFO: (1) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p/proxy/: test (200; 10.192795ms) Jan 31 00:06:29.864: INFO: (1) /api/v1/namespaces/proxy-3862/pods/https:proxy-service-vwnbl-sxr8p:443/proxy/: test<... (200; 2.504308ms) Jan 31 00:06:29.868: INFO: (2) /api/v1/namespaces/proxy-3862/pods/https:proxy-service-vwnbl-sxr8p:462/proxy/: tls qux (200; 3.047424ms) Jan 31 00:06:29.869: INFO: (2) /api/v1/namespaces/proxy-3862/pods/http:proxy-service-vwnbl-sxr8p:160/proxy/: foo (200; 4.422126ms) Jan 31 00:06:29.869: INFO: (2) /api/v1/namespaces/proxy-3862/services/http:proxy-service-vwnbl:portname1/proxy/: foo (200; 4.427951ms) Jan 31 00:06:29.869: INFO: (2) /api/v1/namespaces/proxy-3862/pods/https:proxy-service-vwnbl-sxr8p:460/proxy/: tls baz (200; 4.513635ms) Jan 31 00:06:29.869: INFO: (2) /api/v1/namespaces/proxy-3862/services/http:proxy-service-vwnbl:portname2/proxy/: bar (200; 4.579714ms) Jan 31 00:06:29.869: INFO: (2) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p:162/proxy/: bar (200; 4.5083ms) Jan 31 00:06:29.869: INFO: (2) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p/proxy/: test (200; 4.788492ms) Jan 31 00:06:29.870: INFO: (2) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p:160/proxy/: foo (200; 5.301712ms) Jan 31 00:06:29.870: INFO: (2) /api/v1/namespaces/proxy-3862/services/https:proxy-service-vwnbl:tlsportname2/proxy/: tls qux (200; 5.331014ms) Jan 31 00:06:29.870: INFO: (2) /api/v1/namespaces/proxy-3862/services/proxy-service-vwnbl:portname1/proxy/: foo (200; 5.425856ms) Jan 31 00:06:29.870: INFO: (2) /api/v1/namespaces/proxy-3862/pods/http:proxy-service-vwnbl-sxr8p:162/proxy/: bar (200; 5.452282ms) Jan 31 00:06:29.870: INFO: (2) /api/v1/namespaces/proxy-3862/pods/http:proxy-service-vwnbl-sxr8p:1080/proxy/: ... (200; 5.504452ms) Jan 31 00:06:29.870: INFO: (2) /api/v1/namespaces/proxy-3862/services/https:proxy-service-vwnbl:tlsportname1/proxy/: tls baz (200; 5.472335ms) Jan 31 00:06:29.870: INFO: (2) /api/v1/namespaces/proxy-3862/pods/https:proxy-service-vwnbl-sxr8p:443/proxy/: test (200; 4.773259ms) Jan 31 00:06:29.876: INFO: (3) /api/v1/namespaces/proxy-3862/services/https:proxy-service-vwnbl:tlsportname1/proxy/: tls baz (200; 4.789258ms) Jan 31 00:06:29.876: INFO: (3) /api/v1/namespaces/proxy-3862/services/http:proxy-service-vwnbl:portname2/proxy/: bar (200; 4.812962ms) Jan 31 00:06:29.876: INFO: (3) /api/v1/namespaces/proxy-3862/pods/http:proxy-service-vwnbl-sxr8p:1080/proxy/: ... 
(200; 4.796348ms) Jan 31 00:06:29.876: INFO: (3) /api/v1/namespaces/proxy-3862/services/proxy-service-vwnbl:portname2/proxy/: bar (200; 4.807543ms) Jan 31 00:06:29.876: INFO: (3) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p:162/proxy/: bar (200; 4.823649ms) Jan 31 00:06:29.876: INFO: (3) /api/v1/namespaces/proxy-3862/services/proxy-service-vwnbl:portname1/proxy/: foo (200; 4.786603ms) Jan 31 00:06:29.876: INFO: (3) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p:1080/proxy/: test<... (200; 4.973624ms) Jan 31 00:06:29.876: INFO: (3) /api/v1/namespaces/proxy-3862/pods/https:proxy-service-vwnbl-sxr8p:443/proxy/: ... (200; 4.700996ms) Jan 31 00:06:29.881: INFO: (4) /api/v1/namespaces/proxy-3862/services/http:proxy-service-vwnbl:portname2/proxy/: bar (200; 5.023535ms) Jan 31 00:06:29.881: INFO: (4) /api/v1/namespaces/proxy-3862/pods/http:proxy-service-vwnbl-sxr8p:160/proxy/: foo (200; 5.054186ms) Jan 31 00:06:29.881: INFO: (4) /api/v1/namespaces/proxy-3862/pods/https:proxy-service-vwnbl-sxr8p:462/proxy/: tls qux (200; 5.118175ms) Jan 31 00:06:29.881: INFO: (4) /api/v1/namespaces/proxy-3862/services/http:proxy-service-vwnbl:portname1/proxy/: foo (200; 5.14378ms) Jan 31 00:06:29.881: INFO: (4) /api/v1/namespaces/proxy-3862/pods/https:proxy-service-vwnbl-sxr8p:443/proxy/: test (200; 5.503939ms) Jan 31 00:06:29.882: INFO: (4) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p:1080/proxy/: test<... (200; 5.555779ms) Jan 31 00:06:29.882: INFO: (4) /api/v1/namespaces/proxy-3862/services/https:proxy-service-vwnbl:tlsportname1/proxy/: tls baz (200; 5.616888ms) Jan 31 00:06:29.882: INFO: (4) /api/v1/namespaces/proxy-3862/services/proxy-service-vwnbl:portname2/proxy/: bar (200; 5.643712ms) Jan 31 00:06:29.882: INFO: (4) /api/v1/namespaces/proxy-3862/services/https:proxy-service-vwnbl:tlsportname2/proxy/: tls qux (200; 5.706293ms) Jan 31 00:06:29.889: INFO: (5) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p:162/proxy/: bar (200; 6.939899ms) Jan 31 00:06:29.889: INFO: (5) /api/v1/namespaces/proxy-3862/pods/http:proxy-service-vwnbl-sxr8p:1080/proxy/: ... (200; 7.185264ms) Jan 31 00:06:29.889: INFO: (5) /api/v1/namespaces/proxy-3862/services/http:proxy-service-vwnbl:portname2/proxy/: bar (200; 7.205749ms) Jan 31 00:06:29.889: INFO: (5) /api/v1/namespaces/proxy-3862/pods/https:proxy-service-vwnbl-sxr8p:462/proxy/: tls qux (200; 7.494439ms) Jan 31 00:06:29.889: INFO: (5) /api/v1/namespaces/proxy-3862/services/https:proxy-service-vwnbl:tlsportname2/proxy/: tls qux (200; 7.497066ms) Jan 31 00:06:29.889: INFO: (5) /api/v1/namespaces/proxy-3862/pods/https:proxy-service-vwnbl-sxr8p:460/proxy/: tls baz (200; 7.523724ms) Jan 31 00:06:29.889: INFO: (5) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p/proxy/: test (200; 7.592176ms) Jan 31 00:06:29.889: INFO: (5) /api/v1/namespaces/proxy-3862/pods/https:proxy-service-vwnbl-sxr8p:443/proxy/: test<... 
(200; 7.627332ms) Jan 31 00:06:29.896: INFO: (5) /api/v1/namespaces/proxy-3862/services/http:proxy-service-vwnbl:portname1/proxy/: foo (200; 13.926246ms) Jan 31 00:06:29.896: INFO: (5) /api/v1/namespaces/proxy-3862/services/proxy-service-vwnbl:portname2/proxy/: bar (200; 14.009149ms) Jan 31 00:06:29.896: INFO: (5) /api/v1/namespaces/proxy-3862/services/https:proxy-service-vwnbl:tlsportname1/proxy/: tls baz (200; 14.007069ms) Jan 31 00:06:29.896: INFO: (5) /api/v1/namespaces/proxy-3862/services/proxy-service-vwnbl:portname1/proxy/: foo (200; 14.040959ms) Jan 31 00:06:29.899: INFO: (6) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p:162/proxy/: bar (200; 3.027332ms) Jan 31 00:06:29.900: INFO: (6) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p:1080/proxy/: test<... (200; 3.565152ms) Jan 31 00:06:29.900: INFO: (6) /api/v1/namespaces/proxy-3862/pods/http:proxy-service-vwnbl-sxr8p:160/proxy/: foo (200; 3.583759ms) Jan 31 00:06:29.900: INFO: (6) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p:160/proxy/: foo (200; 3.523464ms) Jan 31 00:06:29.900: INFO: (6) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p/proxy/: test (200; 4.122363ms) Jan 31 00:06:29.900: INFO: (6) /api/v1/namespaces/proxy-3862/pods/http:proxy-service-vwnbl-sxr8p:1080/proxy/: ... (200; 4.105736ms) Jan 31 00:06:29.900: INFO: (6) /api/v1/namespaces/proxy-3862/services/http:proxy-service-vwnbl:portname2/proxy/: bar (200; 4.198148ms) Jan 31 00:06:29.901: INFO: (6) /api/v1/namespaces/proxy-3862/pods/https:proxy-service-vwnbl-sxr8p:443/proxy/: test (200; 4.893775ms) Jan 31 00:06:29.906: INFO: (7) /api/v1/namespaces/proxy-3862/services/http:proxy-service-vwnbl:portname1/proxy/: foo (200; 4.882508ms) Jan 31 00:06:29.906: INFO: (7) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p:1080/proxy/: test<... (200; 4.995051ms) Jan 31 00:06:29.907: INFO: (7) /api/v1/namespaces/proxy-3862/pods/https:proxy-service-vwnbl-sxr8p:443/proxy/: ... (200; 6.82203ms) Jan 31 00:06:29.908: INFO: (7) /api/v1/namespaces/proxy-3862/pods/http:proxy-service-vwnbl-sxr8p:162/proxy/: bar (200; 6.90575ms) Jan 31 00:06:29.921: INFO: (8) /api/v1/namespaces/proxy-3862/pods/https:proxy-service-vwnbl-sxr8p:462/proxy/: tls qux (200; 12.740499ms) Jan 31 00:06:29.921: INFO: (8) /api/v1/namespaces/proxy-3862/services/https:proxy-service-vwnbl:tlsportname1/proxy/: tls baz (200; 12.715109ms) Jan 31 00:06:29.922: INFO: (8) /api/v1/namespaces/proxy-3862/services/proxy-service-vwnbl:portname2/proxy/: bar (200; 12.766262ms) Jan 31 00:06:29.922: INFO: (8) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p:1080/proxy/: test<... (200; 12.817677ms) Jan 31 00:06:29.922: INFO: (8) /api/v1/namespaces/proxy-3862/pods/https:proxy-service-vwnbl-sxr8p:443/proxy/: test (200; 13.097883ms) Jan 31 00:06:29.922: INFO: (8) /api/v1/namespaces/proxy-3862/pods/http:proxy-service-vwnbl-sxr8p:1080/proxy/: ... (200; 13.158447ms) Jan 31 00:06:29.922: INFO: (8) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p:162/proxy/: bar (200; 13.237603ms) Jan 31 00:06:29.926: INFO: (9) /api/v1/namespaces/proxy-3862/pods/https:proxy-service-vwnbl-sxr8p:443/proxy/: ... (200; 4.85609ms) Jan 31 00:06:29.927: INFO: (9) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p:162/proxy/: bar (200; 4.815355ms) Jan 31 00:06:29.927: INFO: (9) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p:1080/proxy/: test<... 
(200; 4.82359ms) Jan 31 00:06:29.927: INFO: (9) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p/proxy/: test (200; 4.858085ms) Jan 31 00:06:29.927: INFO: (9) /api/v1/namespaces/proxy-3862/pods/https:proxy-service-vwnbl-sxr8p:460/proxy/: tls baz (200; 4.845998ms) Jan 31 00:06:29.927: INFO: (9) /api/v1/namespaces/proxy-3862/pods/http:proxy-service-vwnbl-sxr8p:162/proxy/: bar (200; 4.950858ms) Jan 31 00:06:29.927: INFO: (9) /api/v1/namespaces/proxy-3862/services/https:proxy-service-vwnbl:tlsportname2/proxy/: tls qux (200; 4.868056ms) Jan 31 00:06:29.927: INFO: (9) /api/v1/namespaces/proxy-3862/services/http:proxy-service-vwnbl:portname1/proxy/: foo (200; 4.919251ms) Jan 31 00:06:29.931: INFO: (10) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p:1080/proxy/: test<... (200; 4.185896ms) Jan 31 00:06:29.931: INFO: (10) /api/v1/namespaces/proxy-3862/services/https:proxy-service-vwnbl:tlsportname1/proxy/: tls baz (200; 4.454951ms) Jan 31 00:06:29.932: INFO: (10) /api/v1/namespaces/proxy-3862/services/https:proxy-service-vwnbl:tlsportname2/proxy/: tls qux (200; 4.496806ms) Jan 31 00:06:29.932: INFO: (10) /api/v1/namespaces/proxy-3862/pods/https:proxy-service-vwnbl-sxr8p:460/proxy/: tls baz (200; 4.843054ms) Jan 31 00:06:29.932: INFO: (10) /api/v1/namespaces/proxy-3862/services/proxy-service-vwnbl:portname1/proxy/: foo (200; 4.926188ms) Jan 31 00:06:29.932: INFO: (10) /api/v1/namespaces/proxy-3862/services/http:proxy-service-vwnbl:portname1/proxy/: foo (200; 4.914995ms) Jan 31 00:06:29.932: INFO: (10) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p:160/proxy/: foo (200; 4.880159ms) Jan 31 00:06:29.932: INFO: (10) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p/proxy/: test (200; 4.917696ms) Jan 31 00:06:29.932: INFO: (10) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p:162/proxy/: bar (200; 4.949733ms) Jan 31 00:06:29.932: INFO: (10) /api/v1/namespaces/proxy-3862/services/http:proxy-service-vwnbl:portname2/proxy/: bar (200; 4.953143ms) Jan 31 00:06:29.932: INFO: (10) /api/v1/namespaces/proxy-3862/pods/http:proxy-service-vwnbl-sxr8p:1080/proxy/: ... (200; 5.142588ms) Jan 31 00:06:29.932: INFO: (10) /api/v1/namespaces/proxy-3862/services/proxy-service-vwnbl:portname2/proxy/: bar (200; 5.266718ms) Jan 31 00:06:29.932: INFO: (10) /api/v1/namespaces/proxy-3862/pods/http:proxy-service-vwnbl-sxr8p:160/proxy/: foo (200; 5.293993ms) Jan 31 00:06:29.932: INFO: (10) /api/v1/namespaces/proxy-3862/pods/https:proxy-service-vwnbl-sxr8p:443/proxy/: test (200; 2.626718ms) Jan 31 00:06:29.936: INFO: (11) /api/v1/namespaces/proxy-3862/pods/http:proxy-service-vwnbl-sxr8p:1080/proxy/: ... (200; 3.094055ms) Jan 31 00:06:29.936: INFO: (11) /api/v1/namespaces/proxy-3862/pods/https:proxy-service-vwnbl-sxr8p:443/proxy/: test<... 
(200; 5.437144ms) Jan 31 00:06:29.938: INFO: (11) /api/v1/namespaces/proxy-3862/services/https:proxy-service-vwnbl:tlsportname2/proxy/: tls qux (200; 5.582701ms) Jan 31 00:06:29.938: INFO: (11) /api/v1/namespaces/proxy-3862/services/proxy-service-vwnbl:portname2/proxy/: bar (200; 5.509948ms) Jan 31 00:06:29.938: INFO: (11) /api/v1/namespaces/proxy-3862/services/https:proxy-service-vwnbl:tlsportname1/proxy/: tls baz (200; 5.539006ms) Jan 31 00:06:29.941: INFO: (12) /api/v1/namespaces/proxy-3862/pods/http:proxy-service-vwnbl-sxr8p:160/proxy/: foo (200; 2.45736ms) Jan 31 00:06:29.941: INFO: (12) /api/v1/namespaces/proxy-3862/pods/https:proxy-service-vwnbl-sxr8p:462/proxy/: tls qux (200; 2.777659ms) Jan 31 00:06:29.941: INFO: (12) /api/v1/namespaces/proxy-3862/pods/https:proxy-service-vwnbl-sxr8p:460/proxy/: tls baz (200; 3.208619ms) Jan 31 00:06:29.941: INFO: (12) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p/proxy/: test (200; 3.163237ms) Jan 31 00:06:29.941: INFO: (12) /api/v1/namespaces/proxy-3862/pods/http:proxy-service-vwnbl-sxr8p:162/proxy/: bar (200; 3.354732ms) Jan 31 00:06:29.943: INFO: (12) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p:162/proxy/: bar (200; 4.513819ms) Jan 31 00:06:29.943: INFO: (12) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p:160/proxy/: foo (200; 4.691282ms) Jan 31 00:06:29.943: INFO: (12) /api/v1/namespaces/proxy-3862/services/http:proxy-service-vwnbl:portname1/proxy/: foo (200; 4.789278ms) Jan 31 00:06:29.943: INFO: (12) /api/v1/namespaces/proxy-3862/services/https:proxy-service-vwnbl:tlsportname2/proxy/: tls qux (200; 4.842389ms) Jan 31 00:06:29.943: INFO: (12) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p:1080/proxy/: test<... (200; 4.896447ms) Jan 31 00:06:29.943: INFO: (12) /api/v1/namespaces/proxy-3862/pods/http:proxy-service-vwnbl-sxr8p:1080/proxy/: ... (200; 5.067411ms) Jan 31 00:06:29.943: INFO: (12) /api/v1/namespaces/proxy-3862/services/http:proxy-service-vwnbl:portname2/proxy/: bar (200; 5.044123ms) Jan 31 00:06:29.943: INFO: (12) /api/v1/namespaces/proxy-3862/services/https:proxy-service-vwnbl:tlsportname1/proxy/: tls baz (200; 5.10078ms) Jan 31 00:06:29.943: INFO: (12) /api/v1/namespaces/proxy-3862/services/proxy-service-vwnbl:portname1/proxy/: foo (200; 5.125249ms) Jan 31 00:06:29.943: INFO: (12) /api/v1/namespaces/proxy-3862/services/proxy-service-vwnbl:portname2/proxy/: bar (200; 5.154338ms) Jan 31 00:06:29.943: INFO: (12) /api/v1/namespaces/proxy-3862/pods/https:proxy-service-vwnbl-sxr8p:443/proxy/: test (200; 26.617965ms) Jan 31 00:06:29.970: INFO: (13) /api/v1/namespaces/proxy-3862/pods/http:proxy-service-vwnbl-sxr8p:1080/proxy/: ... 
(200; 26.644557ms) Jan 31 00:06:29.970: INFO: (13) /api/v1/namespaces/proxy-3862/pods/http:proxy-service-vwnbl-sxr8p:162/proxy/: bar (200; 26.76571ms) Jan 31 00:06:29.971: INFO: (13) /api/v1/namespaces/proxy-3862/pods/https:proxy-service-vwnbl-sxr8p:462/proxy/: tls qux (200; 27.358375ms) Jan 31 00:06:29.972: INFO: (13) /api/v1/namespaces/proxy-3862/services/proxy-service-vwnbl:portname2/proxy/: bar (200; 28.405391ms) Jan 31 00:06:29.972: INFO: (13) /api/v1/namespaces/proxy-3862/services/http:proxy-service-vwnbl:portname2/proxy/: bar (200; 28.900901ms) Jan 31 00:06:29.972: INFO: (13) /api/v1/namespaces/proxy-3862/services/proxy-service-vwnbl:portname1/proxy/: foo (200; 29.112931ms) Jan 31 00:06:29.973: INFO: (13) /api/v1/namespaces/proxy-3862/services/https:proxy-service-vwnbl:tlsportname1/proxy/: tls baz (200; 29.090254ms) Jan 31 00:06:29.973: INFO: (13) /api/v1/namespaces/proxy-3862/pods/https:proxy-service-vwnbl-sxr8p:460/proxy/: tls baz (200; 29.177047ms) Jan 31 00:06:29.972: INFO: (13) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p:1080/proxy/: test<... (200; 29.130347ms) Jan 31 00:06:29.973: INFO: (13) /api/v1/namespaces/proxy-3862/services/http:proxy-service-vwnbl:portname1/proxy/: foo (200; 29.18276ms) Jan 31 00:06:29.973: INFO: (13) /api/v1/namespaces/proxy-3862/services/https:proxy-service-vwnbl:tlsportname2/proxy/: tls qux (200; 29.066844ms) Jan 31 00:06:29.978: INFO: (14) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p/proxy/: test (200; 5.039211ms) Jan 31 00:06:29.978: INFO: (14) /api/v1/namespaces/proxy-3862/services/proxy-service-vwnbl:portname2/proxy/: bar (200; 5.088794ms) Jan 31 00:06:29.978: INFO: (14) /api/v1/namespaces/proxy-3862/services/proxy-service-vwnbl:portname1/proxy/: foo (200; 5.507041ms) Jan 31 00:06:29.978: INFO: (14) /api/v1/namespaces/proxy-3862/pods/https:proxy-service-vwnbl-sxr8p:443/proxy/: test<... (200; 6.214276ms) Jan 31 00:06:29.979: INFO: (14) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p:160/proxy/: foo (200; 6.290215ms) Jan 31 00:06:29.979: INFO: (14) /api/v1/namespaces/proxy-3862/pods/http:proxy-service-vwnbl-sxr8p:1080/proxy/: ... (200; 6.420464ms) Jan 31 00:06:29.979: INFO: (14) /api/v1/namespaces/proxy-3862/pods/https:proxy-service-vwnbl-sxr8p:460/proxy/: tls baz (200; 6.664332ms) Jan 31 00:06:29.979: INFO: (14) /api/v1/namespaces/proxy-3862/services/http:proxy-service-vwnbl:portname1/proxy/: foo (200; 6.730831ms) Jan 31 00:06:29.979: INFO: (14) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p:162/proxy/: bar (200; 6.642048ms) Jan 31 00:06:29.979: INFO: (14) /api/v1/namespaces/proxy-3862/services/https:proxy-service-vwnbl:tlsportname2/proxy/: tls qux (200; 6.638752ms) Jan 31 00:06:29.979: INFO: (14) /api/v1/namespaces/proxy-3862/pods/https:proxy-service-vwnbl-sxr8p:462/proxy/: tls qux (200; 6.800209ms) Jan 31 00:06:29.983: INFO: (15) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p:160/proxy/: foo (200; 3.353315ms) Jan 31 00:06:29.983: INFO: (15) /api/v1/namespaces/proxy-3862/pods/http:proxy-service-vwnbl-sxr8p:160/proxy/: foo (200; 3.377399ms) Jan 31 00:06:29.983: INFO: (15) /api/v1/namespaces/proxy-3862/pods/https:proxy-service-vwnbl-sxr8p:443/proxy/: ... (200; 3.399964ms) Jan 31 00:06:29.983: INFO: (15) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p:1080/proxy/: test<... 
(200; 3.341646ms) Jan 31 00:06:29.983: INFO: (15) /api/v1/namespaces/proxy-3862/pods/https:proxy-service-vwnbl-sxr8p:460/proxy/: tls baz (200; 3.448641ms) Jan 31 00:06:29.983: INFO: (15) /api/v1/namespaces/proxy-3862/pods/http:proxy-service-vwnbl-sxr8p:162/proxy/: bar (200; 3.712681ms) Jan 31 00:06:29.983: INFO: (15) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p:162/proxy/: bar (200; 3.723945ms) Jan 31 00:06:29.985: INFO: (15) /api/v1/namespaces/proxy-3862/services/proxy-service-vwnbl:portname2/proxy/: bar (200; 5.218322ms) Jan 31 00:06:29.985: INFO: (15) /api/v1/namespaces/proxy-3862/services/https:proxy-service-vwnbl:tlsportname2/proxy/: tls qux (200; 5.199377ms) Jan 31 00:06:29.985: INFO: (15) /api/v1/namespaces/proxy-3862/services/proxy-service-vwnbl:portname1/proxy/: foo (200; 5.1661ms) Jan 31 00:06:29.985: INFO: (15) /api/v1/namespaces/proxy-3862/services/http:proxy-service-vwnbl:portname2/proxy/: bar (200; 5.15949ms) Jan 31 00:06:29.985: INFO: (15) /api/v1/namespaces/proxy-3862/services/http:proxy-service-vwnbl:portname1/proxy/: foo (200; 5.197176ms) Jan 31 00:06:29.985: INFO: (15) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p/proxy/: test (200; 5.465085ms) Jan 31 00:06:29.985: INFO: (15) /api/v1/namespaces/proxy-3862/pods/https:proxy-service-vwnbl-sxr8p:462/proxy/: tls qux (200; 5.534346ms) Jan 31 00:06:29.985: INFO: (15) /api/v1/namespaces/proxy-3862/services/https:proxy-service-vwnbl:tlsportname1/proxy/: tls baz (200; 5.525576ms) Jan 31 00:06:29.989: INFO: (16) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p:162/proxy/: bar (200; 3.2992ms) Jan 31 00:06:29.989: INFO: (16) /api/v1/namespaces/proxy-3862/pods/http:proxy-service-vwnbl-sxr8p:162/proxy/: bar (200; 3.670008ms) Jan 31 00:06:29.989: INFO: (16) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p/proxy/: test (200; 3.645136ms) Jan 31 00:06:29.989: INFO: (16) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p:1080/proxy/: test<... (200; 3.757594ms) Jan 31 00:06:29.989: INFO: (16) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p:160/proxy/: foo (200; 4.267003ms) Jan 31 00:06:29.989: INFO: (16) /api/v1/namespaces/proxy-3862/pods/http:proxy-service-vwnbl-sxr8p:1080/proxy/: ... (200; 4.203128ms) Jan 31 00:06:29.990: INFO: (16) /api/v1/namespaces/proxy-3862/pods/https:proxy-service-vwnbl-sxr8p:443/proxy/: test (200; 2.896131ms) Jan 31 00:06:29.996: INFO: (17) /api/v1/namespaces/proxy-3862/services/http:proxy-service-vwnbl:portname2/proxy/: bar (200; 4.485991ms) Jan 31 00:06:29.997: INFO: (17) /api/v1/namespaces/proxy-3862/services/https:proxy-service-vwnbl:tlsportname2/proxy/: tls qux (200; 4.833922ms) Jan 31 00:06:29.997: INFO: (17) /api/v1/namespaces/proxy-3862/services/http:proxy-service-vwnbl:portname1/proxy/: foo (200; 4.852575ms) Jan 31 00:06:29.997: INFO: (17) /api/v1/namespaces/proxy-3862/pods/http:proxy-service-vwnbl-sxr8p:1080/proxy/: ... 
(200; 4.924253ms) Jan 31 00:06:29.997: INFO: (17) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p:160/proxy/: foo (200; 4.946212ms) Jan 31 00:06:29.997: INFO: (17) /api/v1/namespaces/proxy-3862/services/proxy-service-vwnbl:portname1/proxy/: foo (200; 4.968726ms) Jan 31 00:06:29.997: INFO: (17) /api/v1/namespaces/proxy-3862/services/https:proxy-service-vwnbl:tlsportname1/proxy/: tls baz (200; 4.922286ms) Jan 31 00:06:29.997: INFO: (17) /api/v1/namespaces/proxy-3862/pods/http:proxy-service-vwnbl-sxr8p:160/proxy/: foo (200; 4.956833ms) Jan 31 00:06:29.997: INFO: (17) /api/v1/namespaces/proxy-3862/pods/http:proxy-service-vwnbl-sxr8p:162/proxy/: bar (200; 4.936724ms) Jan 31 00:06:29.997: INFO: (17) /api/v1/namespaces/proxy-3862/pods/https:proxy-service-vwnbl-sxr8p:462/proxy/: tls qux (200; 5.031465ms) Jan 31 00:06:29.997: INFO: (17) /api/v1/namespaces/proxy-3862/pods/https:proxy-service-vwnbl-sxr8p:443/proxy/: test<... (200; 5.33165ms) Jan 31 00:06:30.003: INFO: (18) /api/v1/namespaces/proxy-3862/pods/http:proxy-service-vwnbl-sxr8p:162/proxy/: bar (200; 5.615831ms) Jan 31 00:06:30.005: INFO: (18) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p:162/proxy/: bar (200; 7.452176ms) Jan 31 00:06:30.005: INFO: (18) /api/v1/namespaces/proxy-3862/pods/https:proxy-service-vwnbl-sxr8p:462/proxy/: tls qux (200; 7.770357ms) Jan 31 00:06:30.005: INFO: (18) /api/v1/namespaces/proxy-3862/pods/https:proxy-service-vwnbl-sxr8p:443/proxy/: test<... (200; 8.523777ms) Jan 31 00:06:30.006: INFO: (18) /api/v1/namespaces/proxy-3862/services/proxy-service-vwnbl:portname1/proxy/: foo (200; 8.613851ms) Jan 31 00:06:30.006: INFO: (18) /api/v1/namespaces/proxy-3862/services/https:proxy-service-vwnbl:tlsportname1/proxy/: tls baz (200; 8.559543ms) Jan 31 00:06:30.006: INFO: (18) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p:160/proxy/: foo (200; 8.572899ms) Jan 31 00:06:30.006: INFO: (18) /api/v1/namespaces/proxy-3862/pods/http:proxy-service-vwnbl-sxr8p:1080/proxy/: ... (200; 8.574636ms) Jan 31 00:06:30.006: INFO: (18) /api/v1/namespaces/proxy-3862/services/proxy-service-vwnbl:portname2/proxy/: bar (200; 8.637298ms) Jan 31 00:06:30.006: INFO: (18) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p/proxy/: test (200; 8.591887ms) Jan 31 00:06:30.006: INFO: (18) /api/v1/namespaces/proxy-3862/services/http:proxy-service-vwnbl:portname2/proxy/: bar (200; 8.593294ms) Jan 31 00:06:30.006: INFO: (18) /api/v1/namespaces/proxy-3862/pods/http:proxy-service-vwnbl-sxr8p:160/proxy/: foo (200; 9.040137ms) Jan 31 00:06:30.006: INFO: (18) /api/v1/namespaces/proxy-3862/pods/https:proxy-service-vwnbl-sxr8p:460/proxy/: tls baz (200; 9.077494ms) Jan 31 00:06:30.010: INFO: (19) /api/v1/namespaces/proxy-3862/services/proxy-service-vwnbl:portname2/proxy/: bar (200; 3.555344ms) Jan 31 00:06:30.010: INFO: (19) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p/proxy/: test (200; 3.936073ms) Jan 31 00:06:30.010: INFO: (19) /api/v1/namespaces/proxy-3862/services/http:proxy-service-vwnbl:portname1/proxy/: foo (200; 3.981271ms) Jan 31 00:06:30.010: INFO: (19) /api/v1/namespaces/proxy-3862/services/http:proxy-service-vwnbl:portname2/proxy/: bar (200; 3.891384ms) Jan 31 00:06:30.010: INFO: (19) /api/v1/namespaces/proxy-3862/pods/https:proxy-service-vwnbl-sxr8p:443/proxy/: test<... 
(200; 3.957695ms) Jan 31 00:06:30.011: INFO: (19) /api/v1/namespaces/proxy-3862/pods/http:proxy-service-vwnbl-sxr8p:162/proxy/: bar (200; 4.352651ms) Jan 31 00:06:30.011: INFO: (19) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p:162/proxy/: bar (200; 4.330613ms) Jan 31 00:06:30.011: INFO: (19) /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p:160/proxy/: foo (200; 4.321923ms) Jan 31 00:06:30.011: INFO: (19) /api/v1/namespaces/proxy-3862/services/https:proxy-service-vwnbl:tlsportname2/proxy/: tls qux (200; 4.335291ms) Jan 31 00:06:30.011: INFO: (19) /api/v1/namespaces/proxy-3862/pods/https:proxy-service-vwnbl-sxr8p:462/proxy/: tls qux (200; 4.365741ms) Jan 31 00:06:30.011: INFO: (19) /api/v1/namespaces/proxy-3862/services/https:proxy-service-vwnbl:tlsportname1/proxy/: tls baz (200; 4.397332ms) Jan 31 00:06:30.011: INFO: (19) /api/v1/namespaces/proxy-3862/pods/https:proxy-service-vwnbl-sxr8p:460/proxy/: tls baz (200; 4.37216ms) Jan 31 00:06:30.011: INFO: (19) /api/v1/namespaces/proxy-3862/pods/http:proxy-service-vwnbl-sxr8p:1080/proxy/: ... (200; 4.475067ms) Jan 31 00:06:30.011: INFO: (19) /api/v1/namespaces/proxy-3862/pods/http:proxy-service-vwnbl-sxr8p:160/proxy/: foo (200; 4.485491ms) Jan 31 00:06:30.011: INFO: (19) /api/v1/namespaces/proxy-3862/services/proxy-service-vwnbl:portname1/proxy/: foo (200; 4.465283ms) STEP: deleting ReplicationController proxy-service-vwnbl in namespace proxy-3862, will wait for the garbage collector to delete the pods Jan 31 00:06:30.070: INFO: Deleting ReplicationController proxy-service-vwnbl took: 7.673374ms Jan 31 00:06:30.671: INFO: Terminating ReplicationController proxy-service-vwnbl pods took: 600.244686ms [AfterEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:07:21.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-3862" for this suite. 
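The checks above all go through the apiserver's proxy subresource, which forwards to a pod port (optionally scheme-prefixed, e.g. https:) or to a named service port. A minimal way to reproduce one of these requests by hand, reusing the pod and service names from this run (proxy-service-vwnbl-sxr8p and proxy-service-vwnbl in namespace proxy-3862); the expected bodies ("foo", "bar", ...) match the 200 responses logged above:

  # Proxy to a pod's port 160 through the apiserver; the test backend answers "foo"
  kubectl get --raw /api/v1/namespaces/proxy-3862/pods/proxy-service-vwnbl-sxr8p:160/proxy/
  # Proxy to a service by port name; portname1 routes to the same backend
  kubectl get --raw /api/v1/namespaces/proxy-3862/services/proxy-service-vwnbl:portname1/proxy/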
• [SLOW TEST:58.592 seconds] [sig-network] Proxy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59 should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":311,"completed":22,"skipped":500,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:07:21.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test downward API volume plugin Jan 31 00:07:21.253: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ce220386-22e6-48c3-8d27-107ec0b51c9e" in namespace "projected-2521" to be "Succeeded or Failed" Jan 31 00:07:21.270: INFO: Pod "downwardapi-volume-ce220386-22e6-48c3-8d27-107ec0b51c9e": Phase="Pending", Reason="", readiness=false. Elapsed: 17.460724ms Jan 31 00:07:23.275: INFO: Pod "downwardapi-volume-ce220386-22e6-48c3-8d27-107ec0b51c9e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022353077s Jan 31 00:07:25.279: INFO: Pod "downwardapi-volume-ce220386-22e6-48c3-8d27-107ec0b51c9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02612759s STEP: Saw pod success Jan 31 00:07:25.279: INFO: Pod "downwardapi-volume-ce220386-22e6-48c3-8d27-107ec0b51c9e" satisfied condition "Succeeded or Failed" Jan 31 00:07:25.282: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-ce220386-22e6-48c3-8d27-107ec0b51c9e container client-container: STEP: delete the pod Jan 31 00:07:25.331: INFO: Waiting for pod downwardapi-volume-ce220386-22e6-48c3-8d27-107ec0b51c9e to disappear Jan 31 00:07:25.370: INFO: Pod downwardapi-volume-ce220386-22e6-48c3-8d27-107ec0b51c9e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:07:25.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2521" for this suite. 
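The generated pod above exposes the container's cpu limit through a projected downward API volume; with no limit set, the kubelet reports the node's allocatable CPU instead. A minimal sketch of such a pod (the pod name and image here are illustrative, not the generated ones from this run):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-volume-demo        # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox                     # any image with a shell works
      command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
      # no resources.limits.cpu here, so the volume reports node allocatable
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: cpu_limit
              resourceFieldRef:
                containerName: client-container
                resource: limits.cpu
  EOF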
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":311,"completed":23,"skipped":542,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:07:25.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 31 00:07:26.037: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 31 00:07:28.227: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747648446, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747648446, loc:(*time.Location)(0x79bd420)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747648446, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747648446, loc:(*time.Location)(0x79bd420)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 31 00:07:31.287: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Jan 31 00:07:31.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2846-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:07:32.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7810" for this suite. STEP: Destroying namespace "webhook-7810-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:7.139 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":311,"completed":24,"skipped":560,"failed":0} SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:07:32.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Performing setup for networking test in namespace pod-network-test-5809 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 31 00:07:32.623: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jan 31 00:07:32.962: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 31 00:07:34.975: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 31 00:07:36.966: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 31 00:07:38.966: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 31 00:07:40.970: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 31 00:07:42.993: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 31 00:07:44.969: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 31 00:07:46.966: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 31 00:07:48.966: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 31 00:07:50.966: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 31 00:07:52.967: INFO: The status of Pod netserver-0 is Running (Ready = true) Jan 31 00:07:52.979: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jan 31 00:07:57.025: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Jan 31 00:07:57.025: INFO: Breadth first check of 10.244.2.12 on host 172.18.0.14... 
Jan 31 00:07:57.027: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.13:9080/dial?request=hostname&protocol=udp&host=10.244.2.12&port=8081&tries=1'] Namespace:pod-network-test-5809 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 31 00:07:57.027: INFO: >>> kubeConfig: /root/.kube/config I0131 00:07:57.054898 7 log.go:181] (0xc003a78420) (0xc004700a00) Create stream I0131 00:07:57.054930 7 log.go:181] (0xc003a78420) (0xc004700a00) Stream added, broadcasting: 1 I0131 00:07:57.056532 7 log.go:181] (0xc003a78420) Reply frame received for 1 I0131 00:07:57.056581 7 log.go:181] (0xc003a78420) (0xc004700aa0) Create stream I0131 00:07:57.056603 7 log.go:181] (0xc003a78420) (0xc004700aa0) Stream added, broadcasting: 3 I0131 00:07:57.057580 7 log.go:181] (0xc003a78420) Reply frame received for 3 I0131 00:07:57.057612 7 log.go:181] (0xc003a78420) (0xc004040820) Create stream I0131 00:07:57.057624 7 log.go:181] (0xc003a78420) (0xc004040820) Stream added, broadcasting: 5 I0131 00:07:57.058478 7 log.go:181] (0xc003a78420) Reply frame received for 5 I0131 00:07:57.158918 7 log.go:181] (0xc003a78420) Data frame received for 3 I0131 00:07:57.158981 7 log.go:181] (0xc004700aa0) (3) Data frame handling I0131 00:07:57.159001 7 log.go:181] (0xc004700aa0) (3) Data frame sent I0131 00:07:57.159038 7 log.go:181] (0xc003a78420) Data frame received for 3 I0131 00:07:57.159059 7 log.go:181] (0xc004700aa0) (3) Data frame handling I0131 00:07:57.159633 7 log.go:181] (0xc003a78420) Data frame received for 5 I0131 00:07:57.159657 7 log.go:181] (0xc004040820) (5) Data frame handling I0131 00:07:57.160490 7 log.go:181] (0xc003a78420) Data frame received for 1 I0131 00:07:57.160511 7 log.go:181] (0xc004700a00) (1) Data frame handling I0131 00:07:57.160708 7 log.go:181] (0xc004700a00) (1) Data frame sent I0131 00:07:57.160757 7 log.go:181] (0xc003a78420) (0xc004700a00) Stream removed, broadcasting: 1 I0131 00:07:57.160803 7 log.go:181] (0xc003a78420) Go away received I0131 00:07:57.161029 7 log.go:181] (0xc003a78420) (0xc004700a00) Stream removed, broadcasting: 1 I0131 00:07:57.161085 7 log.go:181] (0xc003a78420) (0xc004700aa0) Stream removed, broadcasting: 3 I0131 00:07:57.161140 7 log.go:181] (0xc003a78420) (0xc004040820) Stream removed, broadcasting: 5 Jan 31 00:07:57.161: INFO: Waiting for responses: map[] Jan 31 00:07:57.161: INFO: reached 10.244.2.12 after 0/1 tries Jan 31 00:07:57.161: INFO: Breadth first check of 10.244.1.124 on host 172.18.0.16... 
Jan 31 00:07:57.165: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.13:9080/dial?request=hostname&protocol=udp&host=10.244.1.124&port=8081&tries=1'] Namespace:pod-network-test-5809 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 31 00:07:57.165: INFO: >>> kubeConfig: /root/.kube/config I0131 00:07:57.189945 7 log.go:181] (0xc003a78b00) (0xc004700e60) Create stream I0131 00:07:57.189980 7 log.go:181] (0xc003a78b00) (0xc004700e60) Stream added, broadcasting: 1 I0131 00:07:57.191419 7 log.go:181] (0xc003a78b00) Reply frame received for 1 I0131 00:07:57.191448 7 log.go:181] (0xc003a78b00) (0xc004a56000) Create stream I0131 00:07:57.191460 7 log.go:181] (0xc003a78b00) (0xc004a56000) Stream added, broadcasting: 3 I0131 00:07:57.192504 7 log.go:181] (0xc003a78b00) Reply frame received for 3 I0131 00:07:57.192546 7 log.go:181] (0xc003a78b00) (0xc0014ec000) Create stream I0131 00:07:57.192556 7 log.go:181] (0xc003a78b00) (0xc0014ec000) Stream added, broadcasting: 5 I0131 00:07:57.193468 7 log.go:181] (0xc003a78b00) Reply frame received for 5 I0131 00:07:57.254896 7 log.go:181] (0xc003a78b00) Data frame received for 3 I0131 00:07:57.254927 7 log.go:181] (0xc004a56000) (3) Data frame handling I0131 00:07:57.254947 7 log.go:181] (0xc004a56000) (3) Data frame sent I0131 00:07:57.255622 7 log.go:181] (0xc003a78b00) Data frame received for 5 I0131 00:07:57.255646 7 log.go:181] (0xc0014ec000) (5) Data frame handling I0131 00:07:57.255684 7 log.go:181] (0xc003a78b00) Data frame received for 3 I0131 00:07:57.255704 7 log.go:181] (0xc004a56000) (3) Data frame handling I0131 00:07:57.257454 7 log.go:181] (0xc003a78b00) Data frame received for 1 I0131 00:07:57.257469 7 log.go:181] (0xc004700e60) (1) Data frame handling I0131 00:07:57.257476 7 log.go:181] (0xc004700e60) (1) Data frame sent I0131 00:07:57.257484 7 log.go:181] (0xc003a78b00) (0xc004700e60) Stream removed, broadcasting: 1 I0131 00:07:57.257548 7 log.go:181] (0xc003a78b00) Go away received I0131 00:07:57.257597 7 log.go:181] (0xc003a78b00) (0xc004700e60) Stream removed, broadcasting: 1 I0131 00:07:57.257634 7 log.go:181] (0xc003a78b00) (0xc004a56000) Stream removed, broadcasting: 3 I0131 00:07:57.257656 7 log.go:181] (0xc003a78b00) (0xc0014ec000) Stream removed, broadcasting: 5 Jan 31 00:07:57.257: INFO: Waiting for responses: map[] Jan 31 00:07:57.257: INFO: reached 10.244.1.124 after 0/1 tries Jan 31 00:07:57.257: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:07:57.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5809" for this suite. 
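Each "breadth first check" above runs curl inside the test-container-pod against the agnhost /dial endpoint, which relays a UDP hostname probe to the target netserver pod and reports the answers it collected. The same check can be issued by hand with the IPs from this run (10.244.2.13 is the test pod, 10.244.2.12 one of the netservers):

  kubectl exec -n pod-network-test-5809 test-container-pod -- \
    /bin/sh -c "curl -g -q -s 'http://10.244.2.13:9080/dial?request=hostname&protocol=udp&host=10.244.2.12&port=8081&tries=1'"
  # A body like {"responses":["netserver-0"]} confirms the UDP path end to end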
• [SLOW TEST:24.749 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:27 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:30 should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":311,"completed":25,"skipped":562,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:07:57.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:08:08.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7890" for this suite. • [SLOW TEST:11.172 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":311,"completed":26,"skipped":576,"failed":0} SSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:08:08.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: getting the auto-created API token Jan 31 00:08:09.072: INFO: created pod pod-service-account-defaultsa Jan 31 00:08:09.072: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jan 31 00:08:09.091: INFO: created pod pod-service-account-mountsa Jan 31 00:08:09.091: INFO: pod pod-service-account-mountsa service account token volume mount: true Jan 31 00:08:09.108: INFO: created pod pod-service-account-nomountsa Jan 31 00:08:09.108: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jan 31 00:08:09.121: INFO: created pod pod-service-account-defaultsa-mountspec Jan 31 00:08:09.121: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jan 31 00:08:09.212: INFO: created pod pod-service-account-mountsa-mountspec Jan 31 00:08:09.212: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jan 31 00:08:09.246: INFO: created pod pod-service-account-nomountsa-mountspec Jan 31 00:08:09.246: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jan 31 00:08:09.282: INFO: created pod pod-service-account-defaultsa-nomountspec Jan 31 00:08:09.282: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jan 31 00:08:09.337: INFO: created pod pod-service-account-mountsa-nomountspec Jan 31 00:08:09.337: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jan 31 00:08:09.358: INFO: created pod pod-service-account-nomountsa-nomountspec Jan 31 00:08:09.358: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:08:09.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9653" for this suite. 
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":311,"completed":27,"skipped":580,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:08:09.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating pod busybox-2e9ed2de-8089-4145-b18a-d486d0f2a315 in namespace container-probe-422 Jan 31 00:08:22.144: INFO: Started pod busybox-2e9ed2de-8089-4145-b18a-d486d0f2a315 in namespace container-probe-422 STEP: checking the pod's current state and verifying that restartCount is present Jan 31 00:08:22.202: INFO: Initial restart count of pod busybox-2e9ed2de-8089-4145-b18a-d486d0f2a315 is 0 Jan 31 00:09:10.656: INFO: Restart count of pod container-probe-422/busybox-2e9ed2de-8089-4145-b18a-d486d0f2a315 is now 1 (48.454129976s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:09:10.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-422" for this suite. 
• [SLOW TEST:61.327 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":311,"completed":28,"skipped":590,"failed":0} SSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:09:10.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-1537.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-1537.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1537.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-1537.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-1537.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1537.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 31 00:09:17.004: INFO: DNS probes using dns-1537/dns-test-d858cd54-80cd-4001-9ee2-7400dbd11a8f succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:09:17.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1537" for this suite. 
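The wheezy/jessie probe scripts above verify two things: the pod's hostname resolves inside the headless service's domain, and the pod's A record (dashed-IP form) answers over both UDP and TCP. Equivalent one-off checks from a shell in any cluster pod, using the names from this run (the dashed IP 10-244-1-5 is illustrative):

  getent hosts dns-querier-2.dns-test-service-2.dns-1537.svc.cluster.local
  dig +notcp +noall +answer 10-244-1-5.dns-1537.pod.cluster.local A   # UDP
  dig +tcp   +noall +answer 10-244-1-5.dns-1537.pod.cluster.local A   # TCP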
• [SLOW TEST:6.419 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":311,"completed":29,"skipped":595,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:09:17.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 31 00:09:18.660: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 31 00:09:20.759: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747648559, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747648559, loc:(*time.Location)(0x79bd420)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747648559, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747648558, loc:(*time.Location)(0x79bd420)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 31 00:09:22.764: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747648559, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747648559, loc:(*time.Location)(0x79bd420)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747648559, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747648558, 
loc:(*time.Location)(0x79bd420)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 31 00:09:25.938: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Jan 31 00:09:26.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5485-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:09:27.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8170" for this suite. STEP: Destroying namespace "webhook-8170-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:10.308 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":311,"completed":30,"skipped":671,"failed":0} S ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:09:27.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 [It] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Jan 31 00:09:27.555: INFO: Create a RollingUpdate DaemonSet Jan 31 00:09:27.560: INFO: Check that daemon pods launch on every node of the cluster Jan 31 00:09:27.630: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:09:27.633: INFO: Number of nodes with available pods: 0 Jan 31 00:09:27.633: INFO: Node 
latest-worker is running more than one daemon pod Jan 31 00:09:28.640: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:09:28.644: INFO: Number of nodes with available pods: 0 Jan 31 00:09:28.644: INFO: Node latest-worker is running more than one daemon pod Jan 31 00:09:29.637: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:09:29.640: INFO: Number of nodes with available pods: 0 Jan 31 00:09:29.640: INFO: Node latest-worker is running more than one daemon pod Jan 31 00:09:30.798: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:09:30.801: INFO: Number of nodes with available pods: 0 Jan 31 00:09:30.801: INFO: Node latest-worker is running more than one daemon pod Jan 31 00:09:31.638: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:09:31.640: INFO: Number of nodes with available pods: 0 Jan 31 00:09:31.640: INFO: Node latest-worker is running more than one daemon pod Jan 31 00:09:32.637: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:09:32.640: INFO: Number of nodes with available pods: 2 Jan 31 00:09:32.640: INFO: Number of running nodes: 2, number of available pods: 2 Jan 31 00:09:32.641: INFO: Update the DaemonSet to trigger a rollout Jan 31 00:09:32.647: INFO: Updating DaemonSet daemon-set Jan 31 00:10:01.745: INFO: Roll back the DaemonSet before rollout is complete Jan 31 00:10:01.753: INFO: Updating DaemonSet daemon-set Jan 31 00:10:01.753: INFO: Make sure DaemonSet rollback is complete Jan 31 00:10:01.850: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:01.850: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:01.867: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:02.872: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:02.872: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:02.877: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:03.872: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:03.872: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:03.878: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:04.872: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Jan 31 00:10:04.872: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:04.876: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:05.872: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:05.872: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:05.877: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:06.872: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:06.872: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:06.877: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:07.873: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:07.873: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:07.878: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:08.873: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:08.873: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:08.877: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:09.873: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:09.873: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:09.878: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:10.873: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:10.873: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:10.877: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:11.872: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:11.872: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:11.877: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:12.872: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:12.872: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:12.877: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:13.873: INFO: Wrong image for pod: daemon-set-dcdq5. 
Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:13.873: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:13.878: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:14.871: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:14.871: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:14.875: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:15.871: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:15.871: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:15.875: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:16.872: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:16.872: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:16.876: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:17.873: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:17.873: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:17.877: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:18.872: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:18.872: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:18.877: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:19.960: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:19.960: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:19.964: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:20.872: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:20.872: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:20.877: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:21.872: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Jan 31 00:10:21.872: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:21.877: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:22.872: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:22.872: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:22.876: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:23.881: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:23.881: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:23.884: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:24.873: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:24.873: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:24.877: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:25.872: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:25.872: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:25.881: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:26.872: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:26.872: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:26.877: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:27.901: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:27.901: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:27.905: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:28.873: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:28.873: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:28.877: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:29.901: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:29.901: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:29.905: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:30.871: INFO: Wrong image for pod: daemon-set-dcdq5. 
Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:30.871: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:30.875: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:31.871: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:31.871: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:31.876: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:32.872: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:32.872: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:32.876: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:33.873: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:33.873: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:33.877: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:34.872: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:34.872: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:34.875: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:35.873: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:35.873: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:35.878: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:36.872: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:36.872: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:36.876: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:37.872: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:37.872: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:37.876: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:38.872: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Jan 31 00:10:38.872: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:38.875: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:39.872: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:39.872: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:39.877: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:40.873: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:40.873: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:40.878: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:41.872: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:41.872: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:41.876: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:42.872: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:42.872: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:42.876: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:43.872: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:43.872: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:43.877: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:44.872: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:44.872: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:44.874: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:45.871: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:45.871: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:45.874: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:46.872: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:46.872: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:46.877: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:47.873: INFO: Wrong image for pod: daemon-set-dcdq5. 
Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:47.873: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:47.878: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:48.888: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:48.888: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:48.893: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:49.873: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:49.873: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:49.878: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:50.872: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:50.872: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:50.880: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:51.872: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:51.872: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:51.877: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:52.872: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:52.872: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:52.878: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:53.873: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:53.873: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:53.879: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:54.872: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:54.872: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:54.877: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:55.873: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Jan 31 00:10:55.873: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:55.878: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:56.893: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:56.894: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:56.899: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:57.872: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:57.872: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:57.876: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:58.872: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:58.872: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:58.876: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:10:59.873: INFO: Wrong image for pod: daemon-set-dcdq5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 31 00:10:59.873: INFO: Pod daemon-set-dcdq5 is not available Jan 31 00:10:59.877: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:11:00.891: INFO: Pod daemon-set-2pmmw is not available Jan 31 00:11:00.895: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1758, will wait for the garbage collector to delete the pods Jan 31 00:11:00.959: INFO: Deleting DaemonSet.extensions daemon-set took: 6.322894ms Jan 31 00:11:01.560: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.246441ms Jan 31 00:12:00.767: INFO: Number of nodes with available pods: 0 Jan 31 00:12:00.767: INFO: Number of running nodes: 0, number of available pods: 0 Jan 31 00:12:00.773: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"1110952"},"items":null} Jan 31 00:12:00.776: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1110952"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:12:00.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1758" for this suite. 
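The long poll above is the rollback test waiting out a deliberately broken update: the DaemonSet image was switched to foo:non-existent, only one pod (daemon-set-dcdq5) became unavailable, and the rollback restored it at 00:11:00 without touching the healthy pods. As a sketch of the kind of object under test (the httpd tag is taken from the log; the container name and labels are illustrative, not the e2e framework's actual code):

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// RollingUpdate is the strategy under test: pushing a bad image
			// (foo:non-existent above) leaves one pod unavailable, and rolling
			// back restores this template without replacing healthy pods.
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(out))
}

The rollback itself corresponds to kubectl rollout undo daemonset/daemon-set; because only the pod carrying the bad image was replaced, the rest keep running, which is the "without unnecessary restarts" property being verified.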
• [SLOW TEST:153.311 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":311,"completed":31,"skipped":672,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:12:00.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Jan 31 00:12:00.859: INFO: >>> kubeConfig: /root/.kube/config Jan 31 00:12:04.494: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:12:18.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2040" for this suite. 
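The crd-publish-openapi test above registers two CRDs in different API groups and checks that both are published into the apiserver's OpenAPI document. A minimal sketch of one such CRD, assuming a hypothetical stable.example.com group and kind (the real test generates randomized names):

package main

import (
	"encoding/json"
	"fmt"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	crd := apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-foos.stable.example.com"},
		Spec: apiextv1.CustomResourceDefinitionSpec{
			// A second CRD in a different group would also show up under /openapi/v2.
			Group: "stable.example.com",
			Scope: apiextv1.NamespaceScoped,
			Names: apiextv1.CustomResourceDefinitionNames{
				Plural:   "e2e-test-foos",
				Singular: "e2e-test-foo",
				Kind:     "E2eTestFoo",
				ListKind: "E2eTestFooList",
			},
			Versions: []apiextv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				// In apiextensions/v1 a structural schema is mandatory, and it is
				// this schema that the kube-apiserver publishes as OpenAPI.
				Schema: &apiextv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextv1.JSONSchemaProps{Type: "object"},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(crd, "", "  ")
	fmt.Println(string(out))
}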
• [SLOW TEST:17.632 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":311,"completed":32,"skipped":681,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:12:18.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating pod busybox-4f4e2bb5-1897-4471-ba0e-197ccd3b590e in namespace container-probe-6081 Jan 31 00:12:22.539: INFO: Started pod busybox-4f4e2bb5-1897-4471-ba0e-197ccd3b590e in namespace container-probe-6081 STEP: checking the pod's current state and verifying that restartCount is present Jan 31 00:12:22.542: INFO: Initial restart count of pod busybox-4f4e2bb5-1897-4471-ba0e-197ccd3b590e is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:16:23.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6081" for this suite. 
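This probe test verifies the negative case: a liveness probe that keeps succeeding must never increase restartCount over the roughly four-minute observation window (00:12:22 to 00:16:23 above). A sketch of the pod shape it creates; the busybox tag and probe timings are plausible assumptions, not values from the log:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The probe succeeds for as long as /tmp/health exists, so the kubelet
	// never restarts the container and restartCount stays at 0.
	probe := &corev1.Probe{InitialDelaySeconds: 5, PeriodSeconds: 5}
	probe.Exec = &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}}

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-liveness"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:          "busybox",
				Image:         "docker.io/library/busybox:1.29",
				Command:       []string{"/bin/sh", "-c", "touch /tmp/health; sleep 600"},
				LivenessProbe: probe,
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}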
• [SLOW TEST:244.965 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":311,"completed":33,"skipped":693,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:16:23.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating configMap with name configmap-test-volume-map-ee234811-1e2b-41b2-9afc-d9dfdc7ca02a STEP: Creating a pod to test consume configMaps Jan 31 00:16:23.629: INFO: Waiting up to 5m0s for pod "pod-configmaps-57c1bc54-8858-4f09-94ea-8658e6cef61f" in namespace "configmap-8761" to be "Succeeded or Failed" Jan 31 00:16:23.654: INFO: Pod "pod-configmaps-57c1bc54-8858-4f09-94ea-8658e6cef61f": Phase="Pending", Reason="", readiness=false. Elapsed: 24.358234ms Jan 31 00:16:25.657: INFO: Pod "pod-configmaps-57c1bc54-8858-4f09-94ea-8658e6cef61f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027905167s Jan 31 00:16:27.663: INFO: Pod "pod-configmaps-57c1bc54-8858-4f09-94ea-8658e6cef61f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033221885s Jan 31 00:16:29.667: INFO: Pod "pod-configmaps-57c1bc54-8858-4f09-94ea-8658e6cef61f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.037221456s STEP: Saw pod success Jan 31 00:16:29.667: INFO: Pod "pod-configmaps-57c1bc54-8858-4f09-94ea-8658e6cef61f" satisfied condition "Succeeded or Failed" Jan 31 00:16:29.670: INFO: Trying to get logs from node latest-worker pod pod-configmaps-57c1bc54-8858-4f09-94ea-8658e6cef61f container agnhost-container: STEP: delete the pod Jan 31 00:16:29.725: INFO: Waiting for pod pod-configmaps-57c1bc54-8858-4f09-94ea-8658e6cef61f to disappear Jan 31 00:16:29.735: INFO: Pod pod-configmaps-57c1bc54-8858-4f09-94ea-8658e6cef61f no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:16:29.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8761" for this suite. 
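The configmap volume test above checks two things at once: key-to-path mappings and readability when the pod runs as a non-root UID. A sketch under those assumptions; the configmap name is abbreviated from the log, while the key, path, UID, and the busybox reader are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1000)
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps"},
		Spec: corev1.PodSpec{
			// Non-root: every container in the pod runs as UID 1000.
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
						// The "mappings": project key data-2 to an explicit path
						// instead of a file named after the key.
						Items: []corev1.KeyToPath{{Key: "data-2", Path: "path/to/data-2"}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "docker.io/library/busybox:1.29",
				Command:      []string{"/bin/sh", "-c", "cat /etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}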
• [SLOW TEST:6.353 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":311,"completed":34,"skipped":721,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:16:29.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating projection with secret that has name projected-secret-test-c37140a7-ca66-405a-97e2-e7bfa46b4419 STEP: Creating a pod to test consume secrets Jan 31 00:16:29.853: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1080c568-f6cc-46e7-9e36-2f70212d4f8c" in namespace "projected-9653" to be "Succeeded or Failed" Jan 31 00:16:29.868: INFO: Pod "pod-projected-secrets-1080c568-f6cc-46e7-9e36-2f70212d4f8c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.75039ms Jan 31 00:16:31.872: INFO: Pod "pod-projected-secrets-1080c568-f6cc-46e7-9e36-2f70212d4f8c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018761896s Jan 31 00:16:33.877: INFO: Pod "pod-projected-secrets-1080c568-f6cc-46e7-9e36-2f70212d4f8c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02394773s STEP: Saw pod success Jan 31 00:16:33.877: INFO: Pod "pod-projected-secrets-1080c568-f6cc-46e7-9e36-2f70212d4f8c" satisfied condition "Succeeded or Failed" Jan 31 00:16:33.880: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-1080c568-f6cc-46e7-9e36-2f70212d4f8c container projected-secret-volume-test: STEP: delete the pod Jan 31 00:16:33.912: INFO: Waiting for pod pod-projected-secrets-1080c568-f6cc-46e7-9e36-2f70212d4f8c to disappear Jan 31 00:16:33.927: INFO: Pod pod-projected-secrets-1080c568-f6cc-46e7-9e36-2f70212d4f8c no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:16:33.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9653" for this suite. 
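The projected-secret test is the same consume-and-verify pattern, but the secret reaches the pod through a projected volume. A sketch with the secret name shortened from the log and a hypothetical key (data-1) standing in for the real payload:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				// A projected volume can merge secrets, configmaps, and downward
				// API data under one mount; here it carries a single secret.
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"/bin/sh", "-c", "cat /etc/projected-secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "projected-secret-volume", MountPath: "/etc/projected-secret-volume", ReadOnly: true,
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}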
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":311,"completed":35,"skipped":738,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:16:34.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jan 31 00:16:38.783: INFO: Successfully updated pod "pod-update-activedeadlineseconds-1bb410f4-8d0d-4601-bc3c-a3bce983cf0a" Jan 31 00:16:38.783: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-1bb410f4-8d0d-4601-bc3c-a3bce983cf0a" in namespace "pods-6744" to be "terminated due to deadline exceeded" Jan 31 00:16:38.789: INFO: Pod "pod-update-activedeadlineseconds-1bb410f4-8d0d-4601-bc3c-a3bce983cf0a": Phase="Running", Reason="", readiness=true. Elapsed: 5.395476ms Jan 31 00:16:40.822: INFO: Pod "pod-update-activedeadlineseconds-1bb410f4-8d0d-4601-bc3c-a3bce983cf0a": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.038992631s Jan 31 00:16:40.822: INFO: Pod "pod-update-activedeadlineseconds-1bb410f4-8d0d-4601-bc3c-a3bce983cf0a" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:16:40.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6744" for this suite. 
• [SLOW TEST:6.757 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":311,"completed":36,"skipped":746,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:16:40.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating pod pod-subpath-test-configmap-7mvk STEP: Creating a pod to test atomic-volume-subpath Jan 31 00:16:41.007: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-7mvk" in namespace "subpath-6672" to be "Succeeded or Failed" Jan 31 00:16:41.011: INFO: Pod "pod-subpath-test-configmap-7mvk": Phase="Pending", Reason="", readiness=false. Elapsed: 3.791262ms Jan 31 00:16:43.015: INFO: Pod "pod-subpath-test-configmap-7mvk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007680273s Jan 31 00:16:45.018: INFO: Pod "pod-subpath-test-configmap-7mvk": Phase="Running", Reason="", readiness=true. Elapsed: 4.011451496s Jan 31 00:16:47.022: INFO: Pod "pod-subpath-test-configmap-7mvk": Phase="Running", Reason="", readiness=true. Elapsed: 6.015280432s Jan 31 00:16:49.027: INFO: Pod "pod-subpath-test-configmap-7mvk": Phase="Running", Reason="", readiness=true. Elapsed: 8.020314477s Jan 31 00:16:51.032: INFO: Pod "pod-subpath-test-configmap-7mvk": Phase="Running", Reason="", readiness=true. Elapsed: 10.024773484s Jan 31 00:16:53.036: INFO: Pod "pod-subpath-test-configmap-7mvk": Phase="Running", Reason="", readiness=true. Elapsed: 12.028784341s Jan 31 00:16:55.041: INFO: Pod "pod-subpath-test-configmap-7mvk": Phase="Running", Reason="", readiness=true. Elapsed: 14.033619077s Jan 31 00:16:57.044: INFO: Pod "pod-subpath-test-configmap-7mvk": Phase="Running", Reason="", readiness=true. Elapsed: 16.037198934s Jan 31 00:16:59.048: INFO: Pod "pod-subpath-test-configmap-7mvk": Phase="Running", Reason="", readiness=true. Elapsed: 18.041579246s Jan 31 00:17:01.054: INFO: Pod "pod-subpath-test-configmap-7mvk": Phase="Running", Reason="", readiness=true. Elapsed: 20.046692621s Jan 31 00:17:03.058: INFO: Pod "pod-subpath-test-configmap-7mvk": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.051037747s Jan 31 00:17:05.062: INFO: Pod "pod-subpath-test-configmap-7mvk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.055258452s STEP: Saw pod success Jan 31 00:17:05.062: INFO: Pod "pod-subpath-test-configmap-7mvk" satisfied condition "Succeeded or Failed" Jan 31 00:17:05.068: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-7mvk container test-container-subpath-configmap-7mvk: STEP: delete the pod Jan 31 00:17:05.193: INFO: Waiting for pod pod-subpath-test-configmap-7mvk to disappear Jan 31 00:17:05.222: INFO: Pod pod-subpath-test-configmap-7mvk no longer exists STEP: Deleting pod pod-subpath-test-configmap-7mvk Jan 31 00:17:05.222: INFO: Deleting pod "pod-subpath-test-configmap-7mvk" in namespace "subpath-6672" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:17:05.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6672" for this suite. • [SLOW TEST:24.367 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":311,"completed":37,"skipped":761,"failed":0} SSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:17:05.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0131 00:17:06.916563 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Jan 31 00:18:08.948: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:18:08.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2740" for this suite. 
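The garbage-collector test deletes a Deployment without orphaning and then waits for the owned ReplicaSet and Pods to disappear; the "expected 0 rs, got 1 rs" lines are that wait in progress. Driving the same deletion from client-go might look like the following sketch, where the kubeconfig path, namespace, and deployment name are all assumptions:

package main

import (
	"context"
	"log"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// "Not orphaning": background propagation tells the garbage collector to
	// delete the owned ReplicaSet (and its Pods) after the Deployment is gone;
	// metav1.DeletePropagationOrphan would leave them running instead.
	policy := metav1.DeletePropagationBackground
	if err := client.AppsV1().Deployments("default").Delete(
		context.TODO(), "test-deployment", metav1.DeleteOptions{PropagationPolicy: &policy},
	); err != nil {
		log.Fatal(err)
	}
}

Note that the MetricsGrabber warning in the log is incidental: metrics gathering is skipped, while the garbage-collection check itself passes.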
• [SLOW TEST:63.727 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":311,"completed":38,"skipped":764,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:18:08.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-5033 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a new StatefulSet Jan 31 00:18:09.110: INFO: Found 0 stateful pods, waiting for 3 Jan 31 00:18:19.427: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 31 00:18:19.427: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 31 00:18:19.427: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 31 00:18:29.130: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 31 00:18:29.130: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 31 00:18:29.130: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Jan 31 00:18:29.162: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Jan 31 00:18:39.225: INFO: Updating stateful set ss2 Jan 31 00:18:39.261: INFO: Waiting for Pod statefulset-5033/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 31 00:18:49.267: INFO: Waiting for Pod statefulset-5033/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 31 00:18:59.267: INFO: Waiting for Pod statefulset-5033/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Jan 31 00:19:10.159: INFO: Found 
2 stateful pods, waiting for 3 Jan 31 00:19:20.167: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 31 00:19:20.167: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 31 00:19:20.167: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Jan 31 00:19:20.220: INFO: Updating stateful set ss2 Jan 31 00:19:20.371: INFO: Waiting for Pod statefulset-5033/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 31 00:19:30.400: INFO: Updating stateful set ss2 Jan 31 00:19:30.442: INFO: Waiting for StatefulSet statefulset-5033/ss2 to complete update Jan 31 00:19:30.442: INFO: Waiting for Pod statefulset-5033/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 31 00:19:40.450: INFO: Waiting for StatefulSet statefulset-5033/ss2 to complete update Jan 31 00:19:40.450: INFO: Waiting for Pod statefulset-5033/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 31 00:19:50.450: INFO: Waiting for StatefulSet statefulset-5033/ss2 to complete update Jan 31 00:19:50.450: INFO: Waiting for Pod statefulset-5033/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 31 00:20:00.451: INFO: Waiting for StatefulSet statefulset-5033/ss2 to complete update Jan 31 00:20:00.451: INFO: Waiting for Pod statefulset-5033/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 31 00:20:10.450: INFO: Waiting for StatefulSet statefulset-5033/ss2 to complete update Jan 31 00:20:10.450: INFO: Waiting for Pod statefulset-5033/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 31 00:20:20.451: INFO: Waiting for StatefulSet statefulset-5033/ss2 to complete update Jan 31 00:20:20.451: INFO: Waiting for Pod statefulset-5033/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 31 00:20:30.451: INFO: Waiting for StatefulSet statefulset-5033/ss2 to complete update Jan 31 00:20:30.451: INFO: Waiting for Pod statefulset-5033/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jan 31 00:20:40.451: INFO: Deleting all statefulset in ns statefulset-5033 Jan 31 00:20:40.454: INFO: Scaling statefulset ss2 to 0 Jan 31 00:21:40.488: INFO: Waiting for statefulset status.replicas updated to 0 Jan 31 00:21:40.490: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:21:40.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5033" for this suite. 
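The canary and phased behaviour above is controlled by the RollingUpdate partition: pods with an ordinal greater than or equal to the partition move to the new revision, the rest stay on the old one. A sketch of just the strategy stanza, with the partition chosen to match the three-replica canary in the log (only ss2-2 updates at partition 2; lowering it step by step, 2 then 1 then 0, produces the phased roll seen above):

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
)

func main() {
	partition := int32(2)
	strategy := appsv1.StatefulSetUpdateStrategy{
		Type: appsv1.RollingUpdateStatefulSetStrategyType,
		// Only ordinals >= Partition are updated; the rest keep the old revision.
		RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{Partition: &partition},
	}
	out, _ := json.MarshalIndent(strategy, "", "  ")
	fmt.Println(string(out))
}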
• [SLOW TEST:211.576 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":311,"completed":39,"skipped":792,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:21:40.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test emptydir 0644 on tmpfs Jan 31 00:21:40.651: INFO: Waiting up to 5m0s for pod "pod-7fa08648-2535-490a-ac06-826ceefca328" in namespace "emptydir-5310" to be "Succeeded or Failed" Jan 31 00:21:40.662: INFO: Pod "pod-7fa08648-2535-490a-ac06-826ceefca328": Phase="Pending", Reason="", readiness=false. Elapsed: 11.248377ms Jan 31 00:21:42.681: INFO: Pod "pod-7fa08648-2535-490a-ac06-826ceefca328": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03005077s Jan 31 00:21:44.685: INFO: Pod "pod-7fa08648-2535-490a-ac06-826ceefca328": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034120841s STEP: Saw pod success Jan 31 00:21:44.685: INFO: Pod "pod-7fa08648-2535-490a-ac06-826ceefca328" satisfied condition "Succeeded or Failed" Jan 31 00:21:44.687: INFO: Trying to get logs from node latest-worker pod pod-7fa08648-2535-490a-ac06-826ceefca328 container test-container: STEP: delete the pod Jan 31 00:21:44.754: INFO: Waiting for pod pod-7fa08648-2535-490a-ac06-826ceefca328 to disappear Jan 31 00:21:44.789: INFO: Pod pod-7fa08648-2535-490a-ac06-826ceefca328 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:21:44.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5310" for this suite. 
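The emptydir test writes a 0644 file into a memory-backed volume as a non-root user and verifies the resulting mode and ownership. A sketch under those assumptions; the UID, file name, and busybox commands are illustrative, whereas the real test uses a dedicated mount-test image:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1001)
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// Medium: Memory backs the emptyDir with tmpfs rather than
				// node-local disk, the "(tmpfs)" part of the test name.
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "docker.io/library/busybox:1.29",
				// Write a file as the non-root user and show its 0644 mode.
				Command: []string{"/bin/sh", "-c",
					"umask 0022 && echo hello > /test-volume/f && ls -l /test-volume/f"},
				VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
				SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}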
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":40,"skipped":834,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:21:44.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 31 00:21:45.550: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 31 00:21:47.588: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747649305, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747649305, loc:(*time.Location)(0x79bd420)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747649305, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747649305, loc:(*time.Location)(0x79bd420)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 31 00:21:49.594: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747649305, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747649305, loc:(*time.Location)(0x79bd420)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747649305, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747649305, loc:(*time.Location)(0x79bd420)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 31 00:21:52.642: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be 
able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:22:02.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1331" for this suite. STEP: Destroying namespace "webhook-1331-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:18.227 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":311,"completed":41,"skipped":891,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:22:03.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating pod pod-subpath-test-configmap-ckgc STEP: Creating a pod to test atomic-volume-subpath Jan 31 00:22:03.145: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-ckgc" in namespace "subpath-2494" to be "Succeeded or Failed" Jan 31 00:22:03.148: INFO: Pod "pod-subpath-test-configmap-ckgc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.979364ms Jan 31 00:22:05.153: INFO: Pod "pod-subpath-test-configmap-ckgc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007984918s Jan 31 00:22:07.158: INFO: Pod "pod-subpath-test-configmap-ckgc": Phase="Running", Reason="", readiness=true. Elapsed: 4.013150426s Jan 31 00:22:09.163: INFO: Pod "pod-subpath-test-configmap-ckgc": Phase="Running", Reason="", readiness=true. Elapsed: 6.017893116s Jan 31 00:22:11.186: INFO: Pod "pod-subpath-test-configmap-ckgc": Phase="Running", Reason="", readiness=true. Elapsed: 8.040944742s Jan 31 00:22:13.190: INFO: Pod "pod-subpath-test-configmap-ckgc": Phase="Running", Reason="", readiness=true. Elapsed: 10.04539242s Jan 31 00:22:15.195: INFO: Pod "pod-subpath-test-configmap-ckgc": Phase="Running", Reason="", readiness=true. Elapsed: 12.050061513s Jan 31 00:22:17.199: INFO: Pod "pod-subpath-test-configmap-ckgc": Phase="Running", Reason="", readiness=true. Elapsed: 14.054009351s Jan 31 00:22:19.203: INFO: Pod "pod-subpath-test-configmap-ckgc": Phase="Running", Reason="", readiness=true. Elapsed: 16.058578496s Jan 31 00:22:21.213: INFO: Pod "pod-subpath-test-configmap-ckgc": Phase="Running", Reason="", readiness=true. Elapsed: 18.068237033s Jan 31 00:22:23.218: INFO: Pod "pod-subpath-test-configmap-ckgc": Phase="Running", Reason="", readiness=true. Elapsed: 20.072667767s Jan 31 00:22:25.223: INFO: Pod "pod-subpath-test-configmap-ckgc": Phase="Running", Reason="", readiness=true. Elapsed: 22.078015962s Jan 31 00:22:27.228: INFO: Pod "pod-subpath-test-configmap-ckgc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.083132526s STEP: Saw pod success Jan 31 00:22:27.228: INFO: Pod "pod-subpath-test-configmap-ckgc" satisfied condition "Succeeded or Failed" Jan 31 00:22:27.231: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-ckgc container test-container-subpath-configmap-ckgc: STEP: delete the pod Jan 31 00:22:27.296: INFO: Waiting for pod pod-subpath-test-configmap-ckgc to disappear Jan 31 00:22:27.302: INFO: Pod pod-subpath-test-configmap-ckgc no longer exists STEP: Deleting pod pod-subpath-test-configmap-ckgc Jan 31 00:22:27.302: INFO: Deleting pod "pod-subpath-test-configmap-ckgc" in namespace "subpath-2494" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:22:27.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2494" for this suite. 
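Both subpath tests in this run follow the same shape: mount a single configmap key via subPath, then keep the pod alive (the roughly 24 seconds of Running polls above) while the volume is rewritten, proving the subPath mount stays stable across atomic updates. A sketch with hypothetical configmap and key names:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-configmap"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"/bin/sh", "-c", "cat /test-volume/my-key"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume/my-key",
					// SubPath exposes one key of the atomically-updated volume
					// rather than the whole directory.
					SubPath: "my-key",
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}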
• [SLOW TEST:24.288 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":311,"completed":42,"skipped":904,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:22:27.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 31 00:22:28.371: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 31 00:22:30.382: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747649348, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747649348, loc:(*time.Location)(0x79bd420)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747649348, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747649348, loc:(*time.Location)(0x79bd420)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 31 00:22:32.414: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747649348, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747649348, loc:(*time.Location)(0x79bd420)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747649348, 
loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747649348, loc:(*time.Location)(0x79bd420)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 31 00:22:35.437: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:22:35.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2757" for this suite. STEP: Destroying namespace "webhook-2757-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:8.844 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":311,"completed":43,"skipped":918,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:22:36.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 [It] deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Jan 31 00:22:36.255: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jan 31 00:22:41.259: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 31 00:22:41.259: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 Jan 31 00:22:41.336: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-1219 
2978d332-f2eb-4135-8939-0f65dc59b99b 1113493 1 2021-01-31 00:22:41 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2021-01-31 00:22:41 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0039fb6c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Jan 31 00:22:41.382: INFO: New ReplicaSet "test-cleanup-deployment-685c4f8568" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-685c4f8568 deployment-1219 0a4101ea-9531-46df-a764-5c354e3f8264 1113495 1 2021-01-31 00:22:41 +0000 UTC map[name:cleanup-pod pod-template-hash:685c4f8568] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 2978d332-f2eb-4135-8939-0f65dc59b99b 0xc0039fbbf7 0xc0039fbbf8}] [] [{kube-controller-manager Update apps/v1 2021-01-31 00:22:41 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2978d332-f2eb-4135-8939-0f65dc59b99b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 685c4f8568,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:685c4f8568] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0039fbc88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 31 00:22:41.382: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jan 31 00:22:41.382: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-1219 7ddf42ef-4983-4217-99a1-17534d8e385d 1113494 1 2021-01-31 00:22:36 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 2978d332-f2eb-4135-8939-0f65dc59b99b 0xc0039fbae7 0xc0039fbae8}] [] [{e2e.test Update apps/v1 2021-01-31 00:22:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-01-31 00:22:41 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"2978d332-f2eb-4135-8939-0f65dc59b99b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: 
httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0039fbb88 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 31 00:22:41.427: INFO: Pod "test-cleanup-controller-dgnck" is available: &Pod{ObjectMeta:{test-cleanup-controller-dgnck test-cleanup-controller- deployment-1219 b87c9727-69ef-48fe-8aa1-6f6914b24ddf 1113475 0 2021-01-31 00:22:36 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 7ddf42ef-4983-4217-99a1-17534d8e385d 0xc0045e2b67 0xc0045e2b68}] [] [{kube-controller-manager Update v1 2021-01-31 00:22:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ddf42ef-4983-4217-99a1-17534d8e385d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-31 00:22:39 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.56\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zxlrd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zxlrd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zxlrd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:22:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:22:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-01-31 00:22:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:22:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.2.56,StartTime:2021-01-31 00:22:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-31 00:22:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://3d43e9df54ce01505bf424a21dbb8a1c7765d577e1e5f1d5960d157a87d85d26,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.56,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 31 00:22:41.427: INFO: Pod "test-cleanup-deployment-685c4f8568-vc2pt" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-685c4f8568-vc2pt test-cleanup-deployment-685c4f8568- deployment-1219 92df48fc-6395-44f9-8be1-6d313e78e060 1113500 0 2021-01-31 00:22:41 +0000 UTC map[name:cleanup-pod pod-template-hash:685c4f8568] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-685c4f8568 0a4101ea-9531-46df-a764-5c354e3f8264 0xc0045e2d57 0xc0045e2d58}] [] [{kube-controller-manager Update v1 2021-01-31 00:22:41 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0a4101ea-9531-46df-a764-5c354e3f8264\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zxlrd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zxlrd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zxlrd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:n
il,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:22:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:22:41.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1219" for this suite. 
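------------------------------
[editor's note] The "deployment should delete old replica sets" test above hinges on the RevisionHistoryLimit:*0 visible in the Deployment dump: with a history limit of 0, the Deployment controller garbage-collects each superseded ReplicaSet (here "test-cleanup-controller", which the Deployment adopted and then scaled down) as soon as the rollout to the new ReplicaSet completes. The following client-go sketch reproduces the same setup; the kubeconfig path, namespace, image, and object names are taken from the log, but the code itself (including the int32Ptr helper) is an illustrative reconstruction, not the e2e framework's implementation.

package main

import (
    "context"
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
    // Kubeconfig path as reported by this test run.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    d := &appsv1.Deployment{
        ObjectMeta: metav1.ObjectMeta{Name: "test-cleanup-deployment"},
        Spec: appsv1.DeploymentSpec{
            Replicas: int32Ptr(1),
            // 0 tells the controller to delete superseded ReplicaSets
            // immediately instead of retaining them for rollback.
            RevisionHistoryLimit: int32Ptr(0),
            Selector: &metav1.LabelSelector{
                MatchLabels: map[string]string{"name": "cleanup-pod"},
            },
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{
                    Labels: map[string]string{"name": "cleanup-pod"},
                },
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "agnhost",
                        Image: "k8s.gcr.io/e2e-test-images/agnhost:2.21",
                    }},
                },
            },
        },
    }
    created, err := client.AppsV1().Deployments("deployment-1219").Create(
        context.TODO(), d, metav1.CreateOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Println("created deployment:", created.Name)
}

The trade-off: with revisionHistoryLimit set to 0, no old ReplicaSet survives, so "kubectl rollout undo" has nothing to roll back to; that is why the default is 10, as seen in the RevisionHistoryLimit:*10 on the "webserver-deployment" later in this log.
------------------------------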
• [SLOW TEST:5.334 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":311,"completed":44,"skipped":938,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:22:41.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should provide secure master service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:22:41.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-968" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":311,"completed":45,"skipped":1010,"failed":0} SSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:22:41.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Jan 31 00:22:41.878: INFO: Waiting up to 1m0s for all nodes to be ready Jan 31 00:23:41.902: INFO: Waiting for terminating namespaces to be deleted... [It] validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Create pods that use 2/3 of node resources. Jan 31 00:23:41.945: INFO: Created pod: pod0-sched-preemption-low-priority Jan 31 00:23:42.020: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. 
STEP: Run a high priority pod that has same requirements as that of lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:24:36.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-1090" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:114.497 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":311,"completed":46,"skipped":1015,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:24:36.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating projection with secret that has name secret-emptykey-test-7d710a16-eb81-4e28-9d82-a67d5362edb3 [AfterEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:24:36.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2380" for this suite. 
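------------------------------
[editor's note] In the SchedulerPreemption test above, the framework first fills roughly 2/3 of node resources with a low-priority and a medium-priority pod, then runs a high-priority pod with the same requirements; the scheduler must evict the low-priority victim to place it (the default PreemptionPolicy:*PreemptLowerPriority that permits this is visible in every pod dump in this log). A minimal client-go sketch of the two ingredients, a PriorityClass plus a preemptor pod; the class name, pod name, pause image, and the 1Gi request are illustrative stand-ins, since the real test derives its requests from each node's allocatable capacity.

package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    schedulingv1 "k8s.io/api/scheduling/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    ctx := context.TODO()

    // A cluster-scoped priority class; its value only has to exceed
    // the victims' priorities for preemption to be considered.
    pc := &schedulingv1.PriorityClass{
        ObjectMeta: metav1.ObjectMeta{Name: "high-priority"}, // illustrative name
        Value:      1000,
    }
    if _, err := client.SchedulingV1().PriorityClasses().Create(ctx, pc, metav1.CreateOptions{}); err != nil {
        panic(err)
    }

    // A pod whose request cannot be satisfied on any node without
    // evicting a lower-priority pod; the scheduler preempts to fit it.
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "preemptor", Namespace: "sched-preemption-1090"},
        Spec: corev1.PodSpec{
            PriorityClassName: "high-priority",
            Containers: []corev1.Container{{
                Name:  "pause",
                Image: "k8s.gcr.io/pause:3.2", // illustrative image
                Resources: corev1.ResourceRequirements{
                    Requests: corev1.ResourceList{
                        corev1.ResourceMemory: resource.MustParse("1Gi"),
                    },
                },
            }},
        },
    }
    if _, err := client.CoreV1().Pods(pod.Namespace).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
}
------------------------------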
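------------------------------
[editor's note] The Secrets test just above is a pure validation check: the API server must reject a Secret whose data map uses the empty string as a key (data keys have to be non-empty and consist of alphanumerics, '-', '_', or '.'), so the create call fails synchronously and no object is ever stored. A minimal sketch, reusing the generated name and namespace from the log:

package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    secret := &corev1.Secret{
        ObjectMeta: metav1.ObjectMeta{
            Name: "secret-emptykey-test-7d710a16-eb81-4e28-9d82-a67d5362edb3",
        },
        // "" is not a valid data key, so validation rejects the object.
        Data: map[string][]byte{"": []byte("value-1")},
    }
    _, err = client.CoreV1().Secrets("secrets-2380").Create(
        context.TODO(), secret, metav1.CreateOptions{})
    fmt.Println("expected Invalid error from the API server:", err)
}
------------------------------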
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":311,"completed":47,"skipped":1026,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:24:36.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 [It] deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Jan 31 00:24:36.380: INFO: Creating deployment "webserver-deployment" Jan 31 00:24:36.397: INFO: Waiting for observed generation 1 Jan 31 00:24:38.476: INFO: Waiting for all required pods to come up Jan 31 00:24:38.481: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Jan 31 00:24:50.491: INFO: Waiting for deployment "webserver-deployment" to complete Jan 31 00:24:50.497: INFO: Updating deployment "webserver-deployment" with a non-existent image Jan 31 00:24:50.506: INFO: Updating deployment webserver-deployment Jan 31 00:24:50.506: INFO: Waiting for observed generation 2 Jan 31 00:24:52.536: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Jan 31 00:24:52.538: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Jan 31 00:24:52.542: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Jan 31 00:24:52.549: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Jan 31 00:24:52.549: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Jan 31 00:24:52.552: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Jan 31 00:24:52.556: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Jan 31 00:24:52.556: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Jan 31 00:24:52.567: INFO: Updating deployment webserver-deployment Jan 31 00:24:52.567: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Jan 31 00:24:53.427: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Jan 31 00:24:56.325: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 Jan 31 00:24:57.237: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-3903 6e6dc663-4849-4d17-b556-5ee5c86e31e4 1114141 3 2021-01-31 00:24:36 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-01-31 00:24:36 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-01-31 00:24:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003452648 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-01-31 00:24:52 +0000 UTC,LastTransitionTime:2021-01-31 00:24:52 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2021-01-31 00:24:54 +0000 UTC,LastTransitionTime:2021-01-31 00:24:36 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Jan 31 00:24:57.518: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-3903 620a98bf-8bc4-4e39-bb8e-8c2f677ba9cf 1114137 3 2021-01-31 00:24:50 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 6e6dc663-4849-4d17-b556-5ee5c86e31e4 
0xc003452a07 0xc003452a08}] [] [{kube-controller-manager Update apps/v1 2021-01-31 00:24:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e6dc663-4849-4d17-b556-5ee5c86e31e4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003452a88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 31 00:24:57.518: INFO: All old ReplicaSets of Deployment "webserver-deployment": Jan 31 00:24:57.518: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-dd94f59b7 deployment-3903 1bd5fc24-4605-46c3-b784-5f87165ccf28 1114122 3 2021-01-31 00:24:36 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 6e6dc663-4849-4d17-b556-5ee5c86e31e4 0xc003452ae7 0xc003452ae8}] [] [{kube-controller-manager Update apps/v1 2021-01-31 00:24:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e6dc663-4849-4d17-b556-5ee5c86e31e4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: dd94f59b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003452b58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Jan 31 00:24:57.771: INFO: Pod "webserver-deployment-795d758f88-2nhpq" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-2nhpq webserver-deployment-795d758f88- deployment-3903 e35a606d-094e-4db6-a7ea-921c3a0f8775 1114183 0 2021-01-31 00:24:53 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 620a98bf-8bc4-4e39-bb8e-8c2f677ba9cf 0xc003934700 0xc003934701}] [] [{kube-controller-manager Update v1 2021-01-31 00:24:53 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"620a98bf-8bc4-4e39-bb8e-8c2f677ba9cf\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-31 00:24:55 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gjxkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gjxkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gjxkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-01-31 00:24:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2021-01-31 00:24:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 31 00:24:57.772: INFO: Pod "webserver-deployment-795d758f88-6zldq" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-6zldq webserver-deployment-795d758f88- deployment-3903 9acb0689-0e64-4bcc-8fc8-4508f712a15a 1114153 0 2021-01-31 00:24:53 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 620a98bf-8bc4-4e39-bb8e-8c2f677ba9cf 0xc0039348a7 0xc0039348a8}] [] [{kube-controller-manager Update v1 2021-01-31 00:24:53 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"620a98bf-8bc4-4e39-bb8e-8c2f677ba9cf\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-31 00:24:54 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gjxkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gjxkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gjxkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2021-01-31 00:24:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 31 00:24:57.772: INFO: Pod "webserver-deployment-795d758f88-8df6j" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-8df6j webserver-deployment-795d758f88- deployment-3903 6990dbf6-79b0-4cfc-a00d-3596e3d49eaf 1114043 0 2021-01-31 00:24:50 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 620a98bf-8bc4-4e39-bb8e-8c2f677ba9cf 0xc003934a57 0xc003934a58}] [] [{kube-controller-manager Update v1 2021-01-31 00:24:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"620a98bf-8bc4-4e39-bb8e-8c2f677ba9cf\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-31 00:24:50 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gjxkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gjxkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gjxkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-01-31 00:24:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2021-01-31 00:24:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 31 00:24:57.772: INFO: Pod "webserver-deployment-795d758f88-dhmbg" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-dhmbg webserver-deployment-795d758f88- deployment-3903 662d8eaf-d3d8-4498-9f36-7ed3554f48c0 1114167 0 2021-01-31 00:24:53 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 620a98bf-8bc4-4e39-bb8e-8c2f677ba9cf 0xc003934c07 0xc003934c08}] [] [{kube-controller-manager Update v1 2021-01-31 00:24:53 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"620a98bf-8bc4-4e39-bb8e-8c2f677ba9cf\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-31 00:24:54 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gjxkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gjxkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gjxkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2021-01-31 00:24:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 31 00:24:57.773: INFO: Pod "webserver-deployment-795d758f88-j54dq" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-j54dq webserver-deployment-795d758f88- deployment-3903 7754e9ed-b5df-4e00-9a7f-ec4027f220f5 1114191 0 2021-01-31 00:24:50 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 620a98bf-8bc4-4e39-bb8e-8c2f677ba9cf 0xc003934db7 0xc003934db8}] [] [{kube-controller-manager Update v1 2021-01-31 00:24:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"620a98bf-8bc4-4e39-bb8e-8c2f677ba9cf\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-31 00:24:56 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.140\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gjxkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gjxkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gjxkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:50 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.1.140,StartTime:2021-01-31 00:24:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.140,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 31 00:24:57.773: INFO: Pod "webserver-deployment-795d758f88-lj9sg" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-lj9sg webserver-deployment-795d758f88- deployment-3903 3959a353-74ea-49f3-adc5-b15d25975a57 1114063 0 2021-01-31 00:24:50 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 620a98bf-8bc4-4e39-bb8e-8c2f677ba9cf 0xc003934f97 0xc003934f98}] [] [{kube-controller-manager Update v1 2021-01-31 00:24:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"620a98bf-8bc4-4e39-bb8e-8c2f677ba9cf\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-31 00:24:51 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gjxkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gjxkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gjxkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-01-31 00:24:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2021-01-31 00:24:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 31 00:24:57.773: INFO: Pod "webserver-deployment-795d758f88-ls9rm" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-ls9rm webserver-deployment-795d758f88- deployment-3903 ebcf3e72-504d-46c7-8341-719b6aedeca6 1114129 0 2021-01-31 00:24:52 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 620a98bf-8bc4-4e39-bb8e-8c2f677ba9cf 0xc003935157 0xc003935158}] [] [{kube-controller-manager Update v1 2021-01-31 00:24:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"620a98bf-8bc4-4e39-bb8e-8c2f677ba9cf\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-31 00:24:54 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gjxkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gjxkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gjxkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2021-01-31 00:24:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 31 00:24:57.773: INFO: Pod "webserver-deployment-795d758f88-ltcjs" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-ltcjs webserver-deployment-795d758f88- deployment-3903 aadb42aa-9ae2-4fe4-9396-74c08c0a8965 1114059 0 2021-01-31 00:24:50 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 620a98bf-8bc4-4e39-bb8e-8c2f677ba9cf 0xc003935307 0xc003935308}] [] [{kube-controller-manager Update v1 2021-01-31 00:24:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"620a98bf-8bc4-4e39-bb8e-8c2f677ba9cf\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-31 00:24:51 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gjxkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gjxkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gjxkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-01-31 00:24:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2021-01-31 00:24:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 31 00:24:57.773: INFO: Pod "webserver-deployment-795d758f88-rlfbc" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-rlfbc webserver-deployment-795d758f88- deployment-3903 cda9cea3-0d3c-46b4-becc-6752ef9ca1ac 1114169 0 2021-01-31 00:24:53 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 620a98bf-8bc4-4e39-bb8e-8c2f677ba9cf 0xc0039354b7 0xc0039354b8}] [] [{kube-controller-manager Update v1 2021-01-31 00:24:53 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"620a98bf-8bc4-4e39-bb8e-8c2f677ba9cf\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-31 00:24:55 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gjxkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gjxkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gjxkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2021-01-31 00:24:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 31 00:24:57.773: INFO: Pod "webserver-deployment-795d758f88-t5sjm" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-t5sjm webserver-deployment-795d758f88- deployment-3903 e0091c97-e114-4b53-a0ac-db844066a65a 1114164 0 2021-01-31 00:24:53 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 620a98bf-8bc4-4e39-bb8e-8c2f677ba9cf 0xc003935667 0xc003935668}] [] [{kube-controller-manager Update v1 2021-01-31 00:24:53 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"620a98bf-8bc4-4e39-bb8e-8c2f677ba9cf\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-31 00:24:54 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gjxkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gjxkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gjxkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2021-01-31 00:24:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 31 00:24:57.774: INFO: Pod "webserver-deployment-795d758f88-tpsfs" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-tpsfs webserver-deployment-795d758f88- deployment-3903 c95708a3-ddc1-43fe-ae28-4a32bb353eaf 1114171 0 2021-01-31 00:24:53 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 620a98bf-8bc4-4e39-bb8e-8c2f677ba9cf 0xc003935817 0xc003935818}] [] [{kube-controller-manager Update v1 2021-01-31 00:24:53 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"620a98bf-8bc4-4e39-bb8e-8c2f677ba9cf\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-31 00:24:55 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gjxkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gjxkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gjxkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2021-01-31 00:24:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 31 00:24:57.774: INFO: Pod "webserver-deployment-795d758f88-vrkh5" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-vrkh5 webserver-deployment-795d758f88- deployment-3903 2633a62e-632f-488f-b34e-d08f14d79ddb 1114142 0 2021-01-31 00:24:53 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 620a98bf-8bc4-4e39-bb8e-8c2f677ba9cf 0xc0039359c7 0xc0039359c8}] [] [{kube-controller-manager Update v1 2021-01-31 00:24:53 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"620a98bf-8bc4-4e39-bb8e-8c2f677ba9cf\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-31 00:24:54 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gjxkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gjxkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gjxkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2021-01-31 00:24:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 31 00:24:57.774: INFO: Pod "webserver-deployment-795d758f88-xqdzn" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-xqdzn webserver-deployment-795d758f88- deployment-3903 233efe03-6642-480c-8154-233522314deb 1114204 0 2021-01-31 00:24:50 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 620a98bf-8bc4-4e39-bb8e-8c2f677ba9cf 0xc003935b77 0xc003935b78}] [] [{kube-controller-manager Update v1 2021-01-31 00:24:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"620a98bf-8bc4-4e39-bb8e-8c2f677ba9cf\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-31 00:24:57 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.67\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gjxkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gjxkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gjxkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:50 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.2.67,StartTime:2021-01-31 00:24:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.67,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 31 00:24:57.774: INFO: Pod "webserver-deployment-dd94f59b7-4bqkj" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-4bqkj webserver-deployment-dd94f59b7- deployment-3903 e545f9a5-710f-4cec-9bf8-a2b562b28868 1114138 0 2021-01-31 00:24:52 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 1bd5fc24-4605-46c3-b784-5f87165ccf28 0xc003935d57 0xc003935d58}] [] [{kube-controller-manager Update v1 2021-01-31 00:24:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1bd5fc24-4605-46c3-b784-5f87165ccf28\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-31 00:24:54 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gjxkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gjxkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gjxkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2021-01-31 00:24:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 31 00:24:57.774: INFO: Pod "webserver-deployment-dd94f59b7-6wg2f" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-6wg2f webserver-deployment-dd94f59b7- deployment-3903 91e74acf-1f72-4d60-947e-35fff5592711 1114173 0 2021-01-31 00:24:53 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 1bd5fc24-4605-46c3-b784-5f87165ccf28 0xc003935ee7 0xc003935ee8}] [] [{kube-controller-manager Update v1 2021-01-31 00:24:53 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1bd5fc24-4605-46c3-b784-5f87165ccf28\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-31 00:24:55 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gjxkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gjxkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gjxkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2021-01-31 00:24:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 31 00:24:57.775: INFO: Pod "webserver-deployment-dd94f59b7-79pvf" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-79pvf webserver-deployment-dd94f59b7- deployment-3903 4fa016ec-8063-427a-88b9-92a8598f748d 1113984 0 2021-01-31 00:24:36 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 1bd5fc24-4605-46c3-b784-5f87165ccf28 0xc0041aa077 0xc0041aa078}] [] [{kube-controller-manager Update v1 2021-01-31 00:24:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1bd5fc24-4605-46c3-b784-5f87165ccf28\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-31 00:24:47 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.62\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gjxkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gjxkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gjxkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:36 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.2.62,StartTime:2021-01-31 00:24:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-31 00:24:46 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5ae43c1f803c3873386d32e183f8d2f5544a26b857fcce1b36b6c4b8283136e6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.62,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 31 00:24:57.775: INFO: Pod "webserver-deployment-dd94f59b7-96htl" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-96htl webserver-deployment-dd94f59b7- deployment-3903 fee2fa8f-7d53-4657-9a1f-f137e5efc706 1114161 0 2021-01-31 00:24:53 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 1bd5fc24-4605-46c3-b784-5f87165ccf28 0xc0041aa227 0xc0041aa228}] [] [{kube-controller-manager Update v1 2021-01-31 00:24:53 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1bd5fc24-4605-46c3-b784-5f87165ccf28\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-31 00:24:54 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gjxkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gjxkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gjxkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2021-01-31 00:24:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 31 00:24:57.775: INFO: Pod "webserver-deployment-dd94f59b7-9qh64" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-9qh64 webserver-deployment-dd94f59b7- deployment-3903 ffee3784-24af-426c-987d-6ca880e32daa 1114131 0 2021-01-31 00:24:53 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 1bd5fc24-4605-46c3-b784-5f87165ccf28 0xc0041aa3b7 0xc0041aa3b8}] [] [{kube-controller-manager Update v1 2021-01-31 00:24:53 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1bd5fc24-4605-46c3-b784-5f87165ccf28\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-31 00:24:54 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gjxkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gjxkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gjxkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2021-01-31 00:24:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 31 00:24:57.775: INFO: Pod "webserver-deployment-dd94f59b7-bs8hr" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-bs8hr webserver-deployment-dd94f59b7- deployment-3903 41f36c7a-94e5-4241-a61e-583bc728e383 1113993 0 2021-01-31 00:24:36 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 1bd5fc24-4605-46c3-b784-5f87165ccf28 0xc0041aa547 0xc0041aa548}] [] [{kube-controller-manager Update v1 2021-01-31 00:24:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1bd5fc24-4605-46c3-b784-5f87165ccf28\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-31 00:24:48 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.66\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gjxkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gjxkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gjxkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:36 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.2.66,StartTime:2021-01-31 00:24:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-31 00:24:46 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://9a2ce368c3992a7cd5fec895f769a59a87dcfe49c3a36792e36c6a987e66565f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.66,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 31 00:24:57.775: INFO: Pod "webserver-deployment-dd94f59b7-dtn2w" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-dtn2w webserver-deployment-dd94f59b7- deployment-3903 f70aa54c-d131-4a91-bf4c-b88ed73a57a1 1113960 0 2021-01-31 00:24:36 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 1bd5fc24-4605-46c3-b784-5f87165ccf28 0xc0041aa6f7 0xc0041aa6f8}] [] [{kube-controller-manager Update v1 2021-01-31 00:24:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1bd5fc24-4605-46c3-b784-5f87165ccf28\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-31 00:24:45 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.61\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gjxkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gjxkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gjxkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:36 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.2.61,StartTime:2021-01-31 00:24:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-31 00:24:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2e8adb52d74ff41a7b0290543f6900468265b496fa874244798b3d0d5a144739,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.61,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 31 00:24:57.776: INFO: Pod "webserver-deployment-dd94f59b7-dvgwx" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-dvgwx webserver-deployment-dd94f59b7- deployment-3903 ebc63af4-b149-4be2-a110-49fcf78dd62f 1114107 0 2021-01-31 00:24:52 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 1bd5fc24-4605-46c3-b784-5f87165ccf28 0xc0041aa8a7 0xc0041aa8a8}] [] [{kube-controller-manager Update v1 2021-01-31 00:24:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1bd5fc24-4605-46c3-b784-5f87165ccf28\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-31 00:24:53 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gjxkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gjxkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gjxkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2021-01-31 00:24:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 31 00:24:57.776: INFO: Pod "webserver-deployment-dd94f59b7-gcgr4" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-gcgr4 webserver-deployment-dd94f59b7- deployment-3903 7cba349d-a2a1-4e5a-a424-3ba10e9e5e3b 1114179 0 2021-01-31 00:24:53 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 1bd5fc24-4605-46c3-b784-5f87165ccf28 0xc0041aaa37 0xc0041aaa38}] [] [{kube-controller-manager Update v1 2021-01-31 00:24:53 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1bd5fc24-4605-46c3-b784-5f87165ccf28\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-31 00:24:55 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gjxkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gjxkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gjxkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:54 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2021-01-31 00:24:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 31 00:24:57.776: INFO: Pod "webserver-deployment-dd94f59b7-jdqhp" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-jdqhp webserver-deployment-dd94f59b7- deployment-3903 0baa29f4-4b9e-44e6-b699-8490f6560a64 1114118 0 2021-01-31 00:24:52 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 1bd5fc24-4605-46c3-b784-5f87165ccf28 0xc0041aabc7 0xc0041aabc8}] [] [{kube-controller-manager Update v1 2021-01-31 00:24:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1bd5fc24-4605-46c3-b784-5f87165ccf28\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-31 00:24:53 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gjxkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gjxkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gjxkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2021-01-31 00:24:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 31 00:24:57.776: INFO: Pod "webserver-deployment-dd94f59b7-jjc98" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-jjc98 webserver-deployment-dd94f59b7- deployment-3903 709feeee-d865-41aa-a018-933f12e3a10c 1113933 0 2021-01-31 00:24:36 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 1bd5fc24-4605-46c3-b784-5f87165ccf28 0xc0041aad57 0xc0041aad58}] [] [{kube-controller-manager Update v1 2021-01-31 00:24:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1bd5fc24-4605-46c3-b784-5f87165ccf28\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-31 00:24:43 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.138\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gjxkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gjxkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gjxkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:36 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.1.138,StartTime:2021-01-31 00:24:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-31 00:24:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://fb37f48a4aba0a36b0102cbc0f7ecbdd0b6f0f5c5b0e57fd249174b5001b4979,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.138,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 31 00:24:57.776: INFO: Pod "webserver-deployment-dd94f59b7-jm5w9" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-jm5w9 webserver-deployment-dd94f59b7- deployment-3903 d9ff9a12-57ed-44ec-8a4c-0edacb95222e 1114176 0 2021-01-31 00:24:53 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 1bd5fc24-4605-46c3-b784-5f87165ccf28 0xc0041aaf17 0xc0041aaf18}] [] [{kube-controller-manager Update v1 2021-01-31 00:24:53 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1bd5fc24-4605-46c3-b784-5f87165ccf28\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-31 00:24:55 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gjxkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gjxkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gjxkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2021-01-31 00:24:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 31 00:24:57.777: INFO: Pod "webserver-deployment-dd94f59b7-lxspv" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-lxspv webserver-deployment-dd94f59b7- deployment-3903 e5ed1fa2-5304-4018-aca6-7bec23ef8c73 1113980 0 2021-01-31 00:24:36 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 1bd5fc24-4605-46c3-b784-5f87165ccf28 0xc0041ab0b7 0xc0041ab0b8}] [] [{kube-controller-manager Update v1 2021-01-31 00:24:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1bd5fc24-4605-46c3-b784-5f87165ccf28\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-31 00:24:47 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.64\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gjxkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gjxkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gjxkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:36 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.2.64,StartTime:2021-01-31 00:24:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-31 00:24:46 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://19ae1991bc5945db10d0d62f6e560ab24c1679c642771d3851eb63d4f83a580d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.64,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 31 00:24:57.777: INFO: Pod "webserver-deployment-dd94f59b7-ndglf" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-ndglf webserver-deployment-dd94f59b7- deployment-3903 7fdc20ed-be9d-410d-a529-3bd88feaac3b 1113942 0 2021-01-31 00:24:36 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 1bd5fc24-4605-46c3-b784-5f87165ccf28 0xc0041ab267 0xc0041ab268}] [] [{kube-controller-manager Update v1 2021-01-31 00:24:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1bd5fc24-4605-46c3-b784-5f87165ccf28\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-31 00:24:43 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.137\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gjxkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gjxkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gjxkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:36 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.1.137,StartTime:2021-01-31 00:24:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-31 00:24:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://3cb787bd47a053043e6d221fa0ff7b55ed0d45f2d72a0d229d77700f5bc083be,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.137,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 31 00:24:57.777: INFO: Pod "webserver-deployment-dd94f59b7-nld89" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-nld89 webserver-deployment-dd94f59b7- deployment-3903 5796c536-da24-453a-80ab-73ebb306d276 1113927 0 2021-01-31 00:24:36 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 1bd5fc24-4605-46c3-b784-5f87165ccf28 0xc0041ab417 0xc0041ab418}] [] [{kube-controller-manager Update v1 2021-01-31 00:24:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1bd5fc24-4605-46c3-b784-5f87165ccf28\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-31 00:24:42 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.60\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gjxkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gjxkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gjxkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:36 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.2.60,StartTime:2021-01-31 00:24:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-31 00:24:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://16b6ef6ba06e13490ab3b35a1695d8ff261dd93ac9163b6358b4fd553c382d81,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.60,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 31 00:24:57.778: INFO: Pod "webserver-deployment-dd94f59b7-nwjg4" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-nwjg4 webserver-deployment-dd94f59b7- deployment-3903 ae53b9f6-b358-4f36-8b3c-10a437b2a8f0 1113948 0 2021-01-31 00:24:36 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 1bd5fc24-4605-46c3-b784-5f87165ccf28 0xc0041ab6c7 0xc0041ab6c8}] [] [{kube-controller-manager Update v1 2021-01-31 00:24:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1bd5fc24-4605-46c3-b784-5f87165ccf28\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-31 00:24:44 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.139\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gjxkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gjxkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gjxkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:36 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.1.139,StartTime:2021-01-31 00:24:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-31 00:24:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://89fa22f3680268043e4094acdfedf1405bbf6457bca2594542c059e365659722,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.139,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 31 00:24:57.778: INFO: Pod "webserver-deployment-dd94f59b7-pnf24" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-pnf24 webserver-deployment-dd94f59b7- deployment-3903 66db989a-4ca7-4645-bc27-b7f530a4135a 1114200 0 2021-01-31 00:24:53 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 1bd5fc24-4605-46c3-b784-5f87165ccf28 0xc0041abdb7 0xc0041abdb8}] [] [{kube-controller-manager Update v1 2021-01-31 00:24:53 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1bd5fc24-4605-46c3-b784-5f87165ccf28\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-31 00:24:57 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gjxkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gjxkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gjxkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:54 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2021-01-31 00:24:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 31 00:24:57.778: INFO: Pod "webserver-deployment-dd94f59b7-s6dlh" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-s6dlh webserver-deployment-dd94f59b7- deployment-3903 59a5f841-6a48-47ac-b888-d596f7f06901 1114151 0 2021-01-31 00:24:53 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 1bd5fc24-4605-46c3-b784-5f87165ccf28 0xc001cd8137 0xc001cd8138}] [] [{kube-controller-manager Update v1 2021-01-31 00:24:53 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1bd5fc24-4605-46c3-b784-5f87165ccf28\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-31 00:24:54 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gjxkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gjxkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gjxkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2021-01-31 00:24:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 31 00:24:57.778: INFO: Pod "webserver-deployment-dd94f59b7-sgqk6" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-sgqk6 webserver-deployment-dd94f59b7- deployment-3903 a08adc45-f6e6-4854-8a03-2499a2893dbf 1114190 0 2021-01-31 00:24:53 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 1bd5fc24-4605-46c3-b784-5f87165ccf28 0xc001cd82d7 0xc001cd82d8}] [] [{kube-controller-manager Update v1 2021-01-31 00:24:53 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1bd5fc24-4605-46c3-b784-5f87165ccf28\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-31 00:24:56 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gjxkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gjxkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gjxkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:54 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2021-01-31 00:24:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 31 00:24:57.778: INFO: Pod "webserver-deployment-dd94f59b7-zvx8w" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-zvx8w webserver-deployment-dd94f59b7- deployment-3903 c9e83227-a8c3-4ebd-a56f-a1fc57b8adac 1114147 0 2021-01-31 00:24:53 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 1bd5fc24-4605-46c3-b784-5f87165ccf28 0xc001cd8467 0xc001cd8468}] [] [{kube-controller-manager Update v1 2021-01-31 00:24:53 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1bd5fc24-4605-46c3-b784-5f87165ccf28\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-31 00:24:54 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gjxkv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gjxkv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gjxkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 00:24:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2021-01-31 00:24:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:24:57.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3903" for this suite. • [SLOW TEST:22.262 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":311,"completed":48,"skipped":1042,"failed":0} [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:24:58.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Jan 31 00:24:59.817: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:25:11.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1098" for this suite. 
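------------------------------
The CustomResourceDefinition spec above asserts only that a LIST against the apiextensions.k8s.io API group succeeds for a cluster-admin client. A minimal client-go sketch of that same operation follows; it is an illustration of what the spec exercises, not the suite's own code. The kubeconfig path is a stand-in for the /root/.kube/config the run loads, and error handling is deliberately simplified.

package main

import (
	"context"
	"fmt"
	"path/filepath"

	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// The suite loads /root/.kube/config; default to $HOME/.kube/config here.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}

	// CRDs are served by the apiextensions.k8s.io group, so they need the
	// apiextensions clientset rather than the core kubernetes clientset.
	client, err := apiextensionsclient.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// The LIST call that "listing custom resource definition objects works"
	// requires to succeed.
	crds, err := client.ApiextensionsV1().CustomResourceDefinitions().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, crd := range crds.Items {
		fmt.Println(crd.Name)
	}
}
------------------------------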
• [SLOW TEST:12.951 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":311,"completed":49,"skipped":1042,"failed":0} [sig-auth] ServiceAccounts should mount projected service account token [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:25:11.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount projected service account token [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test service account token: Jan 31 00:25:11.740: INFO: Waiting up to 5m0s for pod "test-pod-c5733e95-aaa7-4b1f-8f0c-85060abf065b" in namespace "svcaccounts-8276" to be "Succeeded or Failed" Jan 31 00:25:11.797: INFO: Pod "test-pod-c5733e95-aaa7-4b1f-8f0c-85060abf065b": Phase="Pending", Reason="", readiness=false. Elapsed: 57.496155ms Jan 31 00:25:13.944: INFO: Pod "test-pod-c5733e95-aaa7-4b1f-8f0c-85060abf065b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.2048366s Jan 31 00:25:16.277: INFO: Pod "test-pod-c5733e95-aaa7-4b1f-8f0c-85060abf065b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.537796778s Jan 31 00:25:18.282: INFO: Pod "test-pod-c5733e95-aaa7-4b1f-8f0c-85060abf065b": Phase="Running", Reason="", readiness=true. Elapsed: 6.542644307s Jan 31 00:25:20.287: INFO: Pod "test-pod-c5733e95-aaa7-4b1f-8f0c-85060abf065b": Phase="Running", Reason="", readiness=true. Elapsed: 8.54777206s Jan 31 00:25:22.291: INFO: Pod "test-pod-c5733e95-aaa7-4b1f-8f0c-85060abf065b": Phase="Running", Reason="", readiness=true. Elapsed: 10.551489326s Jan 31 00:25:24.294: INFO: Pod "test-pod-c5733e95-aaa7-4b1f-8f0c-85060abf065b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.554157904s STEP: Saw pod success Jan 31 00:25:24.294: INFO: Pod "test-pod-c5733e95-aaa7-4b1f-8f0c-85060abf065b" satisfied condition "Succeeded or Failed" Jan 31 00:25:24.296: INFO: Trying to get logs from node latest-worker2 pod test-pod-c5733e95-aaa7-4b1f-8f0c-85060abf065b container agnhost-container: STEP: delete the pod Jan 31 00:25:24.506: INFO: Waiting for pod test-pod-c5733e95-aaa7-4b1f-8f0c-85060abf065b to disappear Jan 31 00:25:24.592: INFO: Pod test-pod-c5733e95-aaa7-4b1f-8f0c-85060abf065b no longer exists [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:25:24.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8276" for this suite. • [SLOW TEST:13.090 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount projected service account token [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":311,"completed":50,"skipped":1042,"failed":0} SSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:25:24.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Performing setup for networking test in namespace pod-network-test-4203 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 31 00:25:25.124: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jan 31 00:25:25.324: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 31 00:25:27.456: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 31 00:25:29.432: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 31 00:25:31.361: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 31 00:25:33.343: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 31 00:25:35.337: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 31 00:25:37.327: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 31 00:25:39.328: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 31 00:25:41.342: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 31 00:25:43.329: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 31 00:25:45.329: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 31 00:25:47.330: INFO: The status of 
Pod netserver-0 is Running (Ready = true) Jan 31 00:25:47.337: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jan 31 00:25:51.370: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Jan 31 00:25:51.370: INFO: Breadth first check of 10.244.2.80 on host 172.18.0.14... Jan 31 00:25:51.374: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.81:9080/dial?request=hostname&protocol=http&host=10.244.2.80&port=8080&tries=1'] Namespace:pod-network-test-4203 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 31 00:25:51.374: INFO: >>> kubeConfig: /root/.kube/config I0131 00:25:51.409865 7 log.go:181] (0xc002fa80b0) (0xc003712dc0) Create stream I0131 00:25:51.409902 7 log.go:181] (0xc002fa80b0) (0xc003712dc0) Stream added, broadcasting: 1 I0131 00:25:51.412079 7 log.go:181] (0xc002fa80b0) Reply frame received for 1 I0131 00:25:51.412106 7 log.go:181] (0xc002fa80b0) (0xc003712e60) Create stream I0131 00:25:51.412118 7 log.go:181] (0xc002fa80b0) (0xc003712e60) Stream added, broadcasting: 3 I0131 00:25:51.413391 7 log.go:181] (0xc002fa80b0) Reply frame received for 3 I0131 00:25:51.413457 7 log.go:181] (0xc002fa80b0) (0xc0011c6e60) Create stream I0131 00:25:51.413476 7 log.go:181] (0xc002fa80b0) (0xc0011c6e60) Stream added, broadcasting: 5 I0131 00:25:51.415434 7 log.go:181] (0xc002fa80b0) Reply frame received for 5 I0131 00:25:51.504823 7 log.go:181] (0xc002fa80b0) Data frame received for 3 I0131 00:25:51.504913 7 log.go:181] (0xc003712e60) (3) Data frame handling I0131 00:25:51.504928 7 log.go:181] (0xc003712e60) (3) Data frame sent I0131 00:25:51.505388 7 log.go:181] (0xc002fa80b0) Data frame received for 3 I0131 00:25:51.505413 7 log.go:181] (0xc003712e60) (3) Data frame handling I0131 00:25:51.505657 7 log.go:181] (0xc002fa80b0) Data frame received for 5 I0131 00:25:51.505668 7 log.go:181] (0xc0011c6e60) (5) Data frame handling I0131 00:25:51.507166 7 log.go:181] (0xc002fa80b0) Data frame received for 1 I0131 00:25:51.507183 7 log.go:181] (0xc003712dc0) (1) Data frame handling I0131 00:25:51.507201 7 log.go:181] (0xc003712dc0) (1) Data frame sent I0131 00:25:51.507224 7 log.go:181] (0xc002fa80b0) (0xc003712dc0) Stream removed, broadcasting: 1 I0131 00:25:51.507245 7 log.go:181] (0xc002fa80b0) Go away received I0131 00:25:51.507324 7 log.go:181] (0xc002fa80b0) (0xc003712dc0) Stream removed, broadcasting: 1 I0131 00:25:51.507338 7 log.go:181] (0xc002fa80b0) (0xc003712e60) Stream removed, broadcasting: 3 I0131 00:25:51.507348 7 log.go:181] (0xc002fa80b0) (0xc0011c6e60) Stream removed, broadcasting: 5 Jan 31 00:25:51.507: INFO: Waiting for responses: map[] Jan 31 00:25:51.507: INFO: reached 10.244.2.80 after 0/1 tries Jan 31 00:25:51.507: INFO: Breadth first check of 10.244.1.153 on host 172.18.0.16... 
Jan 31 00:25:51.516: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.81:9080/dial?request=hostname&protocol=http&host=10.244.1.153&port=8080&tries=1'] Namespace:pod-network-test-4203 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 31 00:25:51.517: INFO: >>> kubeConfig: /root/.kube/config I0131 00:25:51.543998 7 log.go:181] (0xc002fa8840) (0xc003713180) Create stream I0131 00:25:51.544025 7 log.go:181] (0xc002fa8840) (0xc003713180) Stream added, broadcasting: 1 I0131 00:25:51.546367 7 log.go:181] (0xc002fa8840) Reply frame received for 1 I0131 00:25:51.546409 7 log.go:181] (0xc002fa8840) (0xc002cfa820) Create stream I0131 00:25:51.546421 7 log.go:181] (0xc002fa8840) (0xc002cfa820) Stream added, broadcasting: 3 I0131 00:25:51.547289 7 log.go:181] (0xc002fa8840) Reply frame received for 3 I0131 00:25:51.547326 7 log.go:181] (0xc002fa8840) (0xc0011c7040) Create stream I0131 00:25:51.547342 7 log.go:181] (0xc002fa8840) (0xc0011c7040) Stream added, broadcasting: 5 I0131 00:25:51.548201 7 log.go:181] (0xc002fa8840) Reply frame received for 5 I0131 00:25:51.616236 7 log.go:181] (0xc002fa8840) Data frame received for 3 I0131 00:25:51.616259 7 log.go:181] (0xc002cfa820) (3) Data frame handling I0131 00:25:51.616275 7 log.go:181] (0xc002cfa820) (3) Data frame sent I0131 00:25:51.616775 7 log.go:181] (0xc002fa8840) Data frame received for 3 I0131 00:25:51.616813 7 log.go:181] (0xc002cfa820) (3) Data frame handling I0131 00:25:51.616824 7 log.go:181] (0xc002fa8840) Data frame received for 5 I0131 00:25:51.616910 7 log.go:181] (0xc0011c7040) (5) Data frame handling I0131 00:25:51.618539 7 log.go:181] (0xc002fa8840) Data frame received for 1 I0131 00:25:51.618565 7 log.go:181] (0xc003713180) (1) Data frame handling I0131 00:25:51.618583 7 log.go:181] (0xc003713180) (1) Data frame sent I0131 00:25:51.618607 7 log.go:181] (0xc002fa8840) (0xc003713180) Stream removed, broadcasting: 1 I0131 00:25:51.618632 7 log.go:181] (0xc002fa8840) Go away received I0131 00:25:51.618745 7 log.go:181] (0xc002fa8840) (0xc003713180) Stream removed, broadcasting: 1 I0131 00:25:51.618763 7 log.go:181] (0xc002fa8840) (0xc002cfa820) Stream removed, broadcasting: 3 I0131 00:25:51.618773 7 log.go:181] (0xc002fa8840) (0xc0011c7040) Stream removed, broadcasting: 5 Jan 31 00:25:51.618: INFO: Waiting for responses: map[] Jan 31 00:25:51.618: INFO: reached 10.244.1.153 after 0/1 tries Jan 31 00:25:51.618: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:25:51.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4203" for this suite. 
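Note: the dial check driving the ExecWithOptions calls above can be replayed by hand against the same agnhost test pod. A sketch using the values from this run (pod names and IPs will differ on any other cluster):

$ kubectl exec -n pod-network-test-4203 test-container-pod -- /bin/sh -c \
    "curl -g -q -s 'http://10.244.2.81:9080/dial?request=hostname&protocol=http&host=10.244.2.80&port=8080&tries=1'"

The netexec /dial endpoint fans the request out to the given host:port and returns a JSON map of the hostnames that answered; "Waiting for responses: map[]" in the log means every expected hostname has been seen.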
• [SLOW TEST:27.048 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:27 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:30 should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":311,"completed":51,"skipped":1048,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:25:51.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Jan 31 00:25:51.709: INFO: >>> kubeConfig: /root/.kube/config Jan 31 00:25:55.280: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:26:07.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5018" for this suite. 
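Note: the OpenAPI assertion above can be spot-checked on a live cluster by pulling the published document and listing the CRD definitions. A sketch (the "example.com" filter matches the test groups this suite generates; adjust for other CRDs):

$ kubectl get --raw /openapi/v2 | python3 -c 'import sys, json; spec = json.load(sys.stdin); print("\n".join(name for name in spec["definitions"] if "example.com" in name))'

Two CRDs of the same group and version but different kinds should each contribute their own definition entry.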
• [SLOW TEST:15.912 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":311,"completed":52,"skipped":1050,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:26:07.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating secret with name secret-test-49c3699c-afa2-4a04-8c67-b050a72b5de1 STEP: Creating a pod to test consume secrets Jan 31 00:26:07.759: INFO: Waiting up to 5m0s for pod "pod-secrets-556e465a-7c35-47a6-b249-6bca318930a6" in namespace "secrets-6669" to be "Succeeded or Failed" Jan 31 00:26:07.798: INFO: Pod "pod-secrets-556e465a-7c35-47a6-b249-6bca318930a6": Phase="Pending", Reason="", readiness=false. Elapsed: 38.579558ms Jan 31 00:26:09.864: INFO: Pod "pod-secrets-556e465a-7c35-47a6-b249-6bca318930a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104637888s Jan 31 00:26:11.868: INFO: Pod "pod-secrets-556e465a-7c35-47a6-b249-6bca318930a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.108279806s STEP: Saw pod success Jan 31 00:26:11.868: INFO: Pod "pod-secrets-556e465a-7c35-47a6-b249-6bca318930a6" satisfied condition "Succeeded or Failed" Jan 31 00:26:11.871: INFO: Trying to get logs from node latest-worker pod pod-secrets-556e465a-7c35-47a6-b249-6bca318930a6 container secret-volume-test: STEP: delete the pod Jan 31 00:26:11.918: INFO: Waiting for pod pod-secrets-556e465a-7c35-47a6-b249-6bca318930a6 to disappear Jan 31 00:26:11.927: INFO: Pod pod-secrets-556e465a-7c35-47a6-b249-6bca318930a6 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:26:11.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6669" for this suite. STEP: Destroying namespace "secret-namespace-3309" for this suite. 
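Note: a minimal sketch of the setup this test exercises, i.e. two secrets sharing one name in different namespaces, with the pod only ever seeing the copy from its own namespace (all names and the image tag are illustrative):

$ kubectl create namespace ns-a && kubectl create namespace ns-b
$ kubectl create secret generic shared-name --from-literal=data-1=value-a -n ns-a
$ kubectl create secret generic shared-name --from-literal=data-1=value-b -n ns-b
$ cat <<'EOF' | kubectl apply -n ns-a -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: k8s.gcr.io/e2e-test-images/agnhost:2.21   # assumed era-appropriate tag
    args: ["mounttest", "--file_content=/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: shared-name
EOF

The pod's log should show value-a and never value-b, regardless of the identically named secret in ns-b.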
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":311,"completed":53,"skipped":1054,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:26:11.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating configMap with name projected-configmap-test-volume-8b87e92e-502f-4770-a84f-a1d950e36989 STEP: Creating a pod to test consume configMaps Jan 31 00:26:12.057: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2cd1b630-251c-40c7-b972-b50fb1721214" in namespace "projected-4040" to be "Succeeded or Failed" Jan 31 00:26:12.066: INFO: Pod "pod-projected-configmaps-2cd1b630-251c-40c7-b972-b50fb1721214": Phase="Pending", Reason="", readiness=false. Elapsed: 8.930702ms Jan 31 00:26:14.071: INFO: Pod "pod-projected-configmaps-2cd1b630-251c-40c7-b972-b50fb1721214": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013740218s Jan 31 00:26:16.077: INFO: Pod "pod-projected-configmaps-2cd1b630-251c-40c7-b972-b50fb1721214": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019669387s STEP: Saw pod success Jan 31 00:26:16.077: INFO: Pod "pod-projected-configmaps-2cd1b630-251c-40c7-b972-b50fb1721214" satisfied condition "Succeeded or Failed" Jan 31 00:26:16.079: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-2cd1b630-251c-40c7-b972-b50fb1721214 container agnhost-container: STEP: delete the pod Jan 31 00:26:16.116: INFO: Waiting for pod pod-projected-configmaps-2cd1b630-251c-40c7-b972-b50fb1721214 to disappear Jan 31 00:26:16.151: INFO: Pod pod-projected-configmaps-2cd1b630-251c-40c7-b972-b50fb1721214 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:26:16.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4040" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":54,"skipped":1071,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:26:16.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating configMap with name configmap-test-volume-map-873dbc76-2879-43dc-a2f3-e0c861b79fdc STEP: Creating a pod to test consume configMaps Jan 31 00:26:16.230: INFO: Waiting up to 5m0s for pod "pod-configmaps-6458bf88-c679-4d43-90f4-c1dfff810053" in namespace "configmap-2316" to be "Succeeded or Failed" Jan 31 00:26:16.234: INFO: Pod "pod-configmaps-6458bf88-c679-4d43-90f4-c1dfff810053": Phase="Pending", Reason="", readiness=false. Elapsed: 4.653721ms Jan 31 00:26:18.240: INFO: Pod "pod-configmaps-6458bf88-c679-4d43-90f4-c1dfff810053": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010474394s Jan 31 00:26:20.244: INFO: Pod "pod-configmaps-6458bf88-c679-4d43-90f4-c1dfff810053": Phase="Running", Reason="", readiness=true. Elapsed: 4.014758635s Jan 31 00:26:22.249: INFO: Pod "pod-configmaps-6458bf88-c679-4d43-90f4-c1dfff810053": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.019621005s STEP: Saw pod success Jan 31 00:26:22.249: INFO: Pod "pod-configmaps-6458bf88-c679-4d43-90f4-c1dfff810053" satisfied condition "Succeeded or Failed" Jan 31 00:26:22.252: INFO: Trying to get logs from node latest-worker pod pod-configmaps-6458bf88-c679-4d43-90f4-c1dfff810053 container agnhost-container: STEP: delete the pod Jan 31 00:26:22.280: INFO: Waiting for pod pod-configmaps-6458bf88-c679-4d43-90f4-c1dfff810053 to disappear Jan 31 00:26:22.292: INFO: Pod pod-configmaps-6458bf88-c679-4d43-90f4-c1dfff810053 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:26:22.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2316" for this suite. 
• [SLOW TEST:6.140 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":55,"skipped":1091,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:26:22.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jan 31 00:26:22.866: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Jan 31 00:26:24.878: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747649582, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747649582, loc:(*time.Location)(0x79bd420)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747649582, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747649582, loc:(*time.Location)(0x79bd420)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-7d6697c5b7\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 31 00:26:27.902: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Jan 31 00:26:27.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] 
CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:26:29.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-4252" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:6.975 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":311,"completed":56,"skipped":1109,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:26:29.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0131 00:27:09.547680 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Jan 31 00:28:11.568: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. 
Jan 31 00:28:11.568: INFO: Deleting pod "simpletest.rc-6vcxk" in namespace "gc-4530" Jan 31 00:28:11.632: INFO: Deleting pod "simpletest.rc-b7stv" in namespace "gc-4530" Jan 31 00:28:11.661: INFO: Deleting pod "simpletest.rc-bgl6b" in namespace "gc-4530" Jan 31 00:28:11.778: INFO: Deleting pod "simpletest.rc-hrhs4" in namespace "gc-4530" Jan 31 00:28:12.111: INFO: Deleting pod "simpletest.rc-lctwq" in namespace "gc-4530" Jan 31 00:28:12.291: INFO: Deleting pod "simpletest.rc-q2mqt" in namespace "gc-4530" Jan 31 00:28:12.681: INFO: Deleting pod "simpletest.rc-qjjcr" in namespace "gc-4530" Jan 31 00:28:12.873: INFO: Deleting pod "simpletest.rc-t9h6t" in namespace "gc-4530" Jan 31 00:28:13.099: INFO: Deleting pod "simpletest.rc-wnbh7" in namespace "gc-4530" Jan 31 00:28:13.280: INFO: Deleting pod "simpletest.rc-xqmjk" in namespace "gc-4530" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:28:13.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4530" for this suite. • [SLOW TEST:104.388 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":311,"completed":57,"skipped":1126,"failed":0} SSSSS ------------------------------ [sig-network] Ingress API should support creating Ingress API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:28:13.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingress STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating Ingress API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Jan 31 00:28:13.890: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Jan 31 00:28:13.949: INFO: starting watch STEP: patching STEP: updating Jan 31 00:28:14.170: INFO: waiting for watch events with expected annotations Jan 31 00:28:14.170: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:28:14.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingress-2866" for this suite. 
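Note: the orphaning behaviour checked by the garbage collector test above maps onto orphan delete propagation. A sketch with a recent kubectl (names illustrative; the --cascade=orphan spelling needs kubectl v1.20+, older clients use --cascade=false):

$ kubectl delete rc simpletest.rc -n gc-4530 --cascade=orphan
$ kubectl get pods -n gc-4530   # the simpletest.rc-* pods survive, with their ownerReferences cleared

That survival is why the test has to delete the ten leftover pods itself before tearing down the namespace.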
•{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":311,"completed":58,"skipped":1131,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:28:14.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Jan 31 00:28:18.829: INFO: Deleting pod "var-expansion-1c6dbf07-1311-4ebd-9a11-6837a1f72950" in namespace "var-expansion-8688" Jan 31 00:28:18.834: INFO: Wait up to 5m0s for pod "var-expansion-1c6dbf07-1311-4ebd-9a11-6837a1f72950" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:28:52.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8688" for this suite. • [SLOW TEST:38.575 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":311,"completed":59,"skipped":1141,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:28:52.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating configMap with name projected-configmap-test-volume-cdf2dc90-60c2-458d-b951-a057a07b67fe STEP: Creating a pod to test consume configMaps Jan 31 00:28:53.028: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f22009a0-286e-41ec-9a9b-8e6bebb2fc74" in namespace "projected-2775" to be "Succeeded or Failed" Jan 31 00:28:53.060: INFO: Pod "pod-projected-configmaps-f22009a0-286e-41ec-9a9b-8e6bebb2fc74": Phase="Pending", Reason="", 
readiness=false. Elapsed: 32.211641ms Jan 31 00:28:55.387: INFO: Pod "pod-projected-configmaps-f22009a0-286e-41ec-9a9b-8e6bebb2fc74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.359328162s Jan 31 00:28:57.391: INFO: Pod "pod-projected-configmaps-f22009a0-286e-41ec-9a9b-8e6bebb2fc74": Phase="Pending", Reason="", readiness=false. Elapsed: 4.363631084s Jan 31 00:28:59.397: INFO: Pod "pod-projected-configmaps-f22009a0-286e-41ec-9a9b-8e6bebb2fc74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.369623437s STEP: Saw pod success Jan 31 00:28:59.397: INFO: Pod "pod-projected-configmaps-f22009a0-286e-41ec-9a9b-8e6bebb2fc74" satisfied condition "Succeeded or Failed" Jan 31 00:28:59.400: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-f22009a0-286e-41ec-9a9b-8e6bebb2fc74 container projected-configmap-volume-test: STEP: delete the pod Jan 31 00:28:59.460: INFO: Waiting for pod pod-projected-configmaps-f22009a0-286e-41ec-9a9b-8e6bebb2fc74 to disappear Jan 31 00:28:59.499: INFO: Pod pod-projected-configmaps-f22009a0-286e-41ec-9a9b-8e6bebb2fc74 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:28:59.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2775" for this suite. • [SLOW TEST:6.631 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":311,"completed":60,"skipped":1149,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:28:59.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating secret with name secret-test-3655cc8d-d753-4224-af2d-822c650d799e STEP: Creating a pod to test consume secrets Jan 31 00:28:59.593: INFO: Waiting up to 5m0s for pod "pod-secrets-fad26463-cc14-47c9-83e5-d71c17a0f205" in namespace "secrets-5417" to be "Succeeded or Failed" Jan 31 00:28:59.638: INFO: Pod "pod-secrets-fad26463-cc14-47c9-83e5-d71c17a0f205": Phase="Pending", Reason="", readiness=false. Elapsed: 44.641943ms Jan 31 00:29:01.643: INFO: Pod "pod-secrets-fad26463-cc14-47c9-83e5-d71c17a0f205": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.049960611s Jan 31 00:29:03.648: INFO: Pod "pod-secrets-fad26463-cc14-47c9-83e5-d71c17a0f205": Phase="Running", Reason="", readiness=true. Elapsed: 4.054574108s Jan 31 00:29:05.653: INFO: Pod "pod-secrets-fad26463-cc14-47c9-83e5-d71c17a0f205": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.059783474s STEP: Saw pod success Jan 31 00:29:05.653: INFO: Pod "pod-secrets-fad26463-cc14-47c9-83e5-d71c17a0f205" satisfied condition "Succeeded or Failed" Jan 31 00:29:05.657: INFO: Trying to get logs from node latest-worker pod pod-secrets-fad26463-cc14-47c9-83e5-d71c17a0f205 container secret-env-test: STEP: delete the pod Jan 31 00:29:05.690: INFO: Waiting for pod pod-secrets-fad26463-cc14-47c9-83e5-d71c17a0f205 to disappear Jan 31 00:29:05.704: INFO: Pod pod-secrets-fad26463-cc14-47c9-83e5-d71c17a0f205 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:29:05.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5417" for this suite. • [SLOW TEST:6.207 seconds] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:36 should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":311,"completed":61,"skipped":1162,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:29:05.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test downward API volume plugin Jan 31 00:29:05.804: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1d11a30c-d39f-4e5f-8962-806cd281cbd6" in namespace "downward-api-4332" to be "Succeeded or Failed" Jan 31 00:29:05.826: INFO: Pod "downwardapi-volume-1d11a30c-d39f-4e5f-8962-806cd281cbd6": Phase="Pending", Reason="", readiness=false. Elapsed: 21.520434ms Jan 31 00:29:07.830: INFO: Pod "downwardapi-volume-1d11a30c-d39f-4e5f-8962-806cd281cbd6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025989061s Jan 31 00:29:09.834: INFO: Pod "downwardapi-volume-1d11a30c-d39f-4e5f-8962-806cd281cbd6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.030215908s STEP: Saw pod success Jan 31 00:29:09.834: INFO: Pod "downwardapi-volume-1d11a30c-d39f-4e5f-8962-806cd281cbd6" satisfied condition "Succeeded or Failed" Jan 31 00:29:09.838: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-1d11a30c-d39f-4e5f-8962-806cd281cbd6 container client-container: STEP: delete the pod Jan 31 00:29:09.868: INFO: Waiting for pod downwardapi-volume-1d11a30c-d39f-4e5f-8962-806cd281cbd6 to disappear Jan 31 00:29:09.887: INFO: Pod downwardapi-volume-1d11a30c-d39f-4e5f-8962-806cd281cbd6 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:29:09.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4332" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":311,"completed":62,"skipped":1230,"failed":0} SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:29:09.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating pod pod-subpath-test-projected-s24f STEP: Creating a pod to test atomic-volume-subpath Jan 31 00:29:10.006: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-s24f" in namespace "subpath-6649" to be "Succeeded or Failed" Jan 31 00:29:10.044: INFO: Pod "pod-subpath-test-projected-s24f": Phase="Pending", Reason="", readiness=false. Elapsed: 37.454095ms Jan 31 00:29:12.048: INFO: Pod "pod-subpath-test-projected-s24f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041617675s Jan 31 00:29:14.054: INFO: Pod "pod-subpath-test-projected-s24f": Phase="Running", Reason="", readiness=true. Elapsed: 4.047209408s Jan 31 00:29:16.058: INFO: Pod "pod-subpath-test-projected-s24f": Phase="Running", Reason="", readiness=true. Elapsed: 6.051587241s Jan 31 00:29:18.063: INFO: Pod "pod-subpath-test-projected-s24f": Phase="Running", Reason="", readiness=true. Elapsed: 8.056322961s Jan 31 00:29:20.067: INFO: Pod "pod-subpath-test-projected-s24f": Phase="Running", Reason="", readiness=true. Elapsed: 10.060608848s Jan 31 00:29:22.073: INFO: Pod "pod-subpath-test-projected-s24f": Phase="Running", Reason="", readiness=true. Elapsed: 12.06689413s Jan 31 00:29:24.079: INFO: Pod "pod-subpath-test-projected-s24f": Phase="Running", Reason="", readiness=true. Elapsed: 14.072324999s Jan 31 00:29:26.084: INFO: Pod "pod-subpath-test-projected-s24f": Phase="Running", Reason="", readiness=true. 
Elapsed: 16.077154601s Jan 31 00:29:28.087: INFO: Pod "pod-subpath-test-projected-s24f": Phase="Running", Reason="", readiness=true. Elapsed: 18.080936648s Jan 31 00:29:30.092: INFO: Pod "pod-subpath-test-projected-s24f": Phase="Running", Reason="", readiness=true. Elapsed: 20.085589673s Jan 31 00:29:32.097: INFO: Pod "pod-subpath-test-projected-s24f": Phase="Running", Reason="", readiness=true. Elapsed: 22.090435896s Jan 31 00:29:34.101: INFO: Pod "pod-subpath-test-projected-s24f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.094650101s STEP: Saw pod success Jan 31 00:29:34.101: INFO: Pod "pod-subpath-test-projected-s24f" satisfied condition "Succeeded or Failed" Jan 31 00:29:34.104: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-projected-s24f container test-container-subpath-projected-s24f: STEP: delete the pod Jan 31 00:29:34.324: INFO: Waiting for pod pod-subpath-test-projected-s24f to disappear Jan 31 00:29:34.437: INFO: Pod pod-subpath-test-projected-s24f no longer exists STEP: Deleting pod pod-subpath-test-projected-s24f Jan 31 00:29:34.437: INFO: Deleting pod "pod-subpath-test-projected-s24f" in namespace "subpath-6649" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:29:34.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6649" for this suite. • [SLOW TEST:24.548 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":311,"completed":63,"skipped":1234,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:29:34.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 [It] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Jan 31 00:29:34.599: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:29:34.615: INFO: Number of nodes with available pods: 0 Jan 31 00:29:34.615: INFO: Node latest-worker is running more than one daemon pod Jan 31 00:29:35.699: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:29:35.703: INFO: Number of nodes with available pods: 0 Jan 31 00:29:35.703: INFO: Node latest-worker is running more than one daemon pod Jan 31 00:29:36.778: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:29:36.781: INFO: Number of nodes with available pods: 0 Jan 31 00:29:36.781: INFO: Node latest-worker is running more than one daemon pod Jan 31 00:29:37.634: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:29:37.638: INFO: Number of nodes with available pods: 0 Jan 31 00:29:37.638: INFO: Node latest-worker is running more than one daemon pod Jan 31 00:29:38.622: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:29:38.626: INFO: Number of nodes with available pods: 1 Jan 31 00:29:38.626: INFO: Node latest-worker is running more than one daemon pod Jan 31 00:29:39.676: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:29:39.682: INFO: Number of nodes with available pods: 2 Jan 31 00:29:39.682: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Jan 31 00:29:39.750: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:29:39.812: INFO: Number of nodes with available pods: 1 Jan 31 00:29:39.813: INFO: Node latest-worker2 is running more than one daemon pod Jan 31 00:29:40.957: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:29:40.960: INFO: Number of nodes with available pods: 1 Jan 31 00:29:40.960: INFO: Node latest-worker2 is running more than one daemon pod Jan 31 00:29:41.817: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:29:41.821: INFO: Number of nodes with available pods: 1 Jan 31 00:29:41.821: INFO: Node latest-worker2 is running more than one daemon pod Jan 31 00:29:42.817: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:29:42.821: INFO: Number of nodes with available pods: 1 Jan 31 00:29:42.821: INFO: Node latest-worker2 is running more than one daemon pod Jan 31 00:29:43.818: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:29:43.822: INFO: Number of nodes with available pods: 2 Jan 31 00:29:43.822: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9893, will wait for the garbage collector to delete the pods Jan 31 00:29:43.888: INFO: Deleting DaemonSet.extensions daemon-set took: 8.001534ms Jan 31 00:29:44.488: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.253766ms Jan 31 00:30:00.812: INFO: Number of nodes with available pods: 0 Jan 31 00:30:00.812: INFO: Number of running nodes: 0, number of available pods: 0 Jan 31 00:30:00.814: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"1115813"},"items":null} Jan 31 00:30:00.816: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1115813"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:30:00.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9893" for this suite. 
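Note: the repeated "can't tolerate node latest-control-plane" lines are expected, not an error: the test DaemonSet carries no toleration for the master NoSchedule taint, so that node is skipped when counting available pods. A DaemonSet that should also land on tainted control-plane nodes needs a toleration along these lines (sketch; name and image are illustrative):

$ cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set-demo
spec:
  selector:
    matchLabels:
      app: daemon-set-demo
  template:
    metadata:
      labels:
        app: daemon-set-demo
    spec:
      tolerations:                            # without this block, control-plane nodes are skipped,
      - key: node-role.kubernetes.io/master   # exactly as the log above reports
        operator: Exists
        effect: NoSchedule
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.2
EOF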
• [SLOW TEST:26.391 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":311,"completed":64,"skipped":1266,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:30:00.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:30:07.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7491" for this suite. • [SLOW TEST:6.228 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not conflict [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":311,"completed":65,"skipped":1285,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:30:07.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-2664 [It] should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating statefulset ss in namespace statefulset-2664 Jan 31 00:30:07.438: INFO: Found 
0 stateful pods, waiting for 1 Jan 31 00:30:17.442: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jan 31 00:30:17.485: INFO: Deleting all statefulset in ns statefulset-2664 Jan 31 00:30:17.525: INFO: Scaling statefulset ss to 0 Jan 31 00:30:57.616: INFO: Waiting for statefulset status.replicas updated to 0 Jan 31 00:30:57.620: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:30:57.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2664" for this suite. • [SLOW TEST:50.583 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":311,"completed":66,"skipped":1311,"failed":0} SSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:30:57.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating secret with name secret-test-68e54518-dff0-4467-84ba-3ca83ba627db STEP: Creating a pod to test consume secrets Jan 31 00:30:57.757: INFO: Waiting up to 5m0s for pod "pod-secrets-77ed44ed-a7b1-48ca-91c3-d61035ad48bb" in namespace "secrets-5849" to be "Succeeded or Failed" Jan 31 00:30:57.772: INFO: Pod "pod-secrets-77ed44ed-a7b1-48ca-91c3-d61035ad48bb": Phase="Pending", Reason="", readiness=false. Elapsed: 15.716137ms Jan 31 00:30:59.777: INFO: Pod "pod-secrets-77ed44ed-a7b1-48ca-91c3-d61035ad48bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020357101s Jan 31 00:31:01.782: INFO: Pod "pod-secrets-77ed44ed-a7b1-48ca-91c3-d61035ad48bb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.0251921s STEP: Saw pod success Jan 31 00:31:01.782: INFO: Pod "pod-secrets-77ed44ed-a7b1-48ca-91c3-d61035ad48bb" satisfied condition "Succeeded or Failed" Jan 31 00:31:01.785: INFO: Trying to get logs from node latest-worker pod pod-secrets-77ed44ed-a7b1-48ca-91c3-d61035ad48bb container secret-volume-test: STEP: delete the pod Jan 31 00:31:01.842: INFO: Waiting for pod pod-secrets-77ed44ed-a7b1-48ca-91c3-d61035ad48bb to disappear Jan 31 00:31:01.869: INFO: Pod pod-secrets-77ed44ed-a7b1-48ca-91c3-d61035ad48bb no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:31:01.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5849" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":311,"completed":67,"skipped":1317,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:31:01.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Jan 31 00:31:01.975: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Jan 31 00:31:05.545: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5221 --namespace=crd-publish-openapi-5221 create -f -' Jan 31 00:31:09.270: INFO: stderr: "" Jan 31 00:31:09.270: INFO: stdout: "e2e-test-crd-publish-openapi-7099-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jan 31 00:31:09.270: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5221 --namespace=crd-publish-openapi-5221 delete e2e-test-crd-publish-openapi-7099-crds test-foo' Jan 31 00:31:09.371: INFO: stderr: "" Jan 31 00:31:09.371: INFO: stdout: "e2e-test-crd-publish-openapi-7099-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Jan 31 00:31:09.371: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5221 --namespace=crd-publish-openapi-5221 apply -f -' Jan 31 00:31:09.719: INFO: stderr: "" Jan 31 00:31:09.719: INFO: stdout: "e2e-test-crd-publish-openapi-7099-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jan 31 00:31:09.719: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5221 --namespace=crd-publish-openapi-5221 delete e2e-test-crd-publish-openapi-7099-crds test-foo' Jan 31 00:31:09.877: INFO: 
stderr: "" Jan 31 00:31:09.877: INFO: stdout: "e2e-test-crd-publish-openapi-7099-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Jan 31 00:31:09.877: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5221 --namespace=crd-publish-openapi-5221 create -f -' Jan 31 00:31:10.150: INFO: rc: 1 Jan 31 00:31:10.151: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5221 --namespace=crd-publish-openapi-5221 apply -f -' Jan 31 00:31:10.433: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Jan 31 00:31:10.433: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5221 --namespace=crd-publish-openapi-5221 create -f -' Jan 31 00:31:10.699: INFO: rc: 1 Jan 31 00:31:10.699: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5221 --namespace=crd-publish-openapi-5221 apply -f -' Jan 31 00:31:10.971: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Jan 31 00:31:10.971: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5221 explain e2e-test-crd-publish-openapi-7099-crds' Jan 31 00:31:11.294: INFO: stderr: "" Jan 31 00:31:11.294: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7099-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Jan 31 00:31:11.295: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5221 explain e2e-test-crd-publish-openapi-7099-crds.metadata' Jan 31 00:31:11.576: INFO: stderr: "" Jan 31 00:31:11.576: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7099-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. 
If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. 
There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Jan 31 00:31:11.577: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5221 explain e2e-test-crd-publish-openapi-7099-crds.spec' Jan 31 00:31:11.856: INFO: stderr: "" Jan 31 00:31:11.856: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7099-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Jan 31 00:31:11.856: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5221 explain e2e-test-crd-publish-openapi-7099-crds.spec.bars' Jan 31 00:31:12.108: INFO: stderr: "" Jan 31 00:31:12.108: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7099-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Jan 31 00:31:12.108: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5221 explain e2e-test-crd-publish-openapi-7099-crds.spec.bars2' Jan 31 00:31:12.421: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:31:15.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5221" for this suite. 
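For reference, the behavior this test exercises (the apiserver publishing a CRD's openAPIV3Schema so that kubectl gains client-side validation and `explain` support for the custom type) can be reproduced by hand with a minimal CRD. This is a sketch only: the group example.com, kind Foo, and the field names are illustrative stand-ins for the suite's generated e2e-test-crd-publish-openapi-* resources.

cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  names:
    kind: Foo
    plural: foos
    singular: foo
  scope: Namespaced
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        description: Foo CRD for Testing
        type: object
        properties:
          spec:
            type: object
            properties:
              bars:
                type: array
                items:
                  type: object
                  required: ["name"]
                  properties:
                    name:
                      type: string
                    age:
                      type: string
EOF

# Give the apiserver a few seconds to publish the OpenAPI document; then an
# object satisfying the schema is accepted, while one missing the required
# "name" fails client-side validation (the rc: 1 runs in the log above).
cat <<'EOF' | kubectl create -f -
apiVersion: example.com/v1
kind: Foo
metadata:
  name: test-foo
spec:
  bars:
  - name: example
EOF

# The same published schema backs kubectl explain, including recursion into
# nested properties:
kubectl explain foos.spec
kubectl explain foos.spec.bars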
• [SLOW TEST:14.106 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":311,"completed":68,"skipped":1324,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:31:15.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jan 31 00:31:26.119: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 31 00:31:26.141: INFO: Pod pod-with-prestop-http-hook still exists Jan 31 00:31:28.141: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 31 00:31:28.146: INFO: Pod pod-with-prestop-http-hook still exists Jan 31 00:31:30.141: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 31 00:31:30.146: INFO: Pod pod-with-prestop-http-hook still exists Jan 31 00:31:32.141: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 31 00:31:32.147: INFO: Pod pod-with-prestop-http-hook still exists Jan 31 00:31:34.141: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 31 00:31:34.147: INFO: Pod pod-with-prestop-http-hook still exists Jan 31 00:31:36.141: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 31 00:31:36.146: INFO: Pod pod-with-prestop-http-hook still exists Jan 31 00:31:38.141: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 31 00:31:38.145: INFO: Pod pod-with-prestop-http-hook still exists Jan 31 00:31:40.141: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 31 00:31:40.147: INFO: Pod pod-with-prestop-http-hook still exists Jan 31 00:31:42.141: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 31 00:31:42.146: INFO: Pod pod-with-prestop-http-hook still exists Jan 31 00:31:44.141: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 31 00:31:44.146: INFO: Pod pod-with-prestop-http-hook still exists Jan 31 00:31:46.141: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 31 00:31:46.146: INFO: Pod pod-with-prestop-http-hook still exists Jan 31 00:31:48.141: INFO: Waiting for 
pod pod-with-prestop-http-hook to disappear Jan 31 00:31:48.147: INFO: Pod pod-with-prestop-http-hook still exists Jan 31 00:31:50.141: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 31 00:31:50.147: INFO: Pod pod-with-prestop-http-hook still exists Jan 31 00:31:52.141: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 31 00:31:52.145: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:31:52.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8280" for this suite. • [SLOW TEST:36.176 seconds] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:43 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":311,"completed":69,"skipped":1335,"failed":0} S ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:31:52.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test substitution in container's command Jan 31 00:31:52.250: INFO: Waiting up to 5m0s for pod "var-expansion-a2a53479-bf6b-4fab-bdf9-6bd765c13d72" in namespace "var-expansion-7162" to be "Succeeded or Failed" Jan 31 00:31:52.260: INFO: Pod "var-expansion-a2a53479-bf6b-4fab-bdf9-6bd765c13d72": Phase="Pending", Reason="", readiness=false. Elapsed: 9.638309ms Jan 31 00:31:54.265: INFO: Pod "var-expansion-a2a53479-bf6b-4fab-bdf9-6bd765c13d72": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014392314s Jan 31 00:31:56.268: INFO: Pod "var-expansion-a2a53479-bf6b-4fab-bdf9-6bd765c13d72": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.018167059s STEP: Saw pod success Jan 31 00:31:56.268: INFO: Pod "var-expansion-a2a53479-bf6b-4fab-bdf9-6bd765c13d72" satisfied condition "Succeeded or Failed" Jan 31 00:31:56.271: INFO: Trying to get logs from node latest-worker pod var-expansion-a2a53479-bf6b-4fab-bdf9-6bd765c13d72 container dapi-container: STEP: delete the pod Jan 31 00:31:56.303: INFO: Waiting for pod var-expansion-a2a53479-bf6b-4fab-bdf9-6bd765c13d72 to disappear Jan 31 00:31:56.313: INFO: Pod var-expansion-a2a53479-bf6b-4fab-bdf9-6bd765c13d72 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:31:56.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7162" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":311,"completed":70,"skipped":1336,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:31:56.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test emptydir 0644 on node default medium Jan 31 00:31:56.675: INFO: Waiting up to 5m0s for pod "pod-078e5d00-eb28-40b3-9e3c-e93fbb029b49" in namespace "emptydir-9235" to be "Succeeded or Failed" Jan 31 00:31:56.703: INFO: Pod "pod-078e5d00-eb28-40b3-9e3c-e93fbb029b49": Phase="Pending", Reason="", readiness=false. Elapsed: 27.895626ms Jan 31 00:31:58.723: INFO: Pod "pod-078e5d00-eb28-40b3-9e3c-e93fbb029b49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048123029s Jan 31 00:32:00.728: INFO: Pod "pod-078e5d00-eb28-40b3-9e3c-e93fbb029b49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052869343s STEP: Saw pod success Jan 31 00:32:00.728: INFO: Pod "pod-078e5d00-eb28-40b3-9e3c-e93fbb029b49" satisfied condition "Succeeded or Failed" Jan 31 00:32:00.731: INFO: Trying to get logs from node latest-worker pod pod-078e5d00-eb28-40b3-9e3c-e93fbb029b49 container test-container: STEP: delete the pod Jan 31 00:32:00.815: INFO: Waiting for pod pod-078e5d00-eb28-40b3-9e3c-e93fbb029b49 to disappear Jan 31 00:32:00.821: INFO: Pod pod-078e5d00-eb28-40b3-9e3c-e93fbb029b49 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:32:00.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9235" for this suite. 
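The emptydir volume tests above reduce to a pod that writes a file into an emptyDir mount with a given mode and then verifies it. A minimal sketch follows; the busybox image, pod name, and command are illustrative assumptions, not the suite's actual test pod.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.36
    command: ["sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && stat -c '%a %u' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}        # "default" medium: node-local storage (tmpfs if medium: Memory)
EOF

# Once the pod completes, its log should show mode 644 and uid 0, which is
# what the (root,0644,default) variant asserts:
kubectl logs emptydir-demo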
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":71,"skipped":1352,"failed":0} ------------------------------ [sig-apps] Job should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:32:00.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-7774, will wait for the garbage collector to delete the pods Jan 31 00:32:05.004: INFO: Deleting Job.batch foo took: 6.805118ms Jan 31 00:32:05.604: INFO: Terminating Job.batch foo pods took: 600.25815ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:32:51.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-7774" for this suite. • [SLOW TEST:50.386 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":311,"completed":72,"skipped":1352,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:32:51.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test emptydir 0644 on node default medium Jan 31 00:32:51.330: INFO: Waiting up to 5m0s for pod "pod-a1c465d1-e7f7-4d56-878e-03866f455848" in namespace "emptydir-8433" to be "Succeeded or Failed" Jan 31 00:32:51.333: INFO: Pod "pod-a1c465d1-e7f7-4d56-878e-03866f455848": Phase="Pending", Reason="", readiness=false. Elapsed: 2.812479ms Jan 31 00:32:53.338: INFO: Pod "pod-a1c465d1-e7f7-4d56-878e-03866f455848": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008200892s Jan 31 00:32:55.343: INFO: Pod "pod-a1c465d1-e7f7-4d56-878e-03866f455848": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.012831332s Jan 31 00:32:57.347: INFO: Pod "pod-a1c465d1-e7f7-4d56-878e-03866f455848": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01756496s STEP: Saw pod success Jan 31 00:32:57.347: INFO: Pod "pod-a1c465d1-e7f7-4d56-878e-03866f455848" satisfied condition "Succeeded or Failed" Jan 31 00:32:57.351: INFO: Trying to get logs from node latest-worker pod pod-a1c465d1-e7f7-4d56-878e-03866f455848 container test-container: STEP: delete the pod Jan 31 00:32:57.386: INFO: Waiting for pod pod-a1c465d1-e7f7-4d56-878e-03866f455848 to disappear Jan 31 00:32:57.411: INFO: Pod pod-a1c465d1-e7f7-4d56-878e-03866f455848 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:32:57.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8433" for this suite. • [SLOW TEST:6.203 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":73,"skipped":1380,"failed":0} SSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:32:57.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating configMap configmap-8670/configmap-test-531a3717-0bb2-4db7-b078-9d2347cbb0f1 STEP: Creating a pod to test consume configMaps Jan 31 00:32:57.515: INFO: Waiting up to 5m0s for pod "pod-configmaps-b458da82-b7a7-4a92-943d-ecd23ef26c92" in namespace "configmap-8670" to be "Succeeded or Failed" Jan 31 00:32:57.538: INFO: Pod "pod-configmaps-b458da82-b7a7-4a92-943d-ecd23ef26c92": Phase="Pending", Reason="", readiness=false. Elapsed: 23.261301ms Jan 31 00:32:59.542: INFO: Pod "pod-configmaps-b458da82-b7a7-4a92-943d-ecd23ef26c92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027412287s Jan 31 00:33:01.547: INFO: Pod "pod-configmaps-b458da82-b7a7-4a92-943d-ecd23ef26c92": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.032078255s STEP: Saw pod success Jan 31 00:33:01.547: INFO: Pod "pod-configmaps-b458da82-b7a7-4a92-943d-ecd23ef26c92" satisfied condition "Succeeded or Failed" Jan 31 00:33:01.550: INFO: Trying to get logs from node latest-worker pod pod-configmaps-b458da82-b7a7-4a92-943d-ecd23ef26c92 container env-test: STEP: delete the pod Jan 31 00:33:01.702: INFO: Waiting for pod pod-configmaps-b458da82-b7a7-4a92-943d-ecd23ef26c92 to disappear Jan 31 00:33:01.721: INFO: Pod pod-configmaps-b458da82-b7a7-4a92-943d-ecd23ef26c92 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:33:01.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8670" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":311,"completed":74,"skipped":1385,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:33:01.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating a service nodeport-service with the type=NodePort in namespace services-5044 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-5044 STEP: creating replication controller externalsvc in namespace services-5044 I0131 00:33:02.015487 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-5044, replica count: 2 I0131 00:33:05.065946 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0131 00:33:08.066210 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Jan 31 00:33:08.167: INFO: Creating new exec pod Jan 31 00:33:12.196: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-5044 exec execpodkztnq -- /bin/sh -x -c nslookup nodeport-service.services-5044.svc.cluster.local' Jan 31 00:33:12.440: INFO: stderr: "I0131 00:33:12.346427 480 log.go:181] (0xc0009f22c0) (0xc000cca000) Create stream\nI0131 00:33:12.346526 480 log.go:181] (0xc0009f22c0) (0xc000cca000) Stream added, broadcasting: 1\nI0131 00:33:12.349007 480 log.go:181] (0xc0009f22c0) Reply frame received for 1\nI0131 00:33:12.349054 480 log.go:181] (0xc0009f22c0) (0xc00080e1e0) Create stream\nI0131 00:33:12.349067 480 
log.go:181] (0xc0009f22c0) (0xc00080e1e0) Stream added, broadcasting: 3\nI0131 00:33:12.350167 480 log.go:181] (0xc0009f22c0) Reply frame received for 3\nI0131 00:33:12.350213 480 log.go:181] (0xc0009f22c0) (0xc0004b4dc0) Create stream\nI0131 00:33:12.350228 480 log.go:181] (0xc0009f22c0) (0xc0004b4dc0) Stream added, broadcasting: 5\nI0131 00:33:12.351433 480 log.go:181] (0xc0009f22c0) Reply frame received for 5\nI0131 00:33:12.420677 480 log.go:181] (0xc0009f22c0) Data frame received for 5\nI0131 00:33:12.420710 480 log.go:181] (0xc0004b4dc0) (5) Data frame handling\nI0131 00:33:12.420725 480 log.go:181] (0xc0004b4dc0) (5) Data frame sent\n+ nslookup nodeport-service.services-5044.svc.cluster.local\nI0131 00:33:12.431312 480 log.go:181] (0xc0009f22c0) Data frame received for 3\nI0131 00:33:12.431351 480 log.go:181] (0xc00080e1e0) (3) Data frame handling\nI0131 00:33:12.431379 480 log.go:181] (0xc00080e1e0) (3) Data frame sent\nI0131 00:33:12.432176 480 log.go:181] (0xc0009f22c0) Data frame received for 3\nI0131 00:33:12.432200 480 log.go:181] (0xc00080e1e0) (3) Data frame handling\nI0131 00:33:12.432223 480 log.go:181] (0xc00080e1e0) (3) Data frame sent\nI0131 00:33:12.432517 480 log.go:181] (0xc0009f22c0) Data frame received for 5\nI0131 00:33:12.432533 480 log.go:181] (0xc0004b4dc0) (5) Data frame handling\nI0131 00:33:12.432753 480 log.go:181] (0xc0009f22c0) Data frame received for 3\nI0131 00:33:12.432766 480 log.go:181] (0xc00080e1e0) (3) Data frame handling\nI0131 00:33:12.434780 480 log.go:181] (0xc0009f22c0) Data frame received for 1\nI0131 00:33:12.434793 480 log.go:181] (0xc000cca000) (1) Data frame handling\nI0131 00:33:12.434802 480 log.go:181] (0xc000cca000) (1) Data frame sent\nI0131 00:33:12.434809 480 log.go:181] (0xc0009f22c0) (0xc000cca000) Stream removed, broadcasting: 1\nI0131 00:33:12.434863 480 log.go:181] (0xc0009f22c0) Go away received\nI0131 00:33:12.435087 480 log.go:181] (0xc0009f22c0) (0xc000cca000) Stream removed, broadcasting: 1\nI0131 00:33:12.435103 480 log.go:181] (0xc0009f22c0) (0xc00080e1e0) Stream removed, broadcasting: 3\nI0131 00:33:12.435111 480 log.go:181] (0xc0009f22c0) (0xc0004b4dc0) Stream removed, broadcasting: 5\n" Jan 31 00:33:12.441: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-5044.svc.cluster.local\tcanonical name = externalsvc.services-5044.svc.cluster.local.\nName:\texternalsvc.services-5044.svc.cluster.local\nAddress: 10.96.207.235\n\n" STEP: deleting ReplicationController externalsvc in namespace services-5044, will wait for the garbage collector to delete the pods Jan 31 00:33:12.500: INFO: Deleting ReplicationController externalsvc took: 6.273709ms Jan 31 00:33:13.101: INFO: Terminating ReplicationController externalsvc pods took: 600.181222ms Jan 31 00:33:51.259: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:33:51.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5044" for this suite. 
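The type flip exercised by this test can be reproduced directly. A sketch with illustrative names; the exact set of fields that must be cleared when moving to ExternalName can vary slightly across cluster versions.

# Start with a NodePort service, then patch it to ExternalName. ExternalName
# services carry no cluster IP or node ports, so those fields are cleared in
# the same patch.
kubectl create service nodeport demo-svc --tcp=80:80
kubectl patch service demo-svc --type=merge -p '{
  "spec": {
    "type": "ExternalName",
    "externalName": "externalsvc.default.svc.cluster.local",
    "clusterIP": null,
    "clusterIPs": null,
    "ports": null
  }
}'

# From any pod, the service name now resolves as a CNAME to the external
# name, which is exactly what the nslookup output in the log shows:
#   nslookup demo-svc.default.svc.cluster.local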
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:49.500 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":311,"completed":75,"skipped":1394,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:33:51.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check is all data is printed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Jan 31 00:33:51.455: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-7298 version' Jan 31 00:33:51.648: INFO: stderr: "" Jan 31 00:33:51.648: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"21+\", GitVersion:\"v1.21.0-alpha.1\", GitCommit:\"624cb1c82fb4b6804f44e77dc6475b1b4e3a9a39\", GitTreeState:\"clean\", BuildDate:\"2021-01-13T19:04:45Z\", GoVersion:\"go1.15.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"21+\", GitVersion:\"v1.21.0-alpha.0\", GitCommit:\"98bc258bf5516b6c60860e06845b899eab29825d\", GitTreeState:\"clean\", BuildDate:\"2021-01-09T21:29:39Z\", GoVersion:\"go1.15.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:33:51.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7298" for this suite. 
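The version check above only asserts that both the client and server stanzas are printed. For scripted checks, the structured output is easier to consume; a small sketch (assumes jq is available, which the suite itself does not use):

kubectl version -o json | jq -r '.clientVersion.gitVersion, .serverVersion.gitVersion'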
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":311,"completed":76,"skipped":1415,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:33:51.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating configMap with name configmap-projected-all-test-volume-fcb9f1cf-8943-4e98-b153-ff45e4df7e0e STEP: Creating secret with name secret-projected-all-test-volume-813e06bd-e884-4d00-a164-531ce133e23f STEP: Creating a pod to test Check all projections for projected volume plugin Jan 31 00:33:51.804: INFO: Waiting up to 5m0s for pod "projected-volume-f564517b-30d2-425a-8b06-03de491341c2" in namespace "projected-6990" to be "Succeeded or Failed" Jan 31 00:33:51.874: INFO: Pod "projected-volume-f564517b-30d2-425a-8b06-03de491341c2": Phase="Pending", Reason="", readiness=false. Elapsed: 70.170214ms Jan 31 00:33:54.054: INFO: Pod "projected-volume-f564517b-30d2-425a-8b06-03de491341c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.249664007s Jan 31 00:33:56.058: INFO: Pod "projected-volume-f564517b-30d2-425a-8b06-03de491341c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.254102852s STEP: Saw pod success Jan 31 00:33:56.058: INFO: Pod "projected-volume-f564517b-30d2-425a-8b06-03de491341c2" satisfied condition "Succeeded or Failed" Jan 31 00:33:56.060: INFO: Trying to get logs from node latest-worker pod projected-volume-f564517b-30d2-425a-8b06-03de491341c2 container projected-all-volume-test: STEP: delete the pod Jan 31 00:33:56.138: INFO: Waiting for pod projected-volume-f564517b-30d2-425a-8b06-03de491341c2 to disappear Jan 31 00:33:56.147: INFO: Pod projected-volume-f564517b-30d2-425a-8b06-03de491341c2 no longer exists [AfterEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:33:56.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6990" for this suite. 
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":311,"completed":77,"skipped":1476,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:33:56.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5524.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-5524.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5524.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-5524.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5524.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5524.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-5524.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5524.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-5524.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5524.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 31 00:34:04.367: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:04.371: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:04.374: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:04.376: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:04.386: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:04.390: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:04.393: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:04.396: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:04.402: INFO: Lookups using dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5524.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5524.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local jessie_udp@dns-test-service-2.dns-5524.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5524.svc.cluster.local] Jan 31 00:34:09.408: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource 
(get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:09.412: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:09.415: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:09.419: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:09.429: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:09.433: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:09.436: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:09.438: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:09.444: INFO: Lookups using dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5524.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5524.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local jessie_udp@dns-test-service-2.dns-5524.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5524.svc.cluster.local] Jan 31 00:34:14.408: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:14.412: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:14.415: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:14.467: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5524.svc.cluster.local from 
pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:14.479: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:14.482: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:14.486: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:14.489: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:14.495: INFO: Lookups using dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5524.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5524.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local jessie_udp@dns-test-service-2.dns-5524.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5524.svc.cluster.local] Jan 31 00:34:19.407: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:19.411: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:19.414: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:19.418: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:19.428: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:19.431: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods 
dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:19.433: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:19.436: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:19.442: INFO: Lookups using dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5524.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5524.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local jessie_udp@dns-test-service-2.dns-5524.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5524.svc.cluster.local] Jan 31 00:34:24.408: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:24.411: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:24.415: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:24.418: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:24.427: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:24.431: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:24.434: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:24.437: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:24.443: INFO: Lookups using dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310 failed for: 
[wheezy_udp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5524.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5524.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local jessie_udp@dns-test-service-2.dns-5524.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5524.svc.cluster.local] Jan 31 00:34:29.407: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:29.411: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:29.415: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:29.418: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:29.427: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:29.430: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:29.434: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:29.441: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5524.svc.cluster.local from pod dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310: the server could not find the requested resource (get pods dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310) Jan 31 00:34:29.446: INFO: Lookups using dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5524.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5524.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5524.svc.cluster.local jessie_udp@dns-test-service-2.dns-5524.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5524.svc.cluster.local] Jan 31 00:34:34.442: INFO: DNS probes using dns-5524/dns-test-038fef6b-4e56-4bc4-bb86-05a8dc122310 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:34:34.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5524" for this suite. • [SLOW TEST:38.933 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":311,"completed":78,"skipped":1490,"failed":0} SSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:34:35.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9458.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-9458.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9458.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9458.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-9458.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9458.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 31 00:34:43.345: INFO: DNS probes using dns-9458/dns-test-606dd72d-4798-4848-850f-94428f4cfd11 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:34:43.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9458" for this suite. 
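The probe loops above boil down to two checks: a getent lookup that is satisfied from the kubelet-managed /etc/hosts file, and a dig query for the pod's dashed-IP A record. A minimal by-hand version follows, assuming bash, a cluster whose DNS serves pod A records (as this suite's cluster does), and an arbitrary utility image; the pod name dns-check, the default namespace, and the tutum/dnsutils image are all illustrative, not taken from this run.

    # Throwaway pod; any image shipping `getent` and `dig` will do (illustrative choice).
    kubectl run dns-check --image=tutum/dnsutils --restart=Never --command -- sleep 3600
    kubectl wait --for=condition=Ready pod/dns-check --timeout=120s

    # The kubelet writes the pod's own hostname into /etc/hosts, so this
    # lookup succeeds without touching cluster DNS:
    kubectl exec dns-check -- getent hosts dns-check

    # Pod A records take the form <ip-with-dashes>.<namespace>.pod.cluster.local,
    # the same name the probe scripts derive from `hostname -i`:
    POD_IP=$(kubectl get pod dns-check -o jsonpath='{.status.podIP}')
    kubectl exec dns-check -- dig +short "${POD_IP//./-}.default.pod.cluster.local" A

    kubectl delete pod dns-check --wait=false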
• [SLOW TEST:8.348 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":311,"completed":79,"skipped":1497,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:34:43.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test downward API volume plugin Jan 31 00:34:44.084: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1b5290c2-b134-4069-91e6-a22ab33adad2" in namespace "projected-8384" to be "Succeeded or Failed" Jan 31 00:34:44.122: INFO: Pod "downwardapi-volume-1b5290c2-b134-4069-91e6-a22ab33adad2": Phase="Pending", Reason="", readiness=false. Elapsed: 37.879123ms Jan 31 00:34:46.174: INFO: Pod "downwardapi-volume-1b5290c2-b134-4069-91e6-a22ab33adad2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08972685s Jan 31 00:34:48.178: INFO: Pod "downwardapi-volume-1b5290c2-b134-4069-91e6-a22ab33adad2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094223909s Jan 31 00:34:50.182: INFO: Pod "downwardapi-volume-1b5290c2-b134-4069-91e6-a22ab33adad2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.097803507s STEP: Saw pod success Jan 31 00:34:50.182: INFO: Pod "downwardapi-volume-1b5290c2-b134-4069-91e6-a22ab33adad2" satisfied condition "Succeeded or Failed" Jan 31 00:34:50.185: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-1b5290c2-b134-4069-91e6-a22ab33adad2 container client-container: STEP: delete the pod Jan 31 00:34:50.264: INFO: Waiting for pod downwardapi-volume-1b5290c2-b134-4069-91e6-a22ab33adad2 to disappear Jan 31 00:34:50.267: INFO: Pod downwardapi-volume-1b5290c2-b134-4069-91e6-a22ab33adad2 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:34:50.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8384" for this suite. 
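The projected downwardAPI volume exercised here is straightforward to reproduce. The sketch below (pod name, mount path, and file name are illustrative) projects only metadata.name into the container; because the container merely cats the file and exits, the pod ends in the Succeeded phase the test waits for.

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: podname-demo            # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/podname"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: podname
                fieldRef:
                  fieldPath: metadata.name
    EOF
    # After the pod completes, the container log is exactly the pod name:
    kubectl logs podname-demo       # -> podname-demo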
• [SLOW TEST:6.826 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":311,"completed":80,"skipped":1500,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should delete a collection of pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:34:50.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should delete a collection of pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Create set of pods Jan 31 00:34:50.398: INFO: created test-pod-1 Jan 31 00:34:50.413: INFO: created test-pod-2 Jan 31 00:34:50.435: INFO: created test-pod-3 STEP: waiting for all 3 pods to be located STEP: waiting for all pods to be deleted [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:34:50.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2332" for this suite. 
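Deleting a collection of pods is a single DELETE against the pods collection, which kubectl expresses as a delete by label selector; a sketch with illustrative pod names and label follows.

    # Create three labeled pods (names and the type=Testing label are illustrative).
    for i in 1 2 3; do
      kubectl run "test-pod-$i" --image=k8s.gcr.io/pause:3.2 \
        --labels="type=Testing" --restart=Never
    done

    kubectl get pods -l type=Testing     # locate all three
    kubectl delete pods -l type=Testing  # one collection delete, not three deletes
    kubectl get pods -l type=Testing     # eventually: No resources found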
•{"msg":"PASSED [k8s.io] Pods should delete a collection of pods [Conformance]","total":311,"completed":81,"skipped":1523,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:34:50.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:34:54.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-901" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":311,"completed":82,"skipped":1652,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:34:54.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename discovery STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 STEP: Setting up server cert [It] should validate PreferredVersion for each APIGroup [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Jan 31 00:34:55.499: INFO: Checking APIGroup: apiregistration.k8s.io Jan 31 00:34:55.500: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 Jan 31 00:34:55.500: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}] Jan 31 00:34:55.500: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 Jan 31 00:34:55.500: INFO: Checking APIGroup: apps Jan 31 00:34:55.501: INFO: PreferredVersion.GroupVersion: apps/v1 Jan 31 00:34:55.501: INFO: Versions found [{apps/v1 v1}] Jan 31 00:34:55.501: INFO: apps/v1 matches apps/v1 Jan 31 00:34:55.501: INFO: Checking 
APIGroup: events.k8s.io Jan 31 00:34:55.502: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 Jan 31 00:34:55.502: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] Jan 31 00:34:55.502: INFO: events.k8s.io/v1 matches events.k8s.io/v1 Jan 31 00:34:55.502: INFO: Checking APIGroup: authentication.k8s.io Jan 31 00:34:55.503: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 Jan 31 00:34:55.503: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}] Jan 31 00:34:55.503: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 Jan 31 00:34:55.503: INFO: Checking APIGroup: authorization.k8s.io Jan 31 00:34:55.503: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 Jan 31 00:34:55.503: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}] Jan 31 00:34:55.503: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 Jan 31 00:34:55.503: INFO: Checking APIGroup: autoscaling Jan 31 00:34:55.505: INFO: PreferredVersion.GroupVersion: autoscaling/v1 Jan 31 00:34:55.505: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] Jan 31 00:34:55.505: INFO: autoscaling/v1 matches autoscaling/v1 Jan 31 00:34:55.505: INFO: Checking APIGroup: batch Jan 31 00:34:55.505: INFO: PreferredVersion.GroupVersion: batch/v1 Jan 31 00:34:55.505: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] Jan 31 00:34:55.505: INFO: batch/v1 matches batch/v1 Jan 31 00:34:55.505: INFO: Checking APIGroup: certificates.k8s.io Jan 31 00:34:55.506: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 Jan 31 00:34:55.506: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}] Jan 31 00:34:55.506: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 Jan 31 00:34:55.506: INFO: Checking APIGroup: networking.k8s.io Jan 31 00:34:55.507: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 Jan 31 00:34:55.507: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}] Jan 31 00:34:55.507: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 Jan 31 00:34:55.507: INFO: Checking APIGroup: extensions Jan 31 00:34:55.508: INFO: PreferredVersion.GroupVersion: extensions/v1beta1 Jan 31 00:34:55.508: INFO: Versions found [{extensions/v1beta1 v1beta1}] Jan 31 00:34:55.508: INFO: extensions/v1beta1 matches extensions/v1beta1 Jan 31 00:34:55.508: INFO: Checking APIGroup: policy Jan 31 00:34:55.508: INFO: PreferredVersion.GroupVersion: policy/v1beta1 Jan 31 00:34:55.508: INFO: Versions found [{policy/v1beta1 v1beta1}] Jan 31 00:34:55.508: INFO: policy/v1beta1 matches policy/v1beta1 Jan 31 00:34:55.508: INFO: Checking APIGroup: rbac.authorization.k8s.io Jan 31 00:34:55.509: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 Jan 31 00:34:55.509: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}] Jan 31 00:34:55.509: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 Jan 31 00:34:55.509: INFO: Checking APIGroup: storage.k8s.io Jan 31 00:34:55.510: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 Jan 31 00:34:55.510: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] Jan 31 00:34:55.510: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 Jan 31 00:34:55.510: INFO: Checking APIGroup: admissionregistration.k8s.io Jan 31 00:34:55.511: INFO: PreferredVersion.GroupVersion: 
admissionregistration.k8s.io/v1 Jan 31 00:34:55.511: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}] Jan 31 00:34:55.511: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 Jan 31 00:34:55.511: INFO: Checking APIGroup: apiextensions.k8s.io Jan 31 00:34:55.511: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 Jan 31 00:34:55.511: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}] Jan 31 00:34:55.511: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 Jan 31 00:34:55.511: INFO: Checking APIGroup: scheduling.k8s.io Jan 31 00:34:55.512: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 Jan 31 00:34:55.512: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}] Jan 31 00:34:55.512: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 Jan 31 00:34:55.512: INFO: Checking APIGroup: coordination.k8s.io Jan 31 00:34:55.513: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 Jan 31 00:34:55.513: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}] Jan 31 00:34:55.513: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 Jan 31 00:34:55.513: INFO: Checking APIGroup: node.k8s.io Jan 31 00:34:55.514: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1 Jan 31 00:34:55.514: INFO: Versions found [{node.k8s.io/v1 v1} {node.k8s.io/v1beta1 v1beta1}] Jan 31 00:34:55.514: INFO: node.k8s.io/v1 matches node.k8s.io/v1 Jan 31 00:34:55.514: INFO: Checking APIGroup: discovery.k8s.io Jan 31 00:34:55.515: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1beta1 Jan 31 00:34:55.515: INFO: Versions found [{discovery.k8s.io/v1beta1 v1beta1}] Jan 31 00:34:55.515: INFO: discovery.k8s.io/v1beta1 matches discovery.k8s.io/v1beta1 Jan 31 00:34:55.515: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io Jan 31 00:34:55.515: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta1 Jan 31 00:34:55.515: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta1 v1beta1}] Jan 31 00:34:55.515: INFO: flowcontrol.apiserver.k8s.io/v1beta1 matches flowcontrol.apiserver.k8s.io/v1beta1 Jan 31 00:34:55.515: INFO: Checking APIGroup: pingcap.com Jan 31 00:34:55.516: INFO: PreferredVersion.GroupVersion: pingcap.com/v1alpha1 Jan 31 00:34:55.516: INFO: Versions found [{pingcap.com/v1alpha1 v1alpha1}] Jan 31 00:34:55.516: INFO: pingcap.com/v1alpha1 matches pingcap.com/v1alpha1 [AfterEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:34:55.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "discovery-3858" for this suite. 
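The property being validated is visible directly in the discovery document: every group returned by /apis carries a preferredVersion that must appear among that group's versions. The check can be repeated from a shell (jq is assumed to be installed locally):

    # One line per group, matching the "X matches X" INFO lines above:
    kubectl get --raw /apis \
      | jq -r '.groups[] | "\(.name) prefers \(.preferredVersion.groupVersion)"'

    # Cross-check a single group, e.g. events.k8s.io, against its version list:
    kubectl get --raw /apis \
      | jq '.groups[] | select(.name == "events.k8s.io")
            | {preferred: .preferredVersion.groupVersion,
               versions: [.versions[].groupVersion]}'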
•{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":311,"completed":83,"skipped":1664,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:34:55.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Jan 31 00:34:55.691: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 31 00:34:55.742: INFO: Waiting for terminating namespaces to be deleted... Jan 31 00:34:55.745: INFO: Logging pods the apiserver thinks is on node latest-worker before test Jan 31 00:34:55.750: INFO: chaos-controller-manager-69c479c674-tdrls from default started at 2021-01-26 11:59:56 +0000 UTC (1 container statuses recorded) Jan 31 00:34:55.750: INFO: Container chaos-mesh ready: true, restart count 0 Jan 31 00:34:55.750: INFO: chaos-daemon-vkxzr from default started at 2021-01-26 11:59:56 +0000 UTC (1 container statuses recorded) Jan 31 00:34:55.750: INFO: Container chaos-daemon ready: true, restart count 0 Jan 31 00:34:55.750: INFO: kindnet-5bf5g from kube-system started at 2021-01-26 08:08:41 +0000 UTC (1 container statuses recorded) Jan 31 00:34:55.750: INFO: Container kindnet-cni ready: true, restart count 0 Jan 31 00:34:55.750: INFO: kube-proxy-f59c8 from kube-system started at 2021-01-26 08:08:41 +0000 UTC (1 container statuses recorded) Jan 31 00:34:55.750: INFO: Container kube-proxy ready: true, restart count 0 Jan 31 00:34:55.750: INFO: bin-falseece8f933-8e94-4077-aef3-931ff566e18b from kubelet-test-901 started at 2021-01-31 00:34:50 +0000 UTC (1 container statuses recorded) Jan 31 00:34:55.750: INFO: Container bin-falseece8f933-8e94-4077-aef3-931ff566e18b ready: false, restart count 0 Jan 31 00:34:55.750: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Jan 31 00:34:55.756: INFO: chaos-daemon-g67vf from default started at 2021-01-26 11:59:56 +0000 UTC (1 container statuses recorded) Jan 31 00:34:55.756: INFO: Container chaos-daemon ready: true, restart count 0 Jan 31 00:34:55.756: INFO: coredns-74ff55c5b-ngxdm from kube-system started at 2021-01-27 12:43:36 +0000 UTC (1 container statuses recorded) Jan 31 00:34:55.756: INFO: Container coredns ready: true, restart count 0 Jan 31 00:34:55.756: INFO: coredns-74ff55c5b-ntztq from kube-system started at 2021-01-27 12:43:35 +0000 UTC (1 container statuses recorded) Jan 31 00:34:55.756: INFO: Container coredns ready: true, restart count 0 Jan 31 00:34:55.756: INFO: kindnet-98jtw from kube-system started at 2021-01-26 08:08:41 +0000 UTC (1 container statuses recorded) Jan 31 00:34:55.756: INFO: Container kindnet-cni ready: true, restart count 0 Jan 31 00:34:55.756: INFO: kube-proxy-skm7x from kube-system started at 2021-01-26 08:08:41 +0000 UTC (1 container 
statuses recorded) Jan 31 00:34:55.756: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 Jan 31 00:34:55.917: INFO: Pod chaos-controller-manager-69c479c674-tdrls requesting resource cpu=25m on Node latest-worker Jan 31 00:34:55.917: INFO: Pod chaos-daemon-g67vf requesting resource cpu=0m on Node latest-worker2 Jan 31 00:34:55.917: INFO: Pod chaos-daemon-vkxzr requesting resource cpu=0m on Node latest-worker Jan 31 00:34:55.917: INFO: Pod coredns-74ff55c5b-ngxdm requesting resource cpu=100m on Node latest-worker2 Jan 31 00:34:55.917: INFO: Pod coredns-74ff55c5b-ntztq requesting resource cpu=100m on Node latest-worker2 Jan 31 00:34:55.917: INFO: Pod kindnet-5bf5g requesting resource cpu=100m on Node latest-worker Jan 31 00:34:55.917: INFO: Pod kindnet-98jtw requesting resource cpu=100m on Node latest-worker2 Jan 31 00:34:55.917: INFO: Pod kube-proxy-f59c8 requesting resource cpu=0m on Node latest-worker Jan 31 00:34:55.917: INFO: Pod kube-proxy-skm7x requesting resource cpu=0m on Node latest-worker2 STEP: Starting Pods to consume most of the cluster CPU. Jan 31 00:34:55.917: INFO: Creating a pod which consumes cpu=11112m on Node latest-worker Jan 31 00:34:55.924: INFO: Creating a pod which consumes cpu=10990m on Node latest-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-7cbe5595-0c1e-4805-a427-117aea897e38.165f29ee7f26ada7], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3298/filler-pod-7cbe5595-0c1e-4805-a427-117aea897e38 to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-7cbe5595-0c1e-4805-a427-117aea897e38.165f29eed44a8ae8], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-7cbe5595-0c1e-4805-a427-117aea897e38.165f29ef2d3d5135], Reason = [Created], Message = [Created container filler-pod-7cbe5595-0c1e-4805-a427-117aea897e38] STEP: Considering event: Type = [Normal], Name = [filler-pod-7cbe5595-0c1e-4805-a427-117aea897e38.165f29ef3f2c95ce], Reason = [Started], Message = [Started container filler-pod-7cbe5595-0c1e-4805-a427-117aea897e38] STEP: Considering event: Type = [Normal], Name = [filler-pod-a5abc53f-3095-4672-8963-1086eb636030.165f29ee80c4a77f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3298/filler-pod-a5abc53f-3095-4672-8963-1086eb636030 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-a5abc53f-3095-4672-8963-1086eb636030.165f29eef02e8258], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-a5abc53f-3095-4672-8963-1086eb636030.165f29ef5efc5ecb], Reason = [Created], Message = [Created container filler-pod-a5abc53f-3095-4672-8963-1086eb636030] STEP: Considering event: Type = [Normal], Name = [filler-pod-a5abc53f-3095-4672-8963-1086eb636030.165f29ef713dad65], Reason = [Started], Message = [Started container filler-pod-a5abc53f-3095-4672-8963-1086eb636030] STEP: Considering event: Type = [Warning], Name = [additional-pod.165f29efecba8621], Reason = [FailedScheduling], Message = 
[0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:35:03.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3298" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 • [SLOW TEST:7.539 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":311,"completed":84,"skipped":1742,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:35:03.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:35:03.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5290" for this suite. 
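The secret steps above map onto plain kubectl operations. In the sketch below the secret name, label, and values are illustrative; note that data values in a patch must be base64-encoded (dmFsdWUy decodes to value2).

    kubectl create secret generic test-secret --from-literal=key=value
    kubectl label secret test-secret testsecret=true

    # A strategic-merge patch can rewrite labels and data in one call:
    kubectl patch secret test-secret -p \
      '{"metadata":{"labels":{"testsecret":"patched"}},"data":{"key":"dmFsdWUy"}}'

    # List across all namespaces by the patched label, then delete by selector,
    # mirroring the final steps of the test:
    kubectl get secrets -A -l testsecret=patched
    kubectl delete secret -l testsecret=patched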
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":311,"completed":85,"skipped":1782,"failed":0} SSS ------------------------------ [sig-instrumentation] Events API should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:35:03.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Create set of events STEP: get a list of Events with a label in the current namespace STEP: delete a list of events Jan 31 00:35:03.449: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:35:03.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-9247" for this suite. •{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":311,"completed":86,"skipped":1785,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:35:03.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0131 00:35:13.811006 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Jan 31 00:36:15.834: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:36:15.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4963" for this suite. 
• [SLOW TEST:72.359 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":311,"completed":87,"skipped":1786,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:36:15.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test downward API volume plugin Jan 31 00:36:15.961: INFO: Waiting up to 5m0s for pod "downwardapi-volume-98137181-f337-4286-84b2-41517a70566e" in namespace "downward-api-1181" to be "Succeeded or Failed" Jan 31 00:36:15.989: INFO: Pod "downwardapi-volume-98137181-f337-4286-84b2-41517a70566e": Phase="Pending", Reason="", readiness=false. Elapsed: 28.712659ms Jan 31 00:36:17.994: INFO: Pod "downwardapi-volume-98137181-f337-4286-84b2-41517a70566e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033195416s Jan 31 00:36:20.001: INFO: Pod "downwardapi-volume-98137181-f337-4286-84b2-41517a70566e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039836914s STEP: Saw pod success Jan 31 00:36:20.001: INFO: Pod "downwardapi-volume-98137181-f337-4286-84b2-41517a70566e" satisfied condition "Succeeded or Failed" Jan 31 00:36:20.003: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-98137181-f337-4286-84b2-41517a70566e container client-container: STEP: delete the pod Jan 31 00:36:20.036: INFO: Waiting for pod downwardapi-volume-98137181-f337-4286-84b2-41517a70566e to disappear Jan 31 00:36:20.049: INFO: Pod downwardapi-volume-98137181-f337-4286-84b2-41517a70566e no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:36:20.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1181" for this suite. 
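Exposing a CPU request through a downward API volume uses resourceFieldRef rather than fieldRef, and the divisor controls the unit. A sketch with illustrative names; with divisor 1m, a 250m request is rendered as 250:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: cpu-request-demo        # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
        resources:
          requests:
            cpu: 250m
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m
    EOF
    kubectl logs cpu-request-demo   # -> 250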
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":311,"completed":88,"skipped":1788,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:36:20.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test downward API volume plugin Jan 31 00:36:20.347: INFO: Waiting up to 5m0s for pod "downwardapi-volume-600419fd-30b9-4243-9aca-b90db0be2fac" in namespace "downward-api-8357" to be "Succeeded or Failed" Jan 31 00:36:20.371: INFO: Pod "downwardapi-volume-600419fd-30b9-4243-9aca-b90db0be2fac": Phase="Pending", Reason="", readiness=false. Elapsed: 23.544051ms Jan 31 00:36:22.375: INFO: Pod "downwardapi-volume-600419fd-30b9-4243-9aca-b90db0be2fac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027433316s Jan 31 00:36:24.380: INFO: Pod "downwardapi-volume-600419fd-30b9-4243-9aca-b90db0be2fac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032530575s STEP: Saw pod success Jan 31 00:36:24.380: INFO: Pod "downwardapi-volume-600419fd-30b9-4243-9aca-b90db0be2fac" satisfied condition "Succeeded or Failed" Jan 31 00:36:24.383: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-600419fd-30b9-4243-9aca-b90db0be2fac container client-container: STEP: delete the pod Jan 31 00:36:24.444: INFO: Waiting for pod downwardapi-volume-600419fd-30b9-4243-9aca-b90db0be2fac to disappear Jan 31 00:36:24.471: INFO: Pod downwardapi-volume-600419fd-30b9-4243-9aca-b90db0be2fac no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:36:24.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8357" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":311,"completed":89,"skipped":1806,"failed":0} SSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:36:24.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Jan 31 00:36:29.668: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:36:29.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9932" for this suite. • [SLOW TEST:5.332 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":311,"completed":90,"skipped":1809,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:36:29.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating Pod STEP: Reading file content from the nginx-container Jan 31 00:36:35.948: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-1174 PodName:pod-sharedvolume-27db4b33-d184-4954-a636-613158f68384 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 31 00:36:35.948: INFO: >>> kubeConfig: /root/.kube/config I0131 00:36:36.028205 7 log.go:181] (0xc0004fd080) (0xc000665040) Create 
stream I0131 00:36:36.028240 7 log.go:181] (0xc0004fd080) (0xc000665040) Stream added, broadcasting: 1 I0131 00:36:36.030624 7 log.go:181] (0xc0004fd080) Reply frame received for 1 I0131 00:36:36.030707 7 log.go:181] (0xc0004fd080) (0xc0048b20a0) Create stream I0131 00:36:36.030728 7 log.go:181] (0xc0004fd080) (0xc0048b20a0) Stream added, broadcasting: 3 I0131 00:36:36.031781 7 log.go:181] (0xc0004fd080) Reply frame received for 3 I0131 00:36:36.031818 7 log.go:181] (0xc0004fd080) (0xc0048b23c0) Create stream I0131 00:36:36.031832 7 log.go:181] (0xc0004fd080) (0xc0048b23c0) Stream added, broadcasting: 5 I0131 00:36:36.032688 7 log.go:181] (0xc0004fd080) Reply frame received for 5 I0131 00:36:36.096102 7 log.go:181] (0xc0004fd080) Data frame received for 5 I0131 00:36:36.096129 7 log.go:181] (0xc0048b23c0) (5) Data frame handling I0131 00:36:36.096176 7 log.go:181] (0xc0004fd080) Data frame received for 3 I0131 00:36:36.096216 7 log.go:181] (0xc0048b20a0) (3) Data frame handling I0131 00:36:36.096234 7 log.go:181] (0xc0048b20a0) (3) Data frame sent I0131 00:36:36.096250 7 log.go:181] (0xc0004fd080) Data frame received for 3 I0131 00:36:36.096258 7 log.go:181] (0xc0048b20a0) (3) Data frame handling I0131 00:36:36.097312 7 log.go:181] (0xc0004fd080) Data frame received for 1 I0131 00:36:36.097332 7 log.go:181] (0xc000665040) (1) Data frame handling I0131 00:36:36.097343 7 log.go:181] (0xc000665040) (1) Data frame sent I0131 00:36:36.097353 7 log.go:181] (0xc0004fd080) (0xc000665040) Stream removed, broadcasting: 1 I0131 00:36:36.097367 7 log.go:181] (0xc0004fd080) Go away received I0131 00:36:36.097437 7 log.go:181] (0xc0004fd080) (0xc000665040) Stream removed, broadcasting: 1 I0131 00:36:36.097455 7 log.go:181] (0xc0004fd080) (0xc0048b20a0) Stream removed, broadcasting: 3 I0131 00:36:36.097468 7 log.go:181] (0xc0004fd080) (0xc0048b23c0) Stream removed, broadcasting: 5 Jan 31 00:36:36.097: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:36:36.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1174" for this suite. 
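The shared-volume pod above has one container that writes into an emptyDir and another that only mounts it; kubectl exec then plays the role of the framework's ExecWithOptions call. A sketch with illustrative names, keeping the suite's /usr/share/volumeshare/shareddata.txt path:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: shared-volume-demo      # illustrative name
    spec:
      volumes:
      - name: shared-data
        emptyDir: {}
      containers:
      - name: writer
        image: busybox
        command: ["sh", "-c", "echo 'Hello from the writer' > /pod-data/shareddata.txt && sleep 3600"]
        volumeMounts:
        - name: shared-data
          mountPath: /pod-data
      - name: reader
        image: busybox
        command: ["sleep", "3600"]
        volumeMounts:
        - name: shared-data
          mountPath: /usr/share/volumeshare
    EOF
    kubectl wait --for=condition=Ready pod/shared-volume-demo --timeout=120s
    kubectl exec shared-volume-demo -c reader -- cat /usr/share/volumeshare/shareddata.txt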
• [SLOW TEST:6.453 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45 pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":311,"completed":91,"skipped":1812,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:36:36.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating service in namespace services-5225 Jan 31 00:36:40.509: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-5225 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Jan 31 00:36:40.761: INFO: stderr: "I0131 00:36:40.659957 517 log.go:181] (0xc000a3e000) (0xc000c5c1e0) Create stream\nI0131 00:36:40.660012 517 log.go:181] (0xc000a3e000) (0xc000c5c1e0) Stream added, broadcasting: 1\nI0131 00:36:40.661801 517 log.go:181] (0xc000a3e000) Reply frame received for 1\nI0131 00:36:40.661872 517 log.go:181] (0xc000a3e000) (0xc000c5c280) Create stream\nI0131 00:36:40.661895 517 log.go:181] (0xc000a3e000) (0xc000c5c280) Stream added, broadcasting: 3\nI0131 00:36:40.662857 517 log.go:181] (0xc000a3e000) Reply frame received for 3\nI0131 00:36:40.662890 517 log.go:181] (0xc000a3e000) (0xc000c5c320) Create stream\nI0131 00:36:40.662898 517 log.go:181] (0xc000a3e000) (0xc000c5c320) Stream added, broadcasting: 5\nI0131 00:36:40.663824 517 log.go:181] (0xc000a3e000) Reply frame received for 5\nI0131 00:36:40.747913 517 log.go:181] (0xc000a3e000) Data frame received for 5\nI0131 00:36:40.747952 517 log.go:181] (0xc000c5c320) (5) Data frame handling\nI0131 00:36:40.747987 517 log.go:181] (0xc000c5c320) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0131 00:36:40.751898 517 log.go:181] (0xc000a3e000) Data frame received for 3\nI0131 00:36:40.751918 517 log.go:181] (0xc000c5c280) (3) Data frame handling\nI0131 00:36:40.751935 517 log.go:181] (0xc000c5c280) (3) Data frame sent\nI0131 00:36:40.752775 517 log.go:181] (0xc000a3e000) Data frame received for 3\nI0131 00:36:40.752800 517 log.go:181] (0xc000c5c280) (3) Data frame handling\nI0131 00:36:40.753068 517 log.go:181] (0xc000a3e000) Data frame received for 5\nI0131 00:36:40.753099 517 log.go:181] (0xc000c5c320) (5) Data frame handling\nI0131 
00:36:40.754832 517 log.go:181] (0xc000a3e000) Data frame received for 1\nI0131 00:36:40.754873 517 log.go:181] (0xc000c5c1e0) (1) Data frame handling\nI0131 00:36:40.754928 517 log.go:181] (0xc000c5c1e0) (1) Data frame sent\nI0131 00:36:40.754967 517 log.go:181] (0xc000a3e000) (0xc000c5c1e0) Stream removed, broadcasting: 1\nI0131 00:36:40.754997 517 log.go:181] (0xc000a3e000) Go away received\nI0131 00:36:40.755371 517 log.go:181] (0xc000a3e000) (0xc000c5c1e0) Stream removed, broadcasting: 1\nI0131 00:36:40.755400 517 log.go:181] (0xc000a3e000) (0xc000c5c280) Stream removed, broadcasting: 3\nI0131 00:36:40.755416 517 log.go:181] (0xc000a3e000) (0xc000c5c320) Stream removed, broadcasting: 5\n" Jan 31 00:36:40.761: INFO: stdout: "iptables" Jan 31 00:36:40.761: INFO: proxyMode: iptables Jan 31 00:36:40.783: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jan 31 00:36:40.795: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-5225 STEP: creating replication controller affinity-nodeport-timeout in namespace services-5225 I0131 00:36:40.843838 7 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-5225, replica count: 3 I0131 00:36:43.894210 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0131 00:36:46.894558 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0131 00:36:49.894833 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 31 00:36:49.906: INFO: Creating new exec pod Jan 31 00:36:54.933: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-5225 exec execpod-affinity56h6v -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' Jan 31 00:36:55.174: INFO: stderr: "I0131 00:36:55.070858 535 log.go:181] (0xc0000dc000) (0xc000828000) Create stream\nI0131 00:36:55.070941 535 log.go:181] (0xc0000dc000) (0xc000828000) Stream added, broadcasting: 1\nI0131 00:36:55.072826 535 log.go:181] (0xc0000dc000) Reply frame received for 1\nI0131 00:36:55.072991 535 log.go:181] (0xc0000dc000) (0xc0008280a0) Create stream\nI0131 00:36:55.073007 535 log.go:181] (0xc0000dc000) (0xc0008280a0) Stream added, broadcasting: 3\nI0131 00:36:55.073970 535 log.go:181] (0xc0000dc000) Reply frame received for 3\nI0131 00:36:55.074006 535 log.go:181] (0xc0000dc000) (0xc00071e0a0) Create stream\nI0131 00:36:55.074017 535 log.go:181] (0xc0000dc000) (0xc00071e0a0) Stream added, broadcasting: 5\nI0131 00:36:55.075247 535 log.go:181] (0xc0000dc000) Reply frame received for 5\nI0131 00:36:55.149918 535 log.go:181] (0xc0000dc000) Data frame received for 5\nI0131 00:36:55.149949 535 log.go:181] (0xc00071e0a0) (5) Data frame handling\nI0131 00:36:55.149969 535 log.go:181] (0xc00071e0a0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nI0131 00:36:55.165685 535 log.go:181] (0xc0000dc000) Data frame received for 5\nI0131 00:36:55.165727 535 log.go:181] (0xc00071e0a0) (5) Data frame handling\nI0131 00:36:55.165764 535 log.go:181] (0xc00071e0a0) (5) Data frame sent\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI0131 00:36:55.166094 535 
log.go:181] (0xc0000dc000) Data frame received for 5\nI0131 00:36:55.166131 535 log.go:181] (0xc00071e0a0) (5) Data frame handling\nI0131 00:36:55.166197 535 log.go:181] (0xc0000dc000) Data frame received for 3\nI0131 00:36:55.166216 535 log.go:181] (0xc0008280a0) (3) Data frame handling\nI0131 00:36:55.168341 535 log.go:181] (0xc0000dc000) Data frame received for 1\nI0131 00:36:55.168388 535 log.go:181] (0xc000828000) (1) Data frame handling\nI0131 00:36:55.168424 535 log.go:181] (0xc000828000) (1) Data frame sent\nI0131 00:36:55.168453 535 log.go:181] (0xc0000dc000) (0xc000828000) Stream removed, broadcasting: 1\nI0131 00:36:55.168489 535 log.go:181] (0xc0000dc000) Go away received\nI0131 00:36:55.168809 535 log.go:181] (0xc0000dc000) (0xc000828000) Stream removed, broadcasting: 1\nI0131 00:36:55.168831 535 log.go:181] (0xc0000dc000) (0xc0008280a0) Stream removed, broadcasting: 3\nI0131 00:36:55.168922 535 log.go:181] (0xc0000dc000) (0xc00071e0a0) Stream removed, broadcasting: 5\n" Jan 31 00:36:55.174: INFO: stdout: "" Jan 31 00:36:55.175: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-5225 exec execpod-affinity56h6v -- /bin/sh -x -c nc -zv -t -w 2 10.96.252.219 80' Jan 31 00:36:55.378: INFO: stderr: "I0131 00:36:55.311558 553 log.go:181] (0xc00003abb0) (0xc00079c1e0) Create stream\nI0131 00:36:55.311632 553 log.go:181] (0xc00003abb0) (0xc00079c1e0) Stream added, broadcasting: 1\nI0131 00:36:55.313963 553 log.go:181] (0xc00003abb0) Reply frame received for 1\nI0131 00:36:55.313999 553 log.go:181] (0xc00003abb0) (0xc000ce2000) Create stream\nI0131 00:36:55.314010 553 log.go:181] (0xc00003abb0) (0xc000ce2000) Stream added, broadcasting: 3\nI0131 00:36:55.314919 553 log.go:181] (0xc00003abb0) Reply frame received for 3\nI0131 00:36:55.314949 553 log.go:181] (0xc00003abb0) (0xc00079c280) Create stream\nI0131 00:36:55.314959 553 log.go:181] (0xc00003abb0) (0xc00079c280) Stream added, broadcasting: 5\nI0131 00:36:55.315927 553 log.go:181] (0xc00003abb0) Reply frame received for 5\nI0131 00:36:55.369288 553 log.go:181] (0xc00003abb0) Data frame received for 5\nI0131 00:36:55.369318 553 log.go:181] (0xc00079c280) (5) Data frame handling\nI0131 00:36:55.369341 553 log.go:181] (0xc00079c280) (5) Data frame sent\nI0131 00:36:55.369352 553 log.go:181] (0xc00003abb0) Data frame received for 5\nI0131 00:36:55.369362 553 log.go:181] (0xc00079c280) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.252.219 80\nConnection to 10.96.252.219 80 port [tcp/http] succeeded!\nI0131 00:36:55.369463 553 log.go:181] (0xc00003abb0) Data frame received for 3\nI0131 00:36:55.369509 553 log.go:181] (0xc000ce2000) (3) Data frame handling\nI0131 00:36:55.372579 553 log.go:181] (0xc00003abb0) Data frame received for 1\nI0131 00:36:55.372614 553 log.go:181] (0xc00079c1e0) (1) Data frame handling\nI0131 00:36:55.372643 553 log.go:181] (0xc00079c1e0) (1) Data frame sent\nI0131 00:36:55.372671 553 log.go:181] (0xc00003abb0) (0xc00079c1e0) Stream removed, broadcasting: 1\nI0131 00:36:55.372706 553 log.go:181] (0xc00003abb0) Go away received\nI0131 00:36:55.373113 553 log.go:181] (0xc00003abb0) (0xc00079c1e0) Stream removed, broadcasting: 1\nI0131 00:36:55.373214 553 log.go:181] (0xc00003abb0) (0xc000ce2000) Stream removed, broadcasting: 3\nI0131 00:36:55.373261 553 log.go:181] (0xc00003abb0) (0xc00079c280) Stream removed, broadcasting: 5\n" Jan 31 00:36:55.378: INFO: stdout: "" Jan 31 00:36:55.379: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-5225 exec execpod-affinity56h6v -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 31203' Jan 31 00:36:55.576: INFO: stderr: "I0131 00:36:55.495878 571 log.go:181] (0xc000a62000) (0xc000ada1e0) Create stream\nI0131 00:36:55.495928 571 log.go:181] (0xc000a62000) (0xc000ada1e0) Stream added, broadcasting: 1\nI0131 00:36:55.497780 571 log.go:181] (0xc000a62000) Reply frame received for 1\nI0131 00:36:55.497831 571 log.go:181] (0xc000a62000) (0xc000322640) Create stream\nI0131 00:36:55.497845 571 log.go:181] (0xc000a62000) (0xc000322640) Stream added, broadcasting: 3\nI0131 00:36:55.498947 571 log.go:181] (0xc000a62000) Reply frame received for 3\nI0131 00:36:55.498986 571 log.go:181] (0xc000a62000) (0xc0008361e0) Create stream\nI0131 00:36:55.498997 571 log.go:181] (0xc000a62000) (0xc0008361e0) Stream added, broadcasting: 5\nI0131 00:36:55.500023 571 log.go:181] (0xc000a62000) Reply frame received for 5\nI0131 00:36:55.569274 571 log.go:181] (0xc000a62000) Data frame received for 5\nI0131 00:36:55.569318 571 log.go:181] (0xc0008361e0) (5) Data frame handling\nI0131 00:36:55.569337 571 log.go:181] (0xc0008361e0) (5) Data frame sent\nI0131 00:36:55.569345 571 log.go:181] (0xc000a62000) Data frame received for 5\nI0131 00:36:55.569351 571 log.go:181] (0xc0008361e0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.14 31203\nConnection to 172.18.0.14 31203 port [tcp/31203] succeeded!\nI0131 00:36:55.569369 571 log.go:181] (0xc0008361e0) (5) Data frame sent\nI0131 00:36:55.569737 571 log.go:181] (0xc000a62000) Data frame received for 5\nI0131 00:36:55.569748 571 log.go:181] (0xc0008361e0) (5) Data frame handling\nI0131 00:36:55.569949 571 log.go:181] (0xc000a62000) Data frame received for 3\nI0131 00:36:55.569976 571 log.go:181] (0xc000322640) (3) Data frame handling\nI0131 00:36:55.571219 571 log.go:181] (0xc000a62000) Data frame received for 1\nI0131 00:36:55.571282 571 log.go:181] (0xc000ada1e0) (1) Data frame handling\nI0131 00:36:55.571310 571 log.go:181] (0xc000ada1e0) (1) Data frame sent\nI0131 00:36:55.571330 571 log.go:181] (0xc000a62000) (0xc000ada1e0) Stream removed, broadcasting: 1\nI0131 00:36:55.571393 571 log.go:181] (0xc000a62000) Go away received\nI0131 00:36:55.571609 571 log.go:181] (0xc000a62000) (0xc000ada1e0) Stream removed, broadcasting: 1\nI0131 00:36:55.571633 571 log.go:181] (0xc000a62000) (0xc000322640) Stream removed, broadcasting: 3\nI0131 00:36:55.571645 571 log.go:181] (0xc000a62000) (0xc0008361e0) Stream removed, broadcasting: 5\n" Jan 31 00:36:55.576: INFO: stdout: "" Jan 31 00:36:55.576: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-5225 exec execpod-affinity56h6v -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.16 31203' Jan 31 00:36:55.795: INFO: stderr: "I0131 00:36:55.718406 589 log.go:181] (0xc000c80000) (0xc0005781e0) Create stream\nI0131 00:36:55.718472 589 log.go:181] (0xc000c80000) (0xc0005781e0) Stream added, broadcasting: 1\nI0131 00:36:55.721103 589 log.go:181] (0xc000c80000) Reply frame received for 1\nI0131 00:36:55.721196 589 log.go:181] (0xc000c80000) (0xc000b92000) Create stream\nI0131 00:36:55.721229 589 log.go:181] (0xc000c80000) (0xc000b92000) Stream added, broadcasting: 3\nI0131 00:36:55.722299 589 log.go:181] (0xc000c80000) Reply frame received for 3\nI0131 00:36:55.722358 589 log.go:181] (0xc000c80000) (0xc00100c000) Create stream\nI0131 00:36:55.722383 589 log.go:181] (0xc000c80000) 
(0xc00100c000) Stream added, broadcasting: 5\nI0131 00:36:55.723392 589 log.go:181] (0xc000c80000) Reply frame received for 5\nI0131 00:36:55.789107 589 log.go:181] (0xc000c80000) Data frame received for 5\nI0131 00:36:55.789229 589 log.go:181] (0xc00100c000) (5) Data frame handling\nI0131 00:36:55.789292 589 log.go:181] (0xc00100c000) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.16 31203\nI0131 00:36:55.789893 589 log.go:181] (0xc000c80000) Data frame received for 3\nI0131 00:36:55.789969 589 log.go:181] (0xc000b92000) (3) Data frame handling\nI0131 00:36:55.790035 589 log.go:181] (0xc000c80000) Data frame received for 5\nI0131 00:36:55.790123 589 log.go:181] (0xc00100c000) (5) Data frame handling\nI0131 00:36:55.790144 589 log.go:181] (0xc00100c000) (5) Data frame sent\nI0131 00:36:55.790152 589 log.go:181] (0xc000c80000) Data frame received for 5\nI0131 00:36:55.790158 589 log.go:181] (0xc00100c000) (5) Data frame handling\nConnection to 172.18.0.16 31203 port [tcp/31203] succeeded!\nI0131 00:36:55.791363 589 log.go:181] (0xc000c80000) Data frame received for 1\nI0131 00:36:55.791381 589 log.go:181] (0xc0005781e0) (1) Data frame handling\nI0131 00:36:55.791390 589 log.go:181] (0xc0005781e0) (1) Data frame sent\nI0131 00:36:55.791401 589 log.go:181] (0xc000c80000) (0xc0005781e0) Stream removed, broadcasting: 1\nI0131 00:36:55.791422 589 log.go:181] (0xc000c80000) Go away received\nI0131 00:36:55.791769 589 log.go:181] (0xc000c80000) (0xc0005781e0) Stream removed, broadcasting: 1\nI0131 00:36:55.791789 589 log.go:181] (0xc000c80000) (0xc000b92000) Stream removed, broadcasting: 3\nI0131 00:36:55.791799 589 log.go:181] (0xc000c80000) (0xc00100c000) Stream removed, broadcasting: 5\n" Jan 31 00:36:55.796: INFO: stdout: "" Jan 31 00:36:55.796: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-5225 exec execpod-affinity56h6v -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.14:31203/ ; done' Jan 31 00:36:56.123: INFO: stderr: "I0131 00:36:55.925043 608 log.go:181] (0xc0000ab130) (0xc0008121e0) Create stream\nI0131 00:36:55.925099 608 log.go:181] (0xc0000ab130) (0xc0008121e0) Stream added, broadcasting: 1\nI0131 00:36:55.927451 608 log.go:181] (0xc0000ab130) Reply frame received for 1\nI0131 00:36:55.927494 608 log.go:181] (0xc0000ab130) (0xc000318000) Create stream\nI0131 00:36:55.927512 608 log.go:181] (0xc0000ab130) (0xc000318000) Stream added, broadcasting: 3\nI0131 00:36:55.928605 608 log.go:181] (0xc0000ab130) Reply frame received for 3\nI0131 00:36:55.928668 608 log.go:181] (0xc0000ab130) (0xc00073f7c0) Create stream\nI0131 00:36:55.928690 608 log.go:181] (0xc0000ab130) (0xc00073f7c0) Stream added, broadcasting: 5\nI0131 00:36:55.929803 608 log.go:181] (0xc0000ab130) Reply frame received for 5\nI0131 00:36:56.001601 608 log.go:181] (0xc0000ab130) Data frame received for 5\nI0131 00:36:56.001634 608 log.go:181] (0xc00073f7c0) (5) Data frame handling\nI0131 00:36:56.001642 608 log.go:181] (0xc00073f7c0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31203/\nI0131 00:36:56.001669 608 log.go:181] (0xc0000ab130) Data frame received for 3\nI0131 00:36:56.001694 608 log.go:181] (0xc000318000) (3) Data frame handling\nI0131 00:36:56.001712 608 log.go:181] (0xc000318000) (3) Data frame sent\nI0131 00:36:56.025534 608 log.go:181] (0xc0000ab130) Data frame received for 3\nI0131 00:36:56.025579 608 log.go:181] (0xc000318000) (3) Data 
frame handling\nI0131 00:36:56.025662 608 log.go:181] (0xc000318000) (3) Data frame sent\nI0131 00:36:56.025871 608 log.go:181] (0xc0000ab130) Data frame received for 3\nI0131 00:36:56.025894 608 log.go:181] (0xc000318000) (3) Data frame handling\nI0131 00:36:56.025907 608 log.go:181] (0xc000318000) (3) Data frame sent\nI0131 00:36:56.025925 608 log.go:181] (0xc0000ab130) Data frame received for 5\nI0131 00:36:56.025935 608 log.go:181] (0xc00073f7c0) (5) Data frame handling\nI0131 00:36:56.025945 608 log.go:181] (0xc00073f7c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31203/\nI0131 00:36:56.034249 608 log.go:181] (0xc0000ab130) Data frame received for 3\nI0131 00:36:56.034267 608 log.go:181] (0xc000318000) (3) Data frame handling\nI0131 00:36:56.034291 608 log.go:181] (0xc000318000) (3) Data frame sent\nI0131 00:36:56.035143 608 log.go:181] (0xc0000ab130) Data frame received for 3\nI0131 00:36:56.035186 608 log.go:181] (0xc000318000) (3) Data frame handling\nI0131 00:36:56.035201 608 log.go:181] (0xc000318000) (3) Data frame sent\nI0131 00:36:56.035221 608 log.go:181] (0xc0000ab130) Data frame received for 5\nI0131 00:36:56.035233 608 log.go:181] (0xc00073f7c0) (5) Data frame handling\nI0131 00:36:56.035252 608 log.go:181] (0xc00073f7c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31203/\nI0131 00:36:56.042824 608 log.go:181] (0xc0000ab130) Data frame received for 3\nI0131 00:36:56.042853 608 log.go:181] (0xc000318000) (3) Data frame handling\nI0131 00:36:56.042873 608 log.go:181] (0xc000318000) (3) Data frame sent\nI0131 00:36:56.043294 608 log.go:181] (0xc0000ab130) Data frame received for 3\nI0131 00:36:56.043305 608 log.go:181] (0xc000318000) (3) Data frame handling\nI0131 00:36:56.043311 608 log.go:181] (0xc000318000) (3) Data frame sent\nI0131 00:36:56.043371 608 log.go:181] (0xc0000ab130) Data frame received for 5\nI0131 00:36:56.043385 608 log.go:181] (0xc00073f7c0) (5) Data frame handling\nI0131 00:36:56.043397 608 log.go:181] (0xc00073f7c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31203/\nI0131 00:36:56.050917 608 log.go:181] (0xc0000ab130) Data frame received for 3\nI0131 00:36:56.050938 608 log.go:181] (0xc000318000) (3) Data frame handling\nI0131 00:36:56.050961 608 log.go:181] (0xc000318000) (3) Data frame sent\nI0131 00:36:56.051906 608 log.go:181] (0xc0000ab130) Data frame received for 3\nI0131 00:36:56.051940 608 log.go:181] (0xc000318000) (3) Data frame handling\nI0131 00:36:56.051955 608 log.go:181] (0xc000318000) (3) Data frame sent\nI0131 00:36:56.051975 608 log.go:181] (0xc0000ab130) Data frame received for 5\nI0131 00:36:56.051986 608 log.go:181] (0xc00073f7c0) (5) Data frame handling\nI0131 00:36:56.051997 608 log.go:181] (0xc00073f7c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31203/\nI0131 00:36:56.055388 608 log.go:181] (0xc0000ab130) Data frame received for 3\nI0131 00:36:56.055412 608 log.go:181] (0xc000318000) (3) Data frame handling\nI0131 00:36:56.055433 608 log.go:181] (0xc000318000) (3) Data frame sent\nI0131 00:36:56.055874 608 log.go:181] (0xc0000ab130) Data frame received for 3\nI0131 00:36:56.055913 608 log.go:181] (0xc000318000) (3) Data frame handling\nI0131 00:36:56.055929 608 log.go:181] (0xc000318000) (3) Data frame sent\nI0131 00:36:56.055947 608 log.go:181] (0xc0000ab130) Data frame received for 5\nI0131 00:36:56.055958 608 log.go:181] (0xc00073f7c0) (5) Data frame handling\nI0131 
00:36:56.055982 608 log.go:181] (0xc00073f7c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31203/\nI0131 00:36:56.063107 608 log.go:181] (0xc0000ab130) Data frame received for 3\nI0131 00:36:56.063125 608 log.go:181] (0xc000318000) (3) Data frame handling\nI0131 00:36:56.063146 608 log.go:181] (0xc000318000) (3) Data frame sent\nI0131 00:36:56.063882 608 log.go:181] (0xc0000ab130) Data frame received for 5\nI0131 00:36:56.063912 608 log.go:181] (0xc00073f7c0) (5) Data frame handling\nI0131 00:36:56.063922 608 log.go:181] (0xc00073f7c0) (5) Data frame sent\nI0131 00:36:56.063931 608 log.go:181] (0xc0000ab130) Data frame received for 5\nI0131 00:36:56.063939 608 log.go:181] (0xc00073f7c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31203/\nI0131 00:36:56.063958 608 log.go:181] (0xc00073f7c0) (5) Data frame sent\nI0131 00:36:56.063975 608 log.go:181] (0xc0000ab130) Data frame received for 3\nI0131 00:36:56.063983 608 log.go:181] (0xc000318000) (3) Data frame handling\nI0131 00:36:56.063996 608 log.go:181] (0xc000318000) (3) Data frame sent\nI0131 00:36:56.067880 608 log.go:181] (0xc0000ab130) Data frame received for 3\nI0131 00:36:56.067911 608 log.go:181] (0xc000318000) (3) Data frame handling\nI0131 00:36:56.067944 608 log.go:181] (0xc000318000) (3) Data frame sent\nI0131 00:36:56.068960 608 log.go:181] (0xc0000ab130) Data frame received for 5\nI0131 00:36:56.068988 608 log.go:181] (0xc00073f7c0) (5) Data frame handling\nI0131 00:36:56.069010 608 log.go:181] (0xc00073f7c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31203/\nI0131 00:36:56.069035 608 log.go:181] (0xc0000ab130) Data frame received for 3\nI0131 00:36:56.069050 608 log.go:181] (0xc000318000) (3) Data frame handling\nI0131 00:36:56.069065 608 log.go:181] (0xc000318000) (3) Data frame sent\nI0131 00:36:56.075114 608 log.go:181] (0xc0000ab130) Data frame received for 3\nI0131 00:36:56.075144 608 log.go:181] (0xc000318000) (3) Data frame handling\nI0131 00:36:56.075162 608 log.go:181] (0xc000318000) (3) Data frame sent\nI0131 00:36:56.076083 608 log.go:181] (0xc0000ab130) Data frame received for 3\nI0131 00:36:56.076118 608 log.go:181] (0xc0000ab130) Data frame received for 5\nI0131 00:36:56.076143 608 log.go:181] (0xc00073f7c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31203/I0131 00:36:56.076174 608 log.go:181] (0xc000318000) (3) Data frame handling\nI0131 00:36:56.076210 608 log.go:181] (0xc00073f7c0) (5) Data frame sent\nI0131 00:36:56.076243 608 log.go:181] (0xc0000ab130) Data frame received for 5\nI0131 00:36:56.076262 608 log.go:181] (0xc00073f7c0) (5) Data frame handling\nI0131 00:36:56.076278 608 log.go:181] (0xc00073f7c0) (5) Data frame sent\n\nI0131 00:36:56.076307 608 log.go:181] (0xc000318000) (3) Data frame sent\nI0131 00:36:56.082395 608 log.go:181] (0xc0000ab130) Data frame received for 3\nI0131 00:36:56.082412 608 log.go:181] (0xc000318000) (3) Data frame handling\nI0131 00:36:56.082423 608 log.go:181] (0xc000318000) (3) Data frame sent\nI0131 00:36:56.082927 608 log.go:181] (0xc0000ab130) Data frame received for 3\nI0131 00:36:56.082955 608 log.go:181] (0xc0000ab130) Data frame received for 5\nI0131 00:36:56.082989 608 log.go:181] (0xc00073f7c0) (5) Data frame handling\nI0131 00:36:56.083005 608 log.go:181] (0xc00073f7c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31203/\nI0131 00:36:56.083023 608 log.go:181] 
(0xc000318000) (3) Data frame handling\nI0131 00:36:56.083034 608 log.go:181] (0xc000318000) (3) Data frame sent\nI0131 00:36:56.086882 608 log.go:181] (0xc0000ab130) Data frame received for 3\nI0131 00:36:56.086906 608 log.go:181] (0xc000318000) (3) Data frame handling\nI0131 00:36:56.086915 608 log.go:181] (0xc000318000) (3) Data frame sent\nI0131 00:36:56.087337 608 log.go:181] (0xc0000ab130) Data frame received for 5\nI0131 00:36:56.087353 608 log.go:181] (0xc00073f7c0) (5) Data frame handling\nI0131 00:36:56.087365 608 log.go:181] (0xc00073f7c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31203/\nI0131 00:36:56.087379 608 log.go:181] (0xc0000ab130) Data frame received for 3\nI0131 00:36:56.087402 608 log.go:181] (0xc000318000) (3) Data frame handling\nI0131 00:36:56.087412 608 log.go:181] (0xc000318000) (3) Data frame sent\nI0131 00:36:56.091021 608 log.go:181] (0xc0000ab130) Data frame received for 3\nI0131 00:36:56.091044 608 log.go:181] (0xc000318000) (3) Data frame handling\nI0131 00:36:56.091073 608 log.go:181] (0xc000318000) (3) Data frame sent\nI0131 00:36:56.091677 608 log.go:181] (0xc0000ab130) Data frame received for 3\nI0131 00:36:56.091697 608 log.go:181] (0xc000318000) (3) Data frame handling\nI0131 00:36:56.091714 608 log.go:181] (0xc000318000) (3) Data frame sent\nI0131 00:36:56.091730 608 log.go:181] (0xc0000ab130) Data frame received for 5\nI0131 00:36:56.091741 608 log.go:181] (0xc00073f7c0) (5) Data frame handling\nI0131 00:36:56.091753 608 log.go:181] (0xc00073f7c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31203/\nI0131 00:36:56.096399 608 log.go:181] (0xc0000ab130) Data frame received for 3\nI0131 00:36:56.096422 608 log.go:181] (0xc000318000) (3) Data frame handling\nI0131 00:36:56.096434 608 log.go:181] (0xc000318000) (3) Data frame sent\nI0131 00:36:56.096826 608 log.go:181] (0xc0000ab130) Data frame received for 5\nI0131 00:36:56.096909 608 log.go:181] (0xc00073f7c0) (5) Data frame handling\nI0131 00:36:56.096921 608 log.go:181] (0xc00073f7c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31203/\nI0131 00:36:56.096931 608 log.go:181] (0xc0000ab130) Data frame received for 3\nI0131 00:36:56.096936 608 log.go:181] (0xc000318000) (3) Data frame handling\nI0131 00:36:56.096941 608 log.go:181] (0xc000318000) (3) Data frame sent\nI0131 00:36:56.101145 608 log.go:181] (0xc0000ab130) Data frame received for 3\nI0131 00:36:56.101162 608 log.go:181] (0xc000318000) (3) Data frame handling\nI0131 00:36:56.101179 608 log.go:181] (0xc000318000) (3) Data frame sent\nI0131 00:36:56.101474 608 log.go:181] (0xc0000ab130) Data frame received for 3\nI0131 00:36:56.101491 608 log.go:181] (0xc000318000) (3) Data frame handling\nI0131 00:36:56.101499 608 log.go:181] (0xc000318000) (3) Data frame sent\nI0131 00:36:56.101510 608 log.go:181] (0xc0000ab130) Data frame received for 5\nI0131 00:36:56.101515 608 log.go:181] (0xc00073f7c0) (5) Data frame handling\nI0131 00:36:56.101521 608 log.go:181] (0xc00073f7c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31203/\nI0131 00:36:56.105460 608 log.go:181] (0xc0000ab130) Data frame received for 3\nI0131 00:36:56.105495 608 log.go:181] (0xc000318000) (3) Data frame handling\nI0131 00:36:56.105526 608 log.go:181] (0xc000318000) (3) Data frame sent\nI0131 00:36:56.105914 608 log.go:181] (0xc0000ab130) Data frame received for 3\nI0131 00:36:56.105945 608 log.go:181] (0xc000318000) (3) Data frame 
handling\nI0131 00:36:56.105959 608 log.go:181] (0xc000318000) (3) Data frame sent\nI0131 00:36:56.105980 608 log.go:181] (0xc0000ab130) Data frame received for 5\nI0131 00:36:56.105988 608 log.go:181] (0xc00073f7c0) (5) Data frame handling\nI0131 00:36:56.106001 608 log.go:181] (0xc00073f7c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31203/\nI0131 00:36:56.111229 608 log.go:181] (0xc0000ab130) Data frame received for 3\nI0131 00:36:56.111264 608 log.go:181] (0xc000318000) (3) Data frame handling\nI0131 00:36:56.111282 608 log.go:181] (0xc000318000) (3) Data frame sent\nI0131 00:36:56.111744 608 log.go:181] (0xc0000ab130) Data frame received for 5\nI0131 00:36:56.111760 608 log.go:181] (0xc00073f7c0) (5) Data frame handling\nI0131 00:36:56.111770 608 log.go:181] (0xc00073f7c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31203/\nI0131 00:36:56.111816 608 log.go:181] (0xc0000ab130) Data frame received for 3\nI0131 00:36:56.111843 608 log.go:181] (0xc000318000) (3) Data frame handling\nI0131 00:36:56.111861 608 log.go:181] (0xc000318000) (3) Data frame sent\nI0131 00:36:56.115541 608 log.go:181] (0xc0000ab130) Data frame received for 3\nI0131 00:36:56.115569 608 log.go:181] (0xc000318000) (3) Data frame handling\nI0131 00:36:56.115595 608 log.go:181] (0xc000318000) (3) Data frame sent\nI0131 00:36:56.116250 608 log.go:181] (0xc0000ab130) Data frame received for 5\nI0131 00:36:56.116277 608 log.go:181] (0xc00073f7c0) (5) Data frame handling\nI0131 00:36:56.116436 608 log.go:181] (0xc0000ab130) Data frame received for 3\nI0131 00:36:56.116451 608 log.go:181] (0xc000318000) (3) Data frame handling\nI0131 00:36:56.118211 608 log.go:181] (0xc0000ab130) Data frame received for 1\nI0131 00:36:56.118280 608 log.go:181] (0xc0008121e0) (1) Data frame handling\nI0131 00:36:56.118339 608 log.go:181] (0xc0008121e0) (1) Data frame sent\nI0131 00:36:56.118362 608 log.go:181] (0xc0000ab130) (0xc0008121e0) Stream removed, broadcasting: 1\nI0131 00:36:56.118378 608 log.go:181] (0xc0000ab130) Go away received\nI0131 00:36:56.118829 608 log.go:181] (0xc0000ab130) (0xc0008121e0) Stream removed, broadcasting: 1\nI0131 00:36:56.118854 608 log.go:181] (0xc0000ab130) (0xc000318000) Stream removed, broadcasting: 3\nI0131 00:36:56.118865 608 log.go:181] (0xc0000ab130) (0xc00073f7c0) Stream removed, broadcasting: 5\n" Jan 31 00:36:56.124: INFO: stdout: "\naffinity-nodeport-timeout-k4qh6\naffinity-nodeport-timeout-k4qh6\naffinity-nodeport-timeout-k4qh6\naffinity-nodeport-timeout-k4qh6\naffinity-nodeport-timeout-k4qh6\naffinity-nodeport-timeout-k4qh6\naffinity-nodeport-timeout-k4qh6\naffinity-nodeport-timeout-k4qh6\naffinity-nodeport-timeout-k4qh6\naffinity-nodeport-timeout-k4qh6\naffinity-nodeport-timeout-k4qh6\naffinity-nodeport-timeout-k4qh6\naffinity-nodeport-timeout-k4qh6\naffinity-nodeport-timeout-k4qh6\naffinity-nodeport-timeout-k4qh6\naffinity-nodeport-timeout-k4qh6" Jan 31 00:36:56.124: INFO: Received response from host: affinity-nodeport-timeout-k4qh6 Jan 31 00:36:56.124: INFO: Received response from host: affinity-nodeport-timeout-k4qh6 Jan 31 00:36:56.124: INFO: Received response from host: affinity-nodeport-timeout-k4qh6 Jan 31 00:36:56.124: INFO: Received response from host: affinity-nodeport-timeout-k4qh6 Jan 31 00:36:56.124: INFO: Received response from host: affinity-nodeport-timeout-k4qh6 Jan 31 00:36:56.124: INFO: Received response from host: affinity-nodeport-timeout-k4qh6 Jan 31 00:36:56.124: INFO: Received response from host: 
affinity-nodeport-timeout-k4qh6 Jan 31 00:36:56.124: INFO: Received response from host: affinity-nodeport-timeout-k4qh6 Jan 31 00:36:56.124: INFO: Received response from host: affinity-nodeport-timeout-k4qh6 Jan 31 00:36:56.124: INFO: Received response from host: affinity-nodeport-timeout-k4qh6 Jan 31 00:36:56.124: INFO: Received response from host: affinity-nodeport-timeout-k4qh6 Jan 31 00:36:56.124: INFO: Received response from host: affinity-nodeport-timeout-k4qh6 Jan 31 00:36:56.124: INFO: Received response from host: affinity-nodeport-timeout-k4qh6 Jan 31 00:36:56.124: INFO: Received response from host: affinity-nodeport-timeout-k4qh6 Jan 31 00:36:56.124: INFO: Received response from host: affinity-nodeport-timeout-k4qh6 Jan 31 00:36:56.124: INFO: Received response from host: affinity-nodeport-timeout-k4qh6 Jan 31 00:36:56.124: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-5225 exec execpod-affinity56h6v -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.14:31203/' Jan 31 00:36:56.338: INFO: stderr: "I0131 00:36:56.259052 626 log.go:181] (0xc00003b130) (0xc000b223c0) Create stream\nI0131 00:36:56.259109 626 log.go:181] (0xc00003b130) (0xc000b223c0) Stream added, broadcasting: 1\nI0131 00:36:56.261641 626 log.go:181] (0xc00003b130) Reply frame received for 1\nI0131 00:36:56.261714 626 log.go:181] (0xc00003b130) (0xc0006a6320) Create stream\nI0131 00:36:56.261734 626 log.go:181] (0xc00003b130) (0xc0006a6320) Stream added, broadcasting: 3\nI0131 00:36:56.262426 626 log.go:181] (0xc00003b130) Reply frame received for 3\nI0131 00:36:56.262457 626 log.go:181] (0xc00003b130) (0xc00034efa0) Create stream\nI0131 00:36:56.262468 626 log.go:181] (0xc00003b130) (0xc00034efa0) Stream added, broadcasting: 5\nI0131 00:36:56.263121 626 log.go:181] (0xc00003b130) Reply frame received for 5\nI0131 00:36:56.326254 626 log.go:181] (0xc00003b130) Data frame received for 5\nI0131 00:36:56.326276 626 log.go:181] (0xc00034efa0) (5) Data frame handling\nI0131 00:36:56.326286 626 log.go:181] (0xc00034efa0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31203/\nI0131 00:36:56.330356 626 log.go:181] (0xc00003b130) Data frame received for 3\nI0131 00:36:56.330371 626 log.go:181] (0xc0006a6320) (3) Data frame handling\nI0131 00:36:56.330381 626 log.go:181] (0xc0006a6320) (3) Data frame sent\nI0131 00:36:56.331079 626 log.go:181] (0xc00003b130) Data frame received for 3\nI0131 00:36:56.331102 626 log.go:181] (0xc0006a6320) (3) Data frame handling\nI0131 00:36:56.331176 626 log.go:181] (0xc00003b130) Data frame received for 5\nI0131 00:36:56.331191 626 log.go:181] (0xc00034efa0) (5) Data frame handling\nI0131 00:36:56.332943 626 log.go:181] (0xc00003b130) Data frame received for 1\nI0131 00:36:56.332961 626 log.go:181] (0xc000b223c0) (1) Data frame handling\nI0131 00:36:56.332986 626 log.go:181] (0xc000b223c0) (1) Data frame sent\nI0131 00:36:56.333168 626 log.go:181] (0xc00003b130) (0xc000b223c0) Stream removed, broadcasting: 1\nI0131 00:36:56.333427 626 log.go:181] (0xc00003b130) Go away received\nI0131 00:36:56.333478 626 log.go:181] (0xc00003b130) (0xc000b223c0) Stream removed, broadcasting: 1\nI0131 00:36:56.333494 626 log.go:181] (0xc00003b130) (0xc0006a6320) Stream removed, broadcasting: 3\nI0131 00:36:56.333501 626 log.go:181] (0xc00003b130) (0xc00034efa0) Stream removed, broadcasting: 5\n" Jan 31 00:36:56.338: INFO: stdout: "affinity-nodeport-timeout-k4qh6" Jan 31 00:37:16.338: INFO: 
Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-5225 exec execpod-affinity56h6v -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.14:31203/' Jan 31 00:37:16.596: INFO: stderr: "I0131 00:37:16.489718 645 log.go:181] (0xc00064e000) (0xc000540000) Create stream\nI0131 00:37:16.489788 645 log.go:181] (0xc00064e000) (0xc000540000) Stream added, broadcasting: 1\nI0131 00:37:16.491615 645 log.go:181] (0xc00064e000) Reply frame received for 1\nI0131 00:37:16.491661 645 log.go:181] (0xc00064e000) (0xc000502f00) Create stream\nI0131 00:37:16.491674 645 log.go:181] (0xc00064e000) (0xc000502f00) Stream added, broadcasting: 3\nI0131 00:37:16.492721 645 log.go:181] (0xc00064e000) Reply frame received for 3\nI0131 00:37:16.492770 645 log.go:181] (0xc00064e000) (0xc00020fea0) Create stream\nI0131 00:37:16.492784 645 log.go:181] (0xc00064e000) (0xc00020fea0) Stream added, broadcasting: 5\nI0131 00:37:16.493862 645 log.go:181] (0xc00064e000) Reply frame received for 5\nI0131 00:37:16.581326 645 log.go:181] (0xc00064e000) Data frame received for 5\nI0131 00:37:16.581367 645 log.go:181] (0xc00020fea0) (5) Data frame handling\nI0131 00:37:16.581394 645 log.go:181] (0xc00020fea0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31203/\nI0131 00:37:16.587136 645 log.go:181] (0xc00064e000) Data frame received for 3\nI0131 00:37:16.587156 645 log.go:181] (0xc000502f00) (3) Data frame handling\nI0131 00:37:16.587171 645 log.go:181] (0xc000502f00) (3) Data frame sent\nI0131 00:37:16.588056 645 log.go:181] (0xc00064e000) Data frame received for 5\nI0131 00:37:16.588091 645 log.go:181] (0xc00020fea0) (5) Data frame handling\nI0131 00:37:16.588165 645 log.go:181] (0xc00064e000) Data frame received for 3\nI0131 00:37:16.588201 645 log.go:181] (0xc000502f00) (3) Data frame handling\nI0131 00:37:16.589992 645 log.go:181] (0xc00064e000) Data frame received for 1\nI0131 00:37:16.590030 645 log.go:181] (0xc000540000) (1) Data frame handling\nI0131 00:37:16.590054 645 log.go:181] (0xc000540000) (1) Data frame sent\nI0131 00:37:16.590069 645 log.go:181] (0xc00064e000) (0xc000540000) Stream removed, broadcasting: 1\nI0131 00:37:16.590105 645 log.go:181] (0xc00064e000) Go away received\nI0131 00:37:16.590611 645 log.go:181] (0xc00064e000) (0xc000540000) Stream removed, broadcasting: 1\nI0131 00:37:16.590643 645 log.go:181] (0xc00064e000) (0xc000502f00) Stream removed, broadcasting: 3\nI0131 00:37:16.590656 645 log.go:181] (0xc00064e000) (0xc00020fea0) Stream removed, broadcasting: 5\n" Jan 31 00:37:16.596: INFO: stdout: "affinity-nodeport-timeout-xlc99" Jan 31 00:37:16.596: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-5225, will wait for the garbage collector to delete the pods Jan 31 00:37:16.742: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 6.744022ms Jan 31 00:37:17.442: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 700.22208ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:37:51.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5225" for this suite. 
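The sticky-then-expired behavior shown above (sixteen hits on affinity-nodeport-timeout-k4qh6, then affinity-nodeport-timeout-xlc99 after the 20-second pause) is driven by the Service's ClientIP session affinity and its timeout. A minimal sketch of a comparable Service, assuming an illustrative selector, target port, and a 10-second timeout; the suite's exact manifest is not shown in this log:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: affinity-nodeport-timeout
  namespace: services-5225
spec:
  type: NodePort
  selector:
    name: affinity-nodeport-timeout    # assumed label on the RC's pods
  sessionAffinity: ClientIP            # route a given client IP to one backend
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10               # assumed; affinity visibly expired within ~20s above
  ports:
  - port: 80
    targetPort: 9376                   # assumed serve-hostname port
EOF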
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:75.105 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":311,"completed":92,"skipped":1838,"failed":0} SSSSS ------------------------------ [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:37:51.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace [It] should support RuntimeClasses API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: getting /apis STEP: getting /apis/node.k8s.io STEP: getting /apis/node.k8s.io/v1 STEP: creating STEP: watching Jan 31 00:37:51.500: INFO: starting watch STEP: getting STEP: listing STEP: patching STEP: updating Jan 31 00:37:51.618: INFO: waiting for watch events with expected annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:37:51.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-3701" for this suite. •{"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":311,"completed":93,"skipped":1843,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:37:51.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test emptydir 0777 on node default medium Jan 31 00:37:51.817: INFO: Waiting up to 5m0s for pod "pod-feee5386-32fe-417c-a833-f4da1680cd28" in namespace "emptydir-9102" to be "Succeeded or Failed" Jan 31 00:37:51.819: INFO: Pod "pod-feee5386-32fe-417c-a833-f4da1680cd28": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1.713256ms Jan 31 00:37:53.822: INFO: Pod "pod-feee5386-32fe-417c-a833-f4da1680cd28": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005598684s Jan 31 00:37:55.852: INFO: Pod "pod-feee5386-32fe-417c-a833-f4da1680cd28": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034884995s Jan 31 00:37:57.856: INFO: Pod "pod-feee5386-32fe-417c-a833-f4da1680cd28": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.039143115s STEP: Saw pod success Jan 31 00:37:57.856: INFO: Pod "pod-feee5386-32fe-417c-a833-f4da1680cd28" satisfied condition "Succeeded or Failed" Jan 31 00:37:57.860: INFO: Trying to get logs from node latest-worker pod pod-feee5386-32fe-417c-a833-f4da1680cd28 container test-container: STEP: delete the pod Jan 31 00:37:57.931: INFO: Waiting for pod pod-feee5386-32fe-417c-a833-f4da1680cd28 to disappear Jan 31 00:37:57.937: INFO: Pod pod-feee5386-32fe-417c-a833-f4da1680cd28 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:37:57.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9102" for this suite. • [SLOW TEST:6.255 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":94,"skipped":1860,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:37:57.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:38:58.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2705" for this suite. 
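A readiness probe that always fails keeps the pod NotReady (and out of Service endpoints) for the full 60 seconds observed above, but it never triggers a container restart; only a liveness probe does that. A minimal sketch of such a pod, with an assumed image and probe timings:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-always-fails    # illustrative name
spec:
  containers:
  - name: probe-test
    image: k8s.gcr.io/e2e-test-images/busybox:1.29   # assumed image
    command: ["/bin/sh", "-c", "sleep 600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]   # always fails: Ready stays False, restartCount stays 0
      initialDelaySeconds: 5
      periodSeconds: 5
EOF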
• [SLOW TEST:60.080 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":311,"completed":95,"skipped":1923,"failed":0} S ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:38:58.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:39:27.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5850" for this suite. 
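The three containers above exercise the three restartPolicy values — terminate-cmd-rpa (Always), terminate-cmd-rpof (OnFailure), terminate-cmd-rpn (Never) — checking that RestartCount, Phase, Ready, and State follow the container's exit code under each policy. A minimal sketch of the Never variant, with an assumed image and command:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-rpn-sketch   # illustrative name
spec:
  restartPolicy: Never             # the rpa/rpof variants use Always / OnFailure
  containers:
  - name: terminate-cmd-rpn
    image: k8s.gcr.io/e2e-test-images/busybox:1.29   # assumed image
    command: ["/bin/sh", "-c", "exit 0"]   # exit 0 => Phase Succeeded; nonzero => Failed
EOF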
• [SLOW TEST:29.977 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":311,"completed":96,"skipped":1924,"failed":0} SSSSSSSSSS ------------------------------ [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:39:28.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace [It] should support CSR API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: getting /apis STEP: getting /apis/certificates.k8s.io STEP: getting /apis/certificates.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Jan 31 00:39:28.622: INFO: starting watch STEP: patching STEP: updating Jan 31 00:39:28.671: INFO: waiting for watch events with expected annotations Jan 31 00:39:28.671: INFO: saw patched and updated annotations STEP: getting /approval STEP: patching /approval STEP: updating /approval STEP: getting /status STEP: patching /status STEP: updating /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:39:28.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-4153" for this suite. 
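The steps above walk the whole certificates.k8s.io/v1 surface: the CertificateSigningRequest resource itself plus its /approval and /status subresources. The same lifecycle can be driven with kubectl; the CSR name below is illustrative and assumes a request has already been submitted:

kubectl get csr my-csr -o yaml        # the 'getting' / 'listing' steps
kubectl certificate approve my-csr    # updates the /approval subresource
kubectl get csr my-csr -o jsonpath='{.status.conditions[*].type}'   # inspect /status
kubectl delete csr my-csr             # the 'deleting' step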
•{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":311,"completed":97,"skipped":1934,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:39:28.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Jan 31 00:39:28.993: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:39:33.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3460" for this suite. •{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":311,"completed":98,"skipped":1953,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:39:33.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Jan 31 00:39:33.126: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Jan 31 00:39:46.460: INFO: >>> kubeConfig: /root/.kube/config Jan 31 00:39:50.029: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:40:02.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4310" for this suite. 
• [SLOW TEST:29.295 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":311,"completed":99,"skipped":1960,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:40:02.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Jan 31 00:40:02.448: INFO: Waiting up to 1m0s for all nodes to be ready Jan 31 00:41:02.473: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:41:02.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:488 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. Jan 31 00:41:06.642: INFO: found a healthy node: latest-worker [It] runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Jan 31 00:41:22.894: INFO: pods created so far: [1 1 1] Jan 31 00:41:22.894: INFO: length of pods created so far: 3 Jan 31 00:41:56.903: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:42:03.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-4992" for this suite. 
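The ReplicaSet pods above preempt one another based on pod priority: the scheduler evicts lower-priority victims when a higher-priority pod cannot otherwise fit on the chosen node (latest-worker here). A minimal sketch of the kind of PriorityClass involved; the name and value are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority-sketch
value: 1000000        # higher values win; lower-priority pods get preempted
globalDefault: false
description: "Illustrative class; pods opt in via spec.priorityClassName."
EOF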
[AfterEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:462 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:42:03.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-3524" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:121.802 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:451 runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":311,"completed":100,"skipped":1973,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:42:04.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating secret with name secret-test-map-0c17a77e-d741-4815-a8dd-70130c9aac0b STEP: Creating a pod to test consume secrets Jan 31 00:42:04.286: INFO: Waiting up to 5m0s for pod "pod-secrets-7f7a7590-8df9-4400-ae6d-bb1392f383a7" in namespace "secrets-8529" to be "Succeeded or Failed" Jan 31 00:42:04.289: INFO: Pod "pod-secrets-7f7a7590-8df9-4400-ae6d-bb1392f383a7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.510872ms Jan 31 00:42:06.295: INFO: Pod "pod-secrets-7f7a7590-8df9-4400-ae6d-bb1392f383a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009037373s Jan 31 00:42:08.300: INFO: Pod "pod-secrets-7f7a7590-8df9-4400-ae6d-bb1392f383a7": Phase="Running", Reason="", readiness=true. Elapsed: 4.013900726s Jan 31 00:42:10.414: INFO: Pod "pod-secrets-7f7a7590-8df9-4400-ae6d-bb1392f383a7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.128383218s STEP: Saw pod success Jan 31 00:42:10.414: INFO: Pod "pod-secrets-7f7a7590-8df9-4400-ae6d-bb1392f383a7" satisfied condition "Succeeded or Failed" Jan 31 00:42:10.418: INFO: Trying to get logs from node latest-worker pod pod-secrets-7f7a7590-8df9-4400-ae6d-bb1392f383a7 container secret-volume-test: STEP: delete the pod Jan 31 00:42:10.470: INFO: Waiting for pod pod-secrets-7f7a7590-8df9-4400-ae6d-bb1392f383a7 to disappear Jan 31 00:42:10.475: INFO: Pod pod-secrets-7f7a7590-8df9-4400-ae6d-bb1392f383a7 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:42:10.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8529" for this suite. • [SLOW TEST:6.340 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":101,"skipped":1981,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:42:10.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test downward API volume plugin Jan 31 00:42:10.556: INFO: Waiting up to 5m0s for pod "downwardapi-volume-98e74583-201f-475f-901a-fe9da91d0d63" in namespace "downward-api-102" to be "Succeeded or Failed" Jan 31 00:42:10.583: INFO: Pod "downwardapi-volume-98e74583-201f-475f-901a-fe9da91d0d63": Phase="Pending", Reason="", readiness=false. Elapsed: 26.304322ms Jan 31 00:42:12.596: INFO: Pod "downwardapi-volume-98e74583-201f-475f-901a-fe9da91d0d63": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039778739s Jan 31 00:42:14.601: INFO: Pod "downwardapi-volume-98e74583-201f-475f-901a-fe9da91d0d63": Phase="Running", Reason="", readiness=true. Elapsed: 4.044385311s Jan 31 00:42:16.605: INFO: Pod "downwardapi-volume-98e74583-201f-475f-901a-fe9da91d0d63": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.048835326s STEP: Saw pod success Jan 31 00:42:16.605: INFO: Pod "downwardapi-volume-98e74583-201f-475f-901a-fe9da91d0d63" satisfied condition "Succeeded or Failed" Jan 31 00:42:16.608: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-98e74583-201f-475f-901a-fe9da91d0d63 container client-container: STEP: delete the pod Jan 31 00:42:16.643: INFO: Waiting for pod downwardapi-volume-98e74583-201f-475f-901a-fe9da91d0d63 to disappear Jan 31 00:42:16.649: INFO: Pod downwardapi-volume-98e74583-201f-475f-901a-fe9da91d0d63 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:42:16.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-102" for this suite. • [SLOW TEST:6.175 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":311,"completed":102,"skipped":1995,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:42:16.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating configMap with name configmap-test-upd-faa9de4b-8b73-41b0-8880-bb147518e0f2 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:42:22.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6640" for this suite. 
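ConfigMaps carry text under data and arbitrary bytes under binaryData (base64 in the manifest, raw bytes in the mounted file), which is what the "pod with text data" and "pod with binary data" waits above verify. A minimal sketch with an illustrative name and payload:

kubectl create configmap binary-sketch --from-literal=text=hello
kubectl patch configmap binary-sketch --type=merge \
  -p '{"binaryData":{"blob":"3q2+7w=="}}'   # base64 of the bytes 0xDE 0xAD 0xBE 0xEF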
• [SLOW TEST:6.169 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":311,"completed":103,"skipped":2033,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:42:22.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating pod test-webserver-16de8152-5363-46ad-b21a-4dc78f942c79 in namespace container-probe-3165 Jan 31 00:42:26.971: INFO: Started pod test-webserver-16de8152-5363-46ad-b21a-4dc78f942c79 in namespace container-probe-3165 STEP: checking the pod's current state and verifying that restartCount is present Jan 31 00:42:26.974: INFO: Initial restart count of pod test-webserver-16de8152-5363-46ad-b21a-4dc78f942c79 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:46:27.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3165" for this suite. 
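Here the liveness probe keeps succeeding for the full four-minute observation window, so restartCount stays at the initial 0. A minimal sketch of a pod with an HTTP liveness probe against a healthy endpoint; the image, path, and port are assumptions:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-sketch    # illustrative name
spec:
  containers:
  - name: test-webserver
    image: k8s.gcr.io/e2e-test-images/agnhost:2.21   # assumed image/tag
    args: ["test-webserver"]
    livenessProbe:
      httpGet:
        path: /            # assumed healthy path; a failing path would drive restarts
        port: 80
      initialDelaySeconds: 5
      failureThreshold: 3
EOF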
• [SLOW TEST:244.853 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":311,"completed":104,"skipped":2045,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:46:27.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating the pod Jan 31 00:46:32.702: INFO: Successfully updated pod "annotationupdate9863e3fe-125b-4a54-9362-0cb6fb4f4124" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:46:36.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5335" for this suite. 
• [SLOW TEST:9.063 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":311,"completed":105,"skipped":2051,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:46:36.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 31 00:46:37.189: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 31 00:46:39.702: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747650797, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747650797, loc:(*time.Location)(0x79bd420)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747650797, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747650797, loc:(*time.Location)(0x79bd420)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 31 00:46:41.707: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747650797, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747650797, loc:(*time.Location)(0x79bd420)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747650797, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747650797, 
loc:(*time.Location)(0x79bd420)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 31 00:46:44.742: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:46:44.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9392" for this suite. STEP: Destroying namespace "webhook-9392-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:8.263 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":311,"completed":106,"skipped":2062,"failed":0} SSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:46:45.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Jan 31 00:46:49.170: INFO: Waiting up to 5m0s for pod "client-envvars-37c2f333-b5df-4a55-8238-1f2f01cf9bdb" in namespace "pods-5310" to be "Succeeded or Failed" Jan 31 00:46:49.187: INFO: Pod "client-envvars-37c2f333-b5df-4a55-8238-1f2f01cf9bdb": Phase="Pending", Reason="", readiness=false. Elapsed: 17.453976ms Jan 31 00:46:51.399: INFO: Pod "client-envvars-37c2f333-b5df-4a55-8238-1f2f01cf9bdb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.229483696s Jan 31 00:46:53.419: INFO: Pod "client-envvars-37c2f333-b5df-4a55-8238-1f2f01cf9bdb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.249203559s STEP: Saw pod success Jan 31 00:46:53.419: INFO: Pod "client-envvars-37c2f333-b5df-4a55-8238-1f2f01cf9bdb" satisfied condition "Succeeded or Failed" Jan 31 00:46:53.422: INFO: Trying to get logs from node latest-worker pod client-envvars-37c2f333-b5df-4a55-8238-1f2f01cf9bdb container env3cont: STEP: delete the pod Jan 31 00:46:53.442: INFO: Waiting for pod client-envvars-37c2f333-b5df-4a55-8238-1f2f01cf9bdb to disappear Jan 31 00:46:53.445: INFO: Pod client-envvars-37c2f333-b5df-4a55-8238-1f2f01cf9bdb no longer exists [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:46:53.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5310" for this suite. • [SLOW TEST:8.443 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":311,"completed":107,"skipped":2065,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:46:53.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:46:54.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-8925" for this suite. 
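Note: the Events test above is a plain CRUD round-trip against the core/v1 events resource. A hedged sketch of the same sequence with client-go (the event name, label, and involved object are illustrative):

```go
package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// eventRoundTrip creates an event, patches its message, lists by label, and
// deletes it, mirroring the create/patch/fetch/delete/list steps above.
func eventRoundTrip(ctx context.Context, c kubernetes.Interface, ns string) error {
	ev := &corev1.Event{
		ObjectMeta:     metav1.ObjectMeta{Name: "test-event-1", Labels: map[string]string{"testevent-set": "true"}},
		InvolvedObject: corev1.ObjectReference{Kind: "Pod", Namespace: ns, Name: "some-pod"},
		Reason:         "Created",
		Message:        "original message",
		Type:           corev1.EventTypeNormal,
	}
	if _, err := c.CoreV1().Events(ns).Create(ctx, ev, metav1.CreateOptions{}); err != nil {
		return err
	}
	patch := []byte(`{"message":"patched message"}`)
	if _, err := c.CoreV1().Events(ns).Patch(ctx, ev.Name, types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		return err
	}
	if _, err := c.CoreV1().Events(ns).List(ctx, metav1.ListOptions{LabelSelector: "testevent-set=true"}); err != nil {
		return err
	}
	return c.CoreV1().Events(ns).Delete(ctx, ev.Name, metav1.DeleteOptions{})
}
```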
•{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":311,"completed":108,"skipped":2082,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:46:54.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0131 00:47:08.549151 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Jan 31 00:48:10.568: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. Jan 31 00:48:10.568: INFO: Deleting pod "simpletest-rc-to-be-deleted-25xhp" in namespace "gc-1492" Jan 31 00:48:10.622: INFO: Deleting pod "simpletest-rc-to-be-deleted-2q9tn" in namespace "gc-1492" Jan 31 00:48:10.719: INFO: Deleting pod "simpletest-rc-to-be-deleted-95qvh" in namespace "gc-1492" Jan 31 00:48:11.343: INFO: Deleting pod "simpletest-rc-to-be-deleted-9fsbh" in namespace "gc-1492" Jan 31 00:48:11.475: INFO: Deleting pod "simpletest-rc-to-be-deleted-b84xj" in namespace "gc-1492" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:48:11.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1492" for this suite. 
• [SLOW TEST:77.583 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":311,"completed":109,"skipped":2123,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:48:11.641: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test emptydir 0777 on node default medium Jan 31 00:48:12.215: INFO: Waiting up to 5m0s for pod "pod-c87f5bd0-d993-4f08-b214-710acc1b8ac5" in namespace "emptydir-5683" to be "Succeeded or Failed" Jan 31 00:48:12.261: INFO: Pod "pod-c87f5bd0-d993-4f08-b214-710acc1b8ac5": Phase="Pending", Reason="", readiness=false. Elapsed: 46.115976ms Jan 31 00:48:14.266: INFO: Pod "pod-c87f5bd0-d993-4f08-b214-710acc1b8ac5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050977837s Jan 31 00:48:16.271: INFO: Pod "pod-c87f5bd0-d993-4f08-b214-710acc1b8ac5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056268833s Jan 31 00:48:18.277: INFO: Pod "pod-c87f5bd0-d993-4f08-b214-710acc1b8ac5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.061764018s STEP: Saw pod success Jan 31 00:48:18.277: INFO: Pod "pod-c87f5bd0-d993-4f08-b214-710acc1b8ac5" satisfied condition "Succeeded or Failed" Jan 31 00:48:18.280: INFO: Trying to get logs from node latest-worker pod pod-c87f5bd0-d993-4f08-b214-710acc1b8ac5 container test-container: STEP: delete the pod Jan 31 00:48:18.326: INFO: Waiting for pod pod-c87f5bd0-d993-4f08-b214-710acc1b8ac5 to disappear Jan 31 00:48:18.338: INFO: Pod pod-c87f5bd0-d993-4f08-b214-710acc1b8ac5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:48:18.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5683" for this suite. 
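Note: the (root,0777,default) case above writes a file as root with mode 0777 into an emptyDir on the default medium (node disk) and reads it back; this and the following (root,0666,default) case differ only in the file mode. The volume shape is simply:

```go
package example

import corev1 "k8s.io/api/core/v1"

// diskEmptyDir is scratch space on the node's disk; leaving the
// EmptyDirVolumeSource empty selects the default medium. The e2e cases mount
// it and verify that a file created with a given mode keeps that mode.
func diskEmptyDir(name string) corev1.Volume {
	return corev1.Volume{
		Name:         name,
		VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
	}
}
```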
• [SLOW TEST:6.706 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":110,"skipped":2128,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:48:18.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 31 00:48:18.908: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Jan 31 00:48:21.166: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747650898, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747650898, loc:(*time.Location)(0x79bd420)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747650899, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747650898, loc:(*time.Location)(0x79bd420)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 31 00:48:24.284: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching 
the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:48:24.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9633" for this suite. STEP: Destroying namespace "webhook-9633-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:6.094 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":311,"completed":111,"skipped":2139,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:48:24.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating secret with name secret-test-7e5eed0c-0971-4979-8b50-2555ed716758 STEP: Creating a pod to test consume secrets Jan 31 00:48:24.533: INFO: Waiting up to 5m0s for pod "pod-secrets-eb748d82-aad3-4161-bf7b-8a13042cb3fe" in namespace "secrets-1688" to be "Succeeded or Failed" Jan 31 00:48:24.551: INFO: Pod "pod-secrets-eb748d82-aad3-4161-bf7b-8a13042cb3fe": Phase="Pending", Reason="", readiness=false. Elapsed: 17.764267ms Jan 31 00:48:26.556: INFO: Pod "pod-secrets-eb748d82-aad3-4161-bf7b-8a13042cb3fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023173294s Jan 31 00:48:28.560: INFO: Pod "pod-secrets-eb748d82-aad3-4161-bf7b-8a13042cb3fe": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.027502801s STEP: Saw pod success Jan 31 00:48:28.560: INFO: Pod "pod-secrets-eb748d82-aad3-4161-bf7b-8a13042cb3fe" satisfied condition "Succeeded or Failed" Jan 31 00:48:28.563: INFO: Trying to get logs from node latest-worker pod pod-secrets-eb748d82-aad3-4161-bf7b-8a13042cb3fe container secret-volume-test: STEP: delete the pod Jan 31 00:48:28.716: INFO: Waiting for pod pod-secrets-eb748d82-aad3-4161-bf7b-8a13042cb3fe to disappear Jan 31 00:48:28.728: INFO: Pod pod-secrets-eb748d82-aad3-4161-bf7b-8a13042cb3fe no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:48:28.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1688" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":112,"skipped":2149,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:48:28.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0131 00:48:29.987447 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Jan 31 00:49:32.043: INFO: MetricsGrabber failed to grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:49:32.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2692" for this suite. 
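Note: the orphaning behavior above is driven entirely by the delete options. With `Orphan`, the garbage collector strips the ReplicaSet's ownerReference instead of cascading, which is exactly what the "mistakenly deletes the rs" wait checks. A minimal sketch (assuming an initialized clientset):

```go
package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteOrphaning removes the Deployment but tells the garbage collector to
// orphan its dependents, so the ReplicaSet it created survives the delete.
func deleteOrphaning(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	orphan := metav1.DeletePropagationOrphan
	return c.AppsV1().Deployments(ns).Delete(ctx, name, metav1.DeleteOptions{PropagationPolicy: &orphan})
}
```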
• [SLOW TEST:63.313 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":311,"completed":113,"skipped":2155,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:49:32.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test emptydir 0666 on node default medium Jan 31 00:49:32.217: INFO: Waiting up to 5m0s for pod "pod-475d08af-061b-433c-ac54-ab2ec10d326e" in namespace "emptydir-3191" to be "Succeeded or Failed" Jan 31 00:49:32.219: INFO: Pod "pod-475d08af-061b-433c-ac54-ab2ec10d326e": Phase="Pending", Reason="", readiness=false. Elapsed: 1.939304ms Jan 31 00:49:34.386: INFO: Pod "pod-475d08af-061b-433c-ac54-ab2ec10d326e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.168242992s Jan 31 00:49:36.390: INFO: Pod "pod-475d08af-061b-433c-ac54-ab2ec10d326e": Phase="Running", Reason="", readiness=true. Elapsed: 4.172461645s Jan 31 00:49:38.395: INFO: Pod "pod-475d08af-061b-433c-ac54-ab2ec10d326e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.177479258s STEP: Saw pod success Jan 31 00:49:38.395: INFO: Pod "pod-475d08af-061b-433c-ac54-ab2ec10d326e" satisfied condition "Succeeded or Failed" Jan 31 00:49:38.398: INFO: Trying to get logs from node latest-worker pod pod-475d08af-061b-433c-ac54-ab2ec10d326e container test-container: STEP: delete the pod Jan 31 00:49:38.548: INFO: Waiting for pod pod-475d08af-061b-433c-ac54-ab2ec10d326e to disappear Jan 31 00:49:38.561: INFO: Pod pod-475d08af-061b-433c-ac54-ab2ec10d326e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:49:38.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3191" for this suite. 
• [SLOW TEST:6.516 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":114,"skipped":2158,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:49:38.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating secret with name s-test-opt-del-07c24cea-38f2-4fd4-be5b-8617cec7f535 STEP: Creating secret with name s-test-opt-upd-b46dcd92-e9e6-4f6e-9d89-6e0340691e67 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-07c24cea-38f2-4fd4-be5b-8617cec7f535 STEP: Updating secret s-test-opt-upd-b46dcd92-e9e6-4f6e-9d89-6e0340691e67 STEP: Creating secret with name s-test-opt-create-0c06bbe8-e772-4bfe-8570-af08f4f50716 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:49:47.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-321" for this suite. 
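Note: the "optional updates" test above depends on secret volumes marked optional: the pod starts even while a referenced Secret is missing, and the kubelet syncs later creates, updates, and deletes into the mounted files, which is what the "waiting to observe update in volume" step polls for. A sketch of such a volume:

```go
package example

import corev1 "k8s.io/api/core/v1"

// optionalSecretVolume tolerates a missing Secret: the pod still starts, and
// if the Secret is created or changed later the kubelet populates or
// refreshes the mount accordingly.
func optionalSecretVolume(name, secretName string) corev1.Volume {
	optional := true
	return corev1.Volume{
		Name: name,
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: secretName, Optional: &optional},
		},
	}
}
```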
• [SLOW TEST:8.475 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":311,"completed":115,"skipped":2197,"failed":0} ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:49:47.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Jan 31 00:49:54.599: INFO: 10 pods remaining Jan 31 00:49:54.599: INFO: 10 pods have nil DeletionTimestamp Jan 31 00:49:54.599: INFO: Jan 31 00:49:55.961: INFO: 10 pods remaining Jan 31 00:49:55.961: INFO: 10 pods have nil DeletionTimestamp Jan 31 00:49:55.961: INFO: Jan 31 00:49:56.997: INFO: 0 pods remaining Jan 31 00:49:56.997: INFO: 0 pods have nil DeletionTimestamp Jan 31 00:49:56.997: INFO: Jan 31 00:49:57.794: INFO: 0 pods remaining Jan 31 00:49:57.794: INFO: 0 pods have nil DeletionTimestamp Jan 31 00:49:57.794: INFO: STEP: Gathering metrics W0131 00:49:59.036383 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Jan 31 00:51:01.149: INFO: MetricsGrabber failed to grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:51:01.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8708" for this suite. 
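Note: "if the deleteOptions says so" above means foreground cascading. A minimal sketch of the delete call (assuming an initialized clientset):

```go
package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteForeground asks for foreground cascading: the RC gets a
// deletionTimestamp and a foregroundDeletion finalizer, and is only removed
// after all of its pods are gone, matching the "pods remaining" countdown.
func deleteForeground(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	fg := metav1.DeletePropagationForeground
	return c.CoreV1().ReplicationControllers(ns).Delete(ctx, name, metav1.DeleteOptions{PropagationPolicy: &fg})
}
```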
• [SLOW TEST:74.124 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":311,"completed":116,"skipped":2197,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:51:01.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test downward API volume plugin Jan 31 00:51:01.263: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9b5a13da-0797-4a7c-bea3-fc6bd995172a" in namespace "downward-api-9833" to be "Succeeded or Failed" Jan 31 00:51:01.308: INFO: Pod "downwardapi-volume-9b5a13da-0797-4a7c-bea3-fc6bd995172a": Phase="Pending", Reason="", readiness=false. Elapsed: 45.685331ms Jan 31 00:51:03.312: INFO: Pod "downwardapi-volume-9b5a13da-0797-4a7c-bea3-fc6bd995172a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049639835s Jan 31 00:51:05.318: INFO: Pod "downwardapi-volume-9b5a13da-0797-4a7c-bea3-fc6bd995172a": Phase="Running", Reason="", readiness=true. Elapsed: 4.055175182s Jan 31 00:51:07.323: INFO: Pod "downwardapi-volume-9b5a13da-0797-4a7c-bea3-fc6bd995172a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.060568441s STEP: Saw pod success Jan 31 00:51:07.323: INFO: Pod "downwardapi-volume-9b5a13da-0797-4a7c-bea3-fc6bd995172a" satisfied condition "Succeeded or Failed" Jan 31 00:51:07.327: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-9b5a13da-0797-4a7c-bea3-fc6bd995172a container client-container: STEP: delete the pod Jan 31 00:51:07.350: INFO: Waiting for pod downwardapi-volume-9b5a13da-0797-4a7c-bea3-fc6bd995172a to disappear Jan 31 00:51:07.367: INFO: Pod downwardapi-volume-9b5a13da-0797-4a7c-bea3-fc6bd995172a no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:51:07.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9833" for this suite. 
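Note: the memory-limit test above uses the downward API's resourceFieldRef rather than a fieldRef. A sketch of the volume item (the divisor value is illustrative):

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// memoryLimitFile exposes the named container's memory limit as a file in a
// downwardAPI volume; a divisor of "1Mi" makes the file report the limit in
// mebibytes instead of bytes.
func memoryLimitFile(containerName string) corev1.DownwardAPIVolumeFile {
	return corev1.DownwardAPIVolumeFile{
		Path: "memory_limit",
		ResourceFieldRef: &corev1.ResourceFieldSelector{
			ContainerName: containerName,
			Resource:      "limits.memory",
			Divisor:       resource.MustParse("1Mi"),
		},
	}
}
```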
• [SLOW TEST:6.210 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":311,"completed":117,"skipped":2205,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:51:07.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 31 00:51:08.195: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 31 00:51:10.252: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747651068, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747651068, loc:(*time.Location)(0x79bd420)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747651068, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747651068, loc:(*time.Location)(0x79bd420)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 31 00:51:12.257: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747651068, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747651068, loc:(*time.Location)(0x79bd420)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747651068, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747651068, loc:(*time.Location)(0x79bd420)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 31 00:51:15.303: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Jan 31 00:51:19.354: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=webhook-2678 attach --namespace=webhook-2678 to-be-attached-pod -i -c=container1' Jan 31 00:51:22.838: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:51:22.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2678" for this suite. STEP: Destroying namespace "webhook-2678-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:15.600 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":311,"completed":118,"skipped":2219,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Events should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:51:22.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Create set of events Jan 31 00:51:23.037: INFO: created test-event-1 Jan 31 00:51:23.043: INFO: created test-event-2 Jan 31 00:51:23.093: INFO: created test-event-3 STEP: get a list of Events with a label in the current namespace STEP: delete collection of events Jan 31 00:51:23.096: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity Jan 31 00:51:23.114: INFO: requesting list of events to confirm quantity [AfterEach] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:51:23.118: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "events-6196" for this suite. •{"msg":"PASSED [sig-api-machinery] Events should delete a collection of events [Conformance]","total":311,"completed":119,"skipped":2247,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:51:23.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test emptydir 0666 on tmpfs Jan 31 00:51:23.180: INFO: Waiting up to 5m0s for pod "pod-1d4d4bd1-2bf2-4199-99bc-772ef17c1ef1" in namespace "emptydir-3043" to be "Succeeded or Failed" Jan 31 00:51:23.237: INFO: Pod "pod-1d4d4bd1-2bf2-4199-99bc-772ef17c1ef1": Phase="Pending", Reason="", readiness=false. Elapsed: 57.232405ms Jan 31 00:51:25.242: INFO: Pod "pod-1d4d4bd1-2bf2-4199-99bc-772ef17c1ef1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062142489s Jan 31 00:51:27.247: INFO: Pod "pod-1d4d4bd1-2bf2-4199-99bc-772ef17c1ef1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.067038493s STEP: Saw pod success Jan 31 00:51:27.247: INFO: Pod "pod-1d4d4bd1-2bf2-4199-99bc-772ef17c1ef1" satisfied condition "Succeeded or Failed" Jan 31 00:51:27.250: INFO: Trying to get logs from node latest-worker pod pod-1d4d4bd1-2bf2-4199-99bc-772ef17c1ef1 container test-container: STEP: delete the pod Jan 31 00:51:27.286: INFO: Waiting for pod pod-1d4d4bd1-2bf2-4199-99bc-772ef17c1ef1 to disappear Jan 31 00:51:27.298: INFO: Pod pod-1d4d4bd1-2bf2-4199-99bc-772ef17c1ef1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:51:27.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3043" for this suite. 
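Note: the (root,0666,tmpfs) case above differs from the earlier default-medium cases only in the backing medium. A sketch of the tmpfs variant:

```go
package example

import corev1 "k8s.io/api/core/v1"

// tmpfsEmptyDir is backed by RAM (tmpfs) instead of node disk; its contents
// count against the pod's memory and vanish when the pod is deleted.
func tmpfsEmptyDir(name string) corev1.Volume {
	return corev1.Volume{
		Name: name,
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
		},
	}
}
```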
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":120,"skipped":2262,"failed":0} ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:51:27.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:52:00.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-45" for this suite. STEP: Destroying namespace "nsdeletetest-1316" for this suite. Jan 31 00:52:00.730: INFO: Namespace nsdeletetest-1316 was already deleted STEP: Destroying namespace "nsdeletetest-9871" for this suite. 
• [SLOW TEST:33.427 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":311,"completed":121,"skipped":2262,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:52:00.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Jan 31 00:52:00.858: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:52:05.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6037" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":311,"completed":122,"skipped":2276,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:52:05.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2535.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2535.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 31 00:52:13.201: INFO: DNS probes using dns-2535/dns-test-511f32df-f890-4ad1-8df7-79fb277704a1 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:52:13.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2535" for this suite. • [SLOW TEST:8.252 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":311,"completed":123,"skipped":2288,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:52:13.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 31 00:52:14.420: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 31 00:52:16.470: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747651134, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747651134, loc:(*time.Location)(0x79bd420)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747651134, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747651134, loc:(*time.Location)(0x79bd420)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 31 00:52:18.476: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747651134, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747651134, loc:(*time.Location)(0x79bd420)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747651134, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747651134, loc:(*time.Location)(0x79bd420)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 31 00:52:21.501: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:52:21.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4630" for this suite. STEP: Destroying namespace "webhook-4630-markers" for this suite. 
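Note: the patch step above ("rules to include the create operation") is an edit to a cluster-scoped MutatingWebhookConfiguration. A hedged sketch of one way to express it as a JSON patch (the configuration name and patch path are illustrative):

```go
package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// patchWebhookRules rewrites the first webhook's first rule to match only
// UPDATE; after this patch, CREATE requests bypass the hook, and patching
// the list back to include CREATE re-enables mutation on create.
func patchWebhookRules(ctx context.Context, c kubernetes.Interface, name string) error {
	patch := []byte(`[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["UPDATE"]}]`)
	_, err := c.AdmissionregistrationV1().MutatingWebhookConfigurations().
		Patch(ctx, name, types.JSONPatchType, patch, metav1.PatchOptions{})
	return err
}
```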
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:8.585 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":311,"completed":124,"skipped":2313,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:52:21.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating projection with secret that has name projected-secret-test-map-a2400f24-58b0-4803-b0de-7846fd7efede STEP: Creating a pod to test consume secrets Jan 31 00:52:21.991: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7a2e0d1b-a7b4-4c14-8430-d3cc600cc5e3" in namespace "projected-6690" to be "Succeeded or Failed" Jan 31 00:52:22.011: INFO: Pod "pod-projected-secrets-7a2e0d1b-a7b4-4c14-8430-d3cc600cc5e3": Phase="Pending", Reason="", readiness=false. Elapsed: 19.72242ms Jan 31 00:52:24.014: INFO: Pod "pod-projected-secrets-7a2e0d1b-a7b4-4c14-8430-d3cc600cc5e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023407205s Jan 31 00:52:26.018: INFO: Pod "pod-projected-secrets-7a2e0d1b-a7b4-4c14-8430-d3cc600cc5e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027364168s STEP: Saw pod success Jan 31 00:52:26.018: INFO: Pod "pod-projected-secrets-7a2e0d1b-a7b4-4c14-8430-d3cc600cc5e3" satisfied condition "Succeeded or Failed" Jan 31 00:52:26.021: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-7a2e0d1b-a7b4-4c14-8430-d3cc600cc5e3 container projected-secret-volume-test: STEP: delete the pod Jan 31 00:52:26.049: INFO: Waiting for pod pod-projected-secrets-7a2e0d1b-a7b4-4c14-8430-d3cc600cc5e3 to disappear Jan 31 00:52:26.054: INFO: Pod pod-projected-secrets-7a2e0d1b-a7b4-4c14-8430-d3cc600cc5e3 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:52:26.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6690" for this suite. 
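
The projected-secret test above boils down to mapping one secret key to a new path with an explicit per-item file mode. A minimal sketch of the same shape, with hypothetical names and a stock busybox image in place of the suite's own test image:

    kubectl create secret generic mysecret --from-literal=data-1=value-1   # hypothetical
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-secret-demo           # hypothetical
    spec:
      restartPolicy: Never
      containers:
      - name: projected-secret-volume-test
        image: busybox                      # assumption; the suite uses its own image
        command: ["sh", "-c", "ls -ln /etc/projected && cat /etc/projected/new-path-data-1"]
        volumeMounts:
        - name: secret-vol
          mountPath: /etc/projected
      volumes:
      - name: secret-vol
        projected:
          sources:
          - secret:
              name: mysecret
              items:
              - key: data-1
                path: new-path-data-1       # the key-to-path mapping under test
                mode: 0400                  # the per-item "Item Mode" under test
    EOF
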
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":125,"skipped":2360,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:52:26.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Creating a NodePort Service STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota STEP: Ensuring resource quota status captures service creation STEP: Deleting Services STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:52:37.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1115" for this suite. • [SLOW TEST:11.510 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":311,"completed":126,"skipped":2361,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:52:37.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating the pod Jan 31 00:52:42.223: INFO: Successfully updated pod "labelsupdatee66e5ad5-794a-4e88-9b57-6a35104ae6ae" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:52:46.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8041" for this suite. • [SLOW TEST:8.730 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":311,"completed":127,"skipped":2367,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:52:46.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:52:46.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4874" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":311,"completed":128,"skipped":2415,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:52:46.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 [It] should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Jan 31 00:52:46.658: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:52:46.660: INFO: Number of nodes with available pods: 0 Jan 31 00:52:46.660: INFO: Node latest-worker is running more than one daemon pod Jan 31 00:52:47.667: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:52:47.670: INFO: Number of nodes with available pods: 0 Jan 31 00:52:47.670: INFO: Node latest-worker is running more than one daemon pod Jan 31 00:52:48.789: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:52:48.846: INFO: Number of nodes with available pods: 0 Jan 31 00:52:48.846: INFO: Node latest-worker is running more than one daemon pod Jan 31 00:52:49.663: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:52:49.666: INFO: Number of nodes with available pods: 0 Jan 31 00:52:49.666: INFO: Node latest-worker is running more than one daemon pod Jan 31 00:52:50.665: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:52:50.692: INFO: Number of nodes with available pods: 1 Jan 31 00:52:50.692: INFO: Node latest-worker2 is running more than one daemon pod Jan 31 00:52:51.665: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:52:51.670: INFO: Number of nodes with available pods: 2 Jan 31 00:52:51.670: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Jan 31 00:52:51.956: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:52:51.960: INFO: Number of nodes with available pods: 1 Jan 31 00:52:51.960: INFO: Node latest-worker2 is running more than one daemon pod Jan 31 00:52:52.966: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:52:52.970: INFO: Number of nodes with available pods: 1 Jan 31 00:52:52.970: INFO: Node latest-worker2 is running more than one daemon pod Jan 31 00:52:53.965: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:52:53.968: INFO: Number of nodes with available pods: 1 Jan 31 00:52:53.968: INFO: Node latest-worker2 is running more than one daemon pod Jan 31 00:52:54.994: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:52:54.998: INFO: Number of nodes with available pods: 1 Jan 31 00:52:54.998: INFO: Node latest-worker2 is running more than one daemon pod Jan 31 00:52:55.967: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:52:55.970: INFO: Number of nodes with available pods: 1 Jan 31 00:52:55.970: INFO: Node latest-worker2 is running more than one daemon pod Jan 31 00:52:56.965: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:52:56.968: INFO: Number of nodes with available pods: 1 Jan 31 00:52:56.968: INFO: Node latest-worker2 is running more than one daemon pod Jan 31 00:52:57.964: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:52:57.967: INFO: Number of nodes with available pods: 1 Jan 31 00:52:57.967: INFO: Node latest-worker2 is running more than one daemon pod Jan 31 00:52:58.966: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:52:58.970: INFO: Number of nodes with available pods: 1 Jan 31 00:52:58.970: INFO: Node latest-worker2 is running more than one daemon pod Jan 31 00:52:59.964: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:52:59.967: INFO: Number of nodes with available pods: 1 Jan 31 00:52:59.967: INFO: Node latest-worker2 is running more than one daemon pod Jan 31 00:53:00.964: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:53:00.967: INFO: Number of nodes with available pods: 1 Jan 31 00:53:00.967: INFO: Node latest-worker2 is running more than one daemon pod Jan 31 00:53:01.967: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:53:01.971: INFO: Number of nodes with available pods: 1 Jan 31 00:53:01.971: INFO: Node latest-worker2 is running more than one daemon pod Jan 31 00:53:02.965: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:53:02.968: INFO: Number of nodes with available pods: 1 Jan 31 00:53:02.968: INFO: Node latest-worker2 is running more than one daemon pod Jan 31 00:53:03.967: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:53:03.971: INFO: Number of nodes with available pods: 1 Jan 31 00:53:03.971: INFO: Node latest-worker2 is running more than one daemon pod Jan 31 00:53:04.969: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 00:53:04.972: INFO: Number of nodes with available pods: 2 Jan 31 00:53:04.972: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2403, will wait for the garbage collector to delete the pods Jan 31 00:53:05.054: INFO: Deleting DaemonSet.extensions daemon-set took: 27.863972ms Jan 31 00:53:05.655: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.249128ms Jan 31 00:53:11.158: INFO: Number of nodes with available pods: 0 Jan 31 00:53:11.158: INFO: Number of running nodes: 0, number of available pods: 0 Jan 31 00:53:11.160: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"1121532"},"items":null} Jan 31 00:53:11.163: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1121532"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:53:11.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2403" for this suite. 
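
The daemon lifecycle exercised above (schedule one pod per schedulable node, delete one, watch the controller revive it) needs nothing beyond a plain DaemonSet. A minimal sketch; the image is an assumption, and, as in the log, there is no toleration for the node-role.kubernetes.io/master taint, so the control-plane node is skipped:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: daemon-set
    spec:
      selector:
        matchLabels:
          app: daemon-set
      template:
        metadata:
          labels:
            app: daemon-set
        spec:
          containers:
          - name: app
            image: k8s.gcr.io/pause:3.2   # assumption; any long-running image works
    EOF

    kubectl rollout status daemonset/daemon-set
    kubectl delete pod -l app=daemon-set --wait=false   # the controller recreates it
    kubectl get pods -l app=daemon-set -w               # watch the revival
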
• [SLOW TEST:24.757 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":311,"completed":129,"skipped":2530,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:53:11.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test downward API volume plugin Jan 31 00:53:11.337: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a0ff4169-7469-44bb-952f-a4540f3bd1c0" in namespace "projected-4486" to be "Succeeded or Failed" Jan 31 00:53:11.357: INFO: Pod "downwardapi-volume-a0ff4169-7469-44bb-952f-a4540f3bd1c0": Phase="Pending", Reason="", readiness=false. Elapsed: 19.926601ms Jan 31 00:53:13.362: INFO: Pod "downwardapi-volume-a0ff4169-7469-44bb-952f-a4540f3bd1c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025063149s Jan 31 00:53:15.367: INFO: Pod "downwardapi-volume-a0ff4169-7469-44bb-952f-a4540f3bd1c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029657851s STEP: Saw pod success Jan 31 00:53:15.367: INFO: Pod "downwardapi-volume-a0ff4169-7469-44bb-952f-a4540f3bd1c0" satisfied condition "Succeeded or Failed" Jan 31 00:53:15.370: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-a0ff4169-7469-44bb-952f-a4540f3bd1c0 container client-container: STEP: delete the pod Jan 31 00:53:15.479: INFO: Waiting for pod downwardapi-volume-a0ff4169-7469-44bb-952f-a4540f3bd1c0 to disappear Jan 31 00:53:15.531: INFO: Pod downwardapi-volume-a0ff4169-7469-44bb-952f-a4540f3bd1c0 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:53:15.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4486" for this suite. 
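
The projected downwardAPI test above exposes a container's own CPU limit as a file via resourceFieldRef. A minimal sketch, again with hypothetical names and a stock image; with a divisor of 1m, the file for a 500m limit reads 500:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-cpu-demo            # hypothetical
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox                      # assumption
        command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
        resources:
          limits:
            cpu: "500m"
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: cpu_limit
                resourceFieldRef:
                  containerName: client-container
                  resource: limits.cpu
                  divisor: 1m              # 500m / 1m = 500

    EOF
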
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":311,"completed":130,"skipped":2536,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:53:15.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: set up a multi version CRD Jan 31 00:53:15.726: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:53:33.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-380" for this suite. • [SLOW TEST:18.148 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":311,"completed":131,"skipped":2541,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:53:33.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test emptydir volume type on tmpfs Jan 31 00:53:33.822: INFO: Waiting up to 5m0s for pod "pod-f146caee-cb15-49b9-9215-a7c24198f61b" in namespace "emptydir-4512" to be "Succeeded or Failed" Jan 31 00:53:33.825: INFO: Pod "pod-f146caee-cb15-49b9-9215-a7c24198f61b": 
Phase="Pending", Reason="", readiness=false. Elapsed: 3.4454ms Jan 31 00:53:35.854: INFO: Pod "pod-f146caee-cb15-49b9-9215-a7c24198f61b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032529046s Jan 31 00:53:37.859: INFO: Pod "pod-f146caee-cb15-49b9-9215-a7c24198f61b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036789094s STEP: Saw pod success Jan 31 00:53:37.859: INFO: Pod "pod-f146caee-cb15-49b9-9215-a7c24198f61b" satisfied condition "Succeeded or Failed" Jan 31 00:53:37.862: INFO: Trying to get logs from node latest-worker pod pod-f146caee-cb15-49b9-9215-a7c24198f61b container test-container: STEP: delete the pod Jan 31 00:53:37.928: INFO: Waiting for pod pod-f146caee-cb15-49b9-9215-a7c24198f61b to disappear Jan 31 00:53:37.949: INFO: Pod pod-f146caee-cb15-49b9-9215-a7c24198f61b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:53:37.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4512" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":132,"skipped":2552,"failed":0} SS ------------------------------ [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:53:37.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating service in namespace services-2018 STEP: creating service affinity-clusterip in namespace services-2018 STEP: creating replication controller affinity-clusterip in namespace services-2018 I0131 00:53:38.106954 7 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-2018, replica count: 3 I0131 00:53:41.157353 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0131 00:53:44.157572 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 31 00:53:44.165: INFO: Creating new exec pod Jan 31 00:53:49.180: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-2018 exec execpod-affinitytstcp -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' Jan 31 00:53:49.392: INFO: stderr: "I0131 00:53:49.323590 681 log.go:181] (0xc000140370) (0xc000d98280) Create stream\nI0131 00:53:49.323669 681 log.go:181] (0xc000140370) (0xc000d98280) Stream added, broadcasting: 1\nI0131 00:53:49.325654 681 
log.go:181] (0xc000140370) Reply frame received for 1\nI0131 00:53:49.325702 681 log.go:181] (0xc000140370) (0xc000d983c0) Create stream\nI0131 00:53:49.325720 681 log.go:181] (0xc000140370) (0xc000d983c0) Stream added, broadcasting: 3\nI0131 00:53:49.326817 681 log.go:181] (0xc000140370) Reply frame received for 3\nI0131 00:53:49.326857 681 log.go:181] (0xc000140370) (0xc000227220) Create stream\nI0131 00:53:49.326869 681 log.go:181] (0xc000140370) (0xc000227220) Stream added, broadcasting: 5\nI0131 00:53:49.327790 681 log.go:181] (0xc000140370) Reply frame received for 5\nI0131 00:53:49.382842 681 log.go:181] (0xc000140370) Data frame received for 3\nI0131 00:53:49.382872 681 log.go:181] (0xc000d983c0) (3) Data frame handling\nI0131 00:53:49.382913 681 log.go:181] (0xc000140370) Data frame received for 5\nI0131 00:53:49.382953 681 log.go:181] (0xc000227220) (5) Data frame handling\nI0131 00:53:49.382981 681 log.go:181] (0xc000227220) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI0131 00:53:49.383249 681 log.go:181] (0xc000140370) Data frame received for 5\nI0131 00:53:49.383363 681 log.go:181] (0xc000227220) (5) Data frame handling\nI0131 00:53:49.385753 681 log.go:181] (0xc000140370) Data frame received for 1\nI0131 00:53:49.385773 681 log.go:181] (0xc000d98280) (1) Data frame handling\nI0131 00:53:49.385783 681 log.go:181] (0xc000d98280) (1) Data frame sent\nI0131 00:53:49.385796 681 log.go:181] (0xc000140370) (0xc000d98280) Stream removed, broadcasting: 1\nI0131 00:53:49.385834 681 log.go:181] (0xc000140370) Go away received\nI0131 00:53:49.386181 681 log.go:181] (0xc000140370) (0xc000d98280) Stream removed, broadcasting: 1\nI0131 00:53:49.386198 681 log.go:181] (0xc000140370) (0xc000d983c0) Stream removed, broadcasting: 3\nI0131 00:53:49.386207 681 log.go:181] (0xc000140370) (0xc000227220) Stream removed, broadcasting: 5\n" Jan 31 00:53:49.392: INFO: stdout: "" Jan 31 00:53:49.393: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-2018 exec execpod-affinitytstcp -- /bin/sh -x -c nc -zv -t -w 2 10.96.72.100 80' Jan 31 00:53:49.602: INFO: stderr: "I0131 00:53:49.518651 699 log.go:181] (0xc0009a8790) (0xc0009a0500) Create stream\nI0131 00:53:49.518719 699 log.go:181] (0xc0009a8790) (0xc0009a0500) Stream added, broadcasting: 1\nI0131 00:53:49.521367 699 log.go:181] (0xc0009a8790) Reply frame received for 1\nI0131 00:53:49.521419 699 log.go:181] (0xc0009a8790) (0xc000620500) Create stream\nI0131 00:53:49.521432 699 log.go:181] (0xc0009a8790) (0xc000620500) Stream added, broadcasting: 3\nI0131 00:53:49.522528 699 log.go:181] (0xc0009a8790) Reply frame received for 3\nI0131 00:53:49.522586 699 log.go:181] (0xc0009a8790) (0xc000620b40) Create stream\nI0131 00:53:49.522617 699 log.go:181] (0xc0009a8790) (0xc000620b40) Stream added, broadcasting: 5\nI0131 00:53:49.523716 699 log.go:181] (0xc0009a8790) Reply frame received for 5\nI0131 00:53:49.594585 699 log.go:181] (0xc0009a8790) Data frame received for 3\nI0131 00:53:49.594623 699 log.go:181] (0xc000620500) (3) Data frame handling\nI0131 00:53:49.594694 699 log.go:181] (0xc0009a8790) Data frame received for 5\nI0131 00:53:49.594777 699 log.go:181] (0xc000620b40) (5) Data frame handling\nI0131 00:53:49.594801 699 log.go:181] (0xc000620b40) (5) Data frame sent\nI0131 00:53:49.594810 699 log.go:181] (0xc0009a8790) Data frame received for 5\nI0131 00:53:49.594815 699 log.go:181] 
(0xc000620b40) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.72.100 80\nConnection to 10.96.72.100 80 port [tcp/http] succeeded!\nI0131 00:53:49.596263 699 log.go:181] (0xc0009a8790) Data frame received for 1\nI0131 00:53:49.596282 699 log.go:181] (0xc0009a0500) (1) Data frame handling\nI0131 00:53:49.596300 699 log.go:181] (0xc0009a0500) (1) Data frame sent\nI0131 00:53:49.596320 699 log.go:181] (0xc0009a8790) (0xc0009a0500) Stream removed, broadcasting: 1\nI0131 00:53:49.596450 699 log.go:181] (0xc0009a8790) Go away received\nI0131 00:53:49.596705 699 log.go:181] (0xc0009a8790) (0xc0009a0500) Stream removed, broadcasting: 1\nI0131 00:53:49.596721 699 log.go:181] (0xc0009a8790) (0xc000620500) Stream removed, broadcasting: 3\nI0131 00:53:49.596727 699 log.go:181] (0xc0009a8790) (0xc000620b40) Stream removed, broadcasting: 5\n" Jan 31 00:53:49.602: INFO: stdout: "" Jan 31 00:53:49.602: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-2018 exec execpod-affinitytstcp -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.72.100:80/ ; done' Jan 31 00:53:49.914: INFO: stderr: "I0131 00:53:49.742501 717 log.go:181] (0xc00003a420) (0xc000b21040) Create stream\nI0131 00:53:49.742559 717 log.go:181] (0xc00003a420) (0xc000b21040) Stream added, broadcasting: 1\nI0131 00:53:49.744729 717 log.go:181] (0xc00003a420) Reply frame received for 1\nI0131 00:53:49.744771 717 log.go:181] (0xc00003a420) (0xc00088cd20) Create stream\nI0131 00:53:49.744783 717 log.go:181] (0xc00003a420) (0xc00088cd20) Stream added, broadcasting: 3\nI0131 00:53:49.745684 717 log.go:181] (0xc00003a420) Reply frame received for 3\nI0131 00:53:49.745728 717 log.go:181] (0xc00003a420) (0xc00088dc20) Create stream\nI0131 00:53:49.745753 717 log.go:181] (0xc00003a420) (0xc00088dc20) Stream added, broadcasting: 5\nI0131 00:53:49.746494 717 log.go:181] (0xc00003a420) Reply frame received for 5\nI0131 00:53:49.810483 717 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 00:53:49.810517 717 log.go:181] (0xc00088cd20) (3) Data frame handling\nI0131 00:53:49.810526 717 log.go:181] (0xc00088cd20) (3) Data frame sent\nI0131 00:53:49.810545 717 log.go:181] (0xc00003a420) Data frame received for 5\nI0131 00:53:49.810551 717 log.go:181] (0xc00088dc20) (5) Data frame handling\nI0131 00:53:49.810556 717 log.go:181] (0xc00088dc20) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.72.100:80/\nI0131 00:53:49.813433 717 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 00:53:49.813451 717 log.go:181] (0xc00088cd20) (3) Data frame handling\nI0131 00:53:49.813464 717 log.go:181] (0xc00088cd20) (3) Data frame sent\nI0131 00:53:49.814051 717 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 00:53:49.814066 717 log.go:181] (0xc00088cd20) (3) Data frame handling\nI0131 00:53:49.814082 717 log.go:181] (0xc00088cd20) (3) Data frame sent\nI0131 00:53:49.814095 717 log.go:181] (0xc00003a420) Data frame received for 5\nI0131 00:53:49.814100 717 log.go:181] (0xc00088dc20) (5) Data frame handling\nI0131 00:53:49.814104 717 log.go:181] (0xc00088dc20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.72.100:80/\nI0131 00:53:49.818035 717 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 00:53:49.818052 717 log.go:181] (0xc00088cd20) (3) Data frame handling\nI0131 00:53:49.818060 717 log.go:181] (0xc00088cd20) (3) Data frame sent\nI0131 00:53:49.818542 
717 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 00:53:49.818568 717 log.go:181] (0xc00088cd20) (3) Data frame handling\nI0131 00:53:49.818578 717 log.go:181] (0xc00088cd20) (3) Data frame sent\nI0131 00:53:49.818588 717 log.go:181] (0xc00003a420) Data frame received for 5\nI0131 00:53:49.818593 717 log.go:181] (0xc00088dc20) (5) Data frame handling\nI0131 00:53:49.818599 717 log.go:181] (0xc00088dc20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.72.100:80/\nI0131 00:53:49.823013 717 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 00:53:49.823035 717 log.go:181] (0xc00088cd20) (3) Data frame handling\nI0131 00:53:49.823053 717 log.go:181] (0xc00088cd20) (3) Data frame sent\nI0131 00:53:49.823523 717 log.go:181] (0xc00003a420) Data frame received for 5\nI0131 00:53:49.823548 717 log.go:181] (0xc00088dc20) (5) Data frame handling\nI0131 00:53:49.823569 717 log.go:181] (0xc00088dc20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.72.100:80/\nI0131 00:53:49.823792 717 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 00:53:49.823813 717 log.go:181] (0xc00088cd20) (3) Data frame handling\nI0131 00:53:49.823833 717 log.go:181] (0xc00088cd20) (3) Data frame sent\nI0131 00:53:49.828628 717 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 00:53:49.828651 717 log.go:181] (0xc00088cd20) (3) Data frame handling\nI0131 00:53:49.828668 717 log.go:181] (0xc00088cd20) (3) Data frame sent\nI0131 00:53:49.829295 717 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 00:53:49.829316 717 log.go:181] (0xc00088cd20) (3) Data frame handling\nI0131 00:53:49.829324 717 log.go:181] (0xc00088cd20) (3) Data frame sent\nI0131 00:53:49.829337 717 log.go:181] (0xc00003a420) Data frame received for 5\nI0131 00:53:49.829343 717 log.go:181] (0xc00088dc20) (5) Data frame handling\nI0131 00:53:49.829350 717 log.go:181] (0xc00088dc20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.72.100:80/\nI0131 00:53:49.833982 717 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 00:53:49.833997 717 log.go:181] (0xc00088cd20) (3) Data frame handling\nI0131 00:53:49.834010 717 log.go:181] (0xc00088cd20) (3) Data frame sent\nI0131 00:53:49.834667 717 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 00:53:49.834688 717 log.go:181] (0xc00088cd20) (3) Data frame handling\nI0131 00:53:49.834703 717 log.go:181] (0xc00088cd20) (3) Data frame sent\nI0131 00:53:49.834722 717 log.go:181] (0xc00003a420) Data frame received for 5\nI0131 00:53:49.834730 717 log.go:181] (0xc00088dc20) (5) Data frame handling\nI0131 00:53:49.834737 717 log.go:181] (0xc00088dc20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.72.100:80/\nI0131 00:53:49.840126 717 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 00:53:49.840156 717 log.go:181] (0xc00088cd20) (3) Data frame handling\nI0131 00:53:49.840182 717 log.go:181] (0xc00088cd20) (3) Data frame sent\nI0131 00:53:49.840830 717 log.go:181] (0xc00003a420) Data frame received for 5\nI0131 00:53:49.840944 717 log.go:181] (0xc00088dc20) (5) Data frame handling\nI0131 00:53:49.840953 717 log.go:181] (0xc00088dc20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.72.100:80/\nI0131 00:53:49.840967 717 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 00:53:49.840974 717 log.go:181] (0xc00088cd20) (3) Data frame handling\nI0131 00:53:49.840983 717 log.go:181] (0xc00088cd20) (3) Data frame 
sent\nI0131 00:53:49.847964 717 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 00:53:49.847987 717 log.go:181] (0xc00088cd20) (3) Data frame handling\nI0131 00:53:49.848003 717 log.go:181] (0xc00088cd20) (3) Data frame sent\nI0131 00:53:49.848533 717 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 00:53:49.848572 717 log.go:181] (0xc00088cd20) (3) Data frame handling\nI0131 00:53:49.848593 717 log.go:181] (0xc00088cd20) (3) Data frame sent\nI0131 00:53:49.848619 717 log.go:181] (0xc00003a420) Data frame received for 5\nI0131 00:53:49.848634 717 log.go:181] (0xc00088dc20) (5) Data frame handling\nI0131 00:53:49.848657 717 log.go:181] (0xc00088dc20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.72.100:80/\nI0131 00:53:49.853280 717 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 00:53:49.853298 717 log.go:181] (0xc00088cd20) (3) Data frame handling\nI0131 00:53:49.853307 717 log.go:181] (0xc00088cd20) (3) Data frame sent\nI0131 00:53:49.854046 717 log.go:181] (0xc00003a420) Data frame received for 5\nI0131 00:53:49.854071 717 log.go:181] (0xc00088dc20) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.72.100:80/\nI0131 00:53:49.854092 717 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 00:53:49.854109 717 log.go:181] (0xc00088cd20) (3) Data frame handling\nI0131 00:53:49.854121 717 log.go:181] (0xc00088cd20) (3) Data frame sent\nI0131 00:53:49.854137 717 log.go:181] (0xc00088dc20) (5) Data frame sent\nI0131 00:53:49.857999 717 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 00:53:49.858026 717 log.go:181] (0xc00088cd20) (3) Data frame handling\nI0131 00:53:49.858042 717 log.go:181] (0xc00088cd20) (3) Data frame sent\nI0131 00:53:49.858940 717 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 00:53:49.858962 717 log.go:181] (0xc00088cd20) (3) Data frame handling\nI0131 00:53:49.858997 717 log.go:181] (0xc00088cd20) (3) Data frame sent\nI0131 00:53:49.859014 717 log.go:181] (0xc00003a420) Data frame received for 5\nI0131 00:53:49.859024 717 log.go:181] (0xc00088dc20) (5) Data frame handling\nI0131 00:53:49.859037 717 log.go:181] (0xc00088dc20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.72.100:80/\nI0131 00:53:49.862862 717 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 00:53:49.862877 717 log.go:181] (0xc00088cd20) (3) Data frame handling\nI0131 00:53:49.862886 717 log.go:181] (0xc00088cd20) (3) Data frame sent\nI0131 00:53:49.863715 717 log.go:181] (0xc00003a420) Data frame received for 5\nI0131 00:53:49.863733 717 log.go:181] (0xc00088dc20) (5) Data frame handling\nI0131 00:53:49.863747 717 log.go:181] (0xc00088dc20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.72.100:80/\nI0131 00:53:49.863773 717 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 00:53:49.863807 717 log.go:181] (0xc00088cd20) (3) Data frame handling\nI0131 00:53:49.863824 717 log.go:181] (0xc00088cd20) (3) Data frame sent\nI0131 00:53:49.869630 717 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 00:53:49.869645 717 log.go:181] (0xc00088cd20) (3) Data frame handling\nI0131 00:53:49.869654 717 log.go:181] (0xc00088cd20) (3) Data frame sent\nI0131 00:53:49.870248 717 log.go:181] (0xc00003a420) Data frame received for 5\nI0131 00:53:49.870260 717 log.go:181] (0xc00088dc20) (5) Data frame handling\nI0131 00:53:49.870269 717 log.go:181] (0xc00088dc20) (5) Data frame sent\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://10.96.72.100:80/\nI0131 00:53:49.870333 717 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 00:53:49.870342 717 log.go:181] (0xc00088cd20) (3) Data frame handling\nI0131 00:53:49.870350 717 log.go:181] (0xc00088cd20) (3) Data frame sent\nI0131 00:53:49.875826 717 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 00:53:49.875846 717 log.go:181] (0xc00088cd20) (3) Data frame handling\nI0131 00:53:49.875856 717 log.go:181] (0xc00088cd20) (3) Data frame sent\nI0131 00:53:49.876792 717 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 00:53:49.876818 717 log.go:181] (0xc00088cd20) (3) Data frame handling\nI0131 00:53:49.876938 717 log.go:181] (0xc00088cd20) (3) Data frame sent\nI0131 00:53:49.876965 717 log.go:181] (0xc00003a420) Data frame received for 5\nI0131 00:53:49.876984 717 log.go:181] (0xc00088dc20) (5) Data frame handling\nI0131 00:53:49.876999 717 log.go:181] (0xc00088dc20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.72.100:80/\nI0131 00:53:49.882663 717 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 00:53:49.882704 717 log.go:181] (0xc00088cd20) (3) Data frame handling\nI0131 00:53:49.882743 717 log.go:181] (0xc00088cd20) (3) Data frame sent\nI0131 00:53:49.883646 717 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 00:53:49.883669 717 log.go:181] (0xc00088cd20) (3) Data frame handling\nI0131 00:53:49.883680 717 log.go:181] (0xc00088cd20) (3) Data frame sent\nI0131 00:53:49.883697 717 log.go:181] (0xc00003a420) Data frame received for 5\nI0131 00:53:49.883706 717 log.go:181] (0xc00088dc20) (5) Data frame handling\nI0131 00:53:49.883715 717 log.go:181] (0xc00088dc20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.72.100:80/\nI0131 00:53:49.890091 717 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 00:53:49.890116 717 log.go:181] (0xc00088cd20) (3) Data frame handling\nI0131 00:53:49.890135 717 log.go:181] (0xc00088cd20) (3) Data frame sent\nI0131 00:53:49.891050 717 log.go:181] (0xc00003a420) Data frame received for 5\nI0131 00:53:49.891082 717 log.go:181] (0xc00088dc20) (5) Data frame handling\nI0131 00:53:49.891102 717 log.go:181] (0xc00088dc20) (5) Data frame sent\nI0131 00:53:49.891118 717 log.go:181] (0xc00003a420) Data frame received for 5\n+ echo\n+ curl -q -sI0131 00:53:49.891139 717 log.go:181] (0xc00088dc20) (5) Data frame handling\nI0131 00:53:49.891188 717 log.go:181] (0xc00088dc20) (5) Data frame sent\n --connect-timeout 2 http://10.96.72.100:80/\nI0131 00:53:49.891220 717 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 00:53:49.891240 717 log.go:181] (0xc00088cd20) (3) Data frame handling\nI0131 00:53:49.891260 717 log.go:181] (0xc00088cd20) (3) Data frame sent\nI0131 00:53:49.895238 717 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 00:53:49.895259 717 log.go:181] (0xc00088cd20) (3) Data frame handling\nI0131 00:53:49.895273 717 log.go:181] (0xc00088cd20) (3) Data frame sent\nI0131 00:53:49.896303 717 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 00:53:49.896348 717 log.go:181] (0xc00088cd20) (3) Data frame handling\nI0131 00:53:49.896368 717 log.go:181] (0xc00003a420) Data frame received for 5\nI0131 00:53:49.896407 717 log.go:181] (0xc00088dc20) (5) Data frame handling\nI0131 00:53:49.896429 717 log.go:181] (0xc00088dc20) (5) Data frame sent\nI0131 00:53:49.896451 717 log.go:181] (0xc00003a420) Data frame received for 5\nI0131 00:53:49.896470 717 log.go:181] (0xc00088dc20) (5) 
Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.72.100:80/\nI0131 00:53:49.896492 717 log.go:181] (0xc00088cd20) (3) Data frame sent\nI0131 00:53:49.896509 717 log.go:181] (0xc00088dc20) (5) Data frame sent\nI0131 00:53:49.903299 717 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 00:53:49.903310 717 log.go:181] (0xc00088cd20) (3) Data frame handling\nI0131 00:53:49.903316 717 log.go:181] (0xc00088cd20) (3) Data frame sent\nI0131 00:53:49.904182 717 log.go:181] (0xc00003a420) Data frame received for 5\nI0131 00:53:49.904208 717 log.go:181] (0xc00088dc20) (5) Data frame handling\nI0131 00:53:49.904230 717 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 00:53:49.904263 717 log.go:181] (0xc00088cd20) (3) Data frame handling\nI0131 00:53:49.906411 717 log.go:181] (0xc00003a420) Data frame received for 1\nI0131 00:53:49.906433 717 log.go:181] (0xc000b21040) (1) Data frame handling\nI0131 00:53:49.906444 717 log.go:181] (0xc000b21040) (1) Data frame sent\nI0131 00:53:49.906458 717 log.go:181] (0xc00003a420) (0xc000b21040) Stream removed, broadcasting: 1\nI0131 00:53:49.906472 717 log.go:181] (0xc00003a420) Go away received\nI0131 00:53:49.907203 717 log.go:181] (0xc00003a420) (0xc000b21040) Stream removed, broadcasting: 1\nI0131 00:53:49.907238 717 log.go:181] (0xc00003a420) (0xc00088cd20) Stream removed, broadcasting: 3\nI0131 00:53:49.907250 717 log.go:181] (0xc00003a420) (0xc00088dc20) Stream removed, broadcasting: 5\n" Jan 31 00:53:49.914: INFO: stdout: "\naffinity-clusterip-5dcm6\naffinity-clusterip-5dcm6\naffinity-clusterip-5dcm6\naffinity-clusterip-5dcm6\naffinity-clusterip-5dcm6\naffinity-clusterip-5dcm6\naffinity-clusterip-5dcm6\naffinity-clusterip-5dcm6\naffinity-clusterip-5dcm6\naffinity-clusterip-5dcm6\naffinity-clusterip-5dcm6\naffinity-clusterip-5dcm6\naffinity-clusterip-5dcm6\naffinity-clusterip-5dcm6\naffinity-clusterip-5dcm6\naffinity-clusterip-5dcm6" Jan 31 00:53:49.915: INFO: Received response from host: affinity-clusterip-5dcm6 Jan 31 00:53:49.915: INFO: Received response from host: affinity-clusterip-5dcm6 Jan 31 00:53:49.915: INFO: Received response from host: affinity-clusterip-5dcm6 Jan 31 00:53:49.915: INFO: Received response from host: affinity-clusterip-5dcm6 Jan 31 00:53:49.915: INFO: Received response from host: affinity-clusterip-5dcm6 Jan 31 00:53:49.915: INFO: Received response from host: affinity-clusterip-5dcm6 Jan 31 00:53:49.915: INFO: Received response from host: affinity-clusterip-5dcm6 Jan 31 00:53:49.915: INFO: Received response from host: affinity-clusterip-5dcm6 Jan 31 00:53:49.915: INFO: Received response from host: affinity-clusterip-5dcm6 Jan 31 00:53:49.915: INFO: Received response from host: affinity-clusterip-5dcm6 Jan 31 00:53:49.915: INFO: Received response from host: affinity-clusterip-5dcm6 Jan 31 00:53:49.915: INFO: Received response from host: affinity-clusterip-5dcm6 Jan 31 00:53:49.915: INFO: Received response from host: affinity-clusterip-5dcm6 Jan 31 00:53:49.915: INFO: Received response from host: affinity-clusterip-5dcm6 Jan 31 00:53:49.915: INFO: Received response from host: affinity-clusterip-5dcm6 Jan 31 00:53:49.915: INFO: Received response from host: affinity-clusterip-5dcm6 Jan 31 00:53:49.915: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-2018, will wait for the garbage collector to delete the pods Jan 31 00:53:49.991: INFO: Deleting ReplicationController affinity-clusterip took: 5.007094ms Jan 31 00:53:50.591: INFO: 
Terminating ReplicationController affinity-clusterip pods took: 600.252311ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:54:11.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2018" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:33.417 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":311,"completed":133,"skipped":2554,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:54:11.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath Jan 31 00:54:15.499: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-6102 PodName:var-expansion-0ff61b56-ed7c-4941-b235-06e44ddb2a41 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 31 00:54:15.499: INFO: >>> kubeConfig: /root/.kube/config I0131 00:54:15.538337 7 log.go:181] (0xc00752e580) (0xc0003e6320) Create stream I0131 00:54:15.538367 7 log.go:181] (0xc00752e580) (0xc0003e6320) Stream added, broadcasting: 1 I0131 00:54:15.540519 7 log.go:181] (0xc00752e580) Reply frame received for 1 I0131 00:54:15.540554 7 log.go:181] (0xc00752e580) (0xc0021ccdc0) Create stream I0131 00:54:15.540568 7 log.go:181] (0xc00752e580) (0xc0021ccdc0) Stream added, broadcasting: 3 I0131 00:54:15.541734 7 log.go:181] (0xc00752e580) Reply frame received for 3 I0131 00:54:15.541777 7 log.go:181] (0xc00752e580) (0xc0021cce60) Create stream I0131 00:54:15.541796 7 log.go:181] (0xc00752e580) (0xc0021cce60) Stream added, broadcasting: 5 I0131 00:54:15.542589 7 log.go:181] (0xc00752e580) Reply frame received for 5 I0131 00:54:15.604156 7 log.go:181] (0xc00752e580) Data frame received for 5 I0131 00:54:15.604211 7 log.go:181] (0xc0021cce60) (5) Data frame handling I0131 00:54:15.604279 7 log.go:181] (0xc00752e580) Data frame received for 3 I0131 00:54:15.604301 7 log.go:181] (0xc0021ccdc0) (3) Data frame handling I0131 00:54:15.605701 7 log.go:181] (0xc00752e580) Data frame received for 1 
I0131 00:54:15.605716 7 log.go:181] (0xc0003e6320) (1) Data frame handling I0131 00:54:15.605723 7 log.go:181] (0xc0003e6320) (1) Data frame sent I0131 00:54:15.605738 7 log.go:181] (0xc00752e580) (0xc0003e6320) Stream removed, broadcasting: 1 I0131 00:54:15.605802 7 log.go:181] (0xc00752e580) (0xc0003e6320) Stream removed, broadcasting: 1 I0131 00:54:15.605812 7 log.go:181] (0xc00752e580) (0xc0021ccdc0) Stream removed, broadcasting: 3 I0131 00:54:15.605819 7 log.go:181] (0xc00752e580) (0xc0021cce60) Stream removed, broadcasting: 5 STEP: test for file in mounted path I0131 00:54:15.605856 7 log.go:181] (0xc00752e580) Go away received Jan 31 00:54:15.608: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-6102 PodName:var-expansion-0ff61b56-ed7c-4941-b235-06e44ddb2a41 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 31 00:54:15.608: INFO: >>> kubeConfig: /root/.kube/config I0131 00:54:15.647384 7 log.go:181] (0xc00752ec60) (0xc0003e74a0) Create stream I0131 00:54:15.647411 7 log.go:181] (0xc00752ec60) (0xc0003e74a0) Stream added, broadcasting: 1 I0131 00:54:15.649250 7 log.go:181] (0xc00752ec60) Reply frame received for 1 I0131 00:54:15.649280 7 log.go:181] (0xc00752ec60) (0xc0010d4f00) Create stream I0131 00:54:15.649303 7 log.go:181] (0xc00752ec60) (0xc0010d4f00) Stream added, broadcasting: 3 I0131 00:54:15.650046 7 log.go:181] (0xc00752ec60) Reply frame received for 3 I0131 00:54:15.650071 7 log.go:181] (0xc00752ec60) (0xc0010d5040) Create stream I0131 00:54:15.650084 7 log.go:181] (0xc00752ec60) (0xc0010d5040) Stream added, broadcasting: 5 I0131 00:54:15.650729 7 log.go:181] (0xc00752ec60) Reply frame received for 5 I0131 00:54:15.723080 7 log.go:181] (0xc00752ec60) Data frame received for 3 I0131 00:54:15.723133 7 log.go:181] (0xc0010d4f00) (3) Data frame handling I0131 00:54:15.723187 7 log.go:181] (0xc00752ec60) Data frame received for 5 I0131 00:54:15.723206 7 log.go:181] (0xc0010d5040) (5) Data frame handling I0131 00:54:15.724608 7 log.go:181] (0xc00752ec60) Data frame received for 1 I0131 00:54:15.724645 7 log.go:181] (0xc0003e74a0) (1) Data frame handling I0131 00:54:15.724692 7 log.go:181] (0xc0003e74a0) (1) Data frame sent I0131 00:54:15.724723 7 log.go:181] (0xc00752ec60) (0xc0003e74a0) Stream removed, broadcasting: 1 I0131 00:54:15.724755 7 log.go:181] (0xc00752ec60) Go away received I0131 00:54:15.724831 7 log.go:181] (0xc00752ec60) (0xc0003e74a0) Stream removed, broadcasting: 1 I0131 00:54:15.724976 7 log.go:181] (0xc00752ec60) (0xc0010d4f00) Stream removed, broadcasting: 3 I0131 00:54:15.724992 7 log.go:181] (0xc00752ec60) (0xc0010d5040) Stream removed, broadcasting: 5 STEP: updating the annotation value Jan 31 00:54:16.238: INFO: Successfully updated pod "var-expansion-0ff61b56-ed7c-4941-b235-06e44ddb2a41" STEP: waiting for annotated pod running STEP: deleting the pod gracefully Jan 31 00:54:16.251: INFO: Deleting pod "var-expansion-0ff61b56-ed7c-4941-b235-06e44ddb2a41" in namespace "var-expansion-6102" Jan 31 00:54:16.256: INFO: Wait up to 5m0s for pod "var-expansion-0ff61b56-ed7c-4941-b235-06e44ddb2a41" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:55:12.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6102" for this suite. 
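
The variable-expansion test above hinges on subPathExpr: an environment variable fed from the downward API is expanded into the volume subpath at mount time, which is why the touched file shows up under the pod-named directory. A minimal sketch with hypothetical names; a second mount of the same volume makes the expanded subdirectory observable:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: var-expansion-demo              # hypothetical
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox                      # assumption
        command: ["sh", "-c", "touch /volume_mount/test.log && ls -R /subpath_mount"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        volumeMounts:
        - name: workdir
          mountPath: /volume_mount
          subPathExpr: $(POD_NAME)          # expands to var-expansion-demo
        - name: workdir
          mountPath: /subpath_mount         # whole volume, to see the expanded subdir
      volumes:
      - name: workdir
        emptyDir: {}
    EOF
    # ls shows /subpath_mount/var-expansion-demo/test.log
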
• [SLOW TEST:60.914 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":311,"completed":134,"skipped":2573,"failed":0} SS ------------------------------ [sig-network] IngressClass API should support creating IngressClass API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:55:12.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingressclass STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:148 [It] should support creating IngressClass API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Jan 31 00:55:12.429: INFO: starting watch STEP: patching STEP: updating Jan 31 00:55:12.438: INFO: waiting for watch events with expected annotations Jan 31 00:55:12.438: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:55:12.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingressclass-3624" for this suite.
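To reproduce the create/get/list/patch/delete sequence above outside the e2e framework, a minimal client-go sketch follows. The class name e2e-example and the controller string example.com/ingress-controller are illustrative; the kubeconfig path is the one from this run.
```go
package main

import (
	"context"
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	ics := kubernetes.NewForConfigOrDie(config).NetworkingV1().IngressClasses()
	ctx := context.TODO()

	// creating
	ic := &networkingv1.IngressClass{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-example"},
		Spec:       networkingv1.IngressClassSpec{Controller: "example.com/ingress-controller"},
	}
	if _, err := ics.Create(ctx, ic, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// getting and listing
	got, err := ics.Get(ctx, "e2e-example", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("controller:", got.Spec.Controller)
	list, err := ics.List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("classes:", len(list.Items))

	// patching an annotation, which is the change the watch above observes
	patch := []byte(`{"metadata":{"annotations":{"patched":"true"}}}`)
	if _, err := ics.Patch(ctx, "e2e-example", types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// deleting
	if err := ics.Delete(ctx, "e2e-example", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}
```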
•{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":311,"completed":135,"skipped":2575,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:55:12.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jan 31 00:55:13.041: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Jan 31 00:55:15.059: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747651313, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747651313, loc:(*time.Location)(0x79bd420)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747651313, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747651312, loc:(*time.Location)(0x79bd420)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-7d6697c5b7\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 31 00:55:17.064: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747651313, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747651313, loc:(*time.Location)(0x79bd420)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747651313, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747651312, loc:(*time.Location)(0x79bd420)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-7d6697c5b7\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 31 00:55:20.107: INFO: 
Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Jan 31 00:55:20.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:55:21.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-1019" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:8.873 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":311,"completed":136,"skipped":2576,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:55:21.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Jan 31 00:55:21.507: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-3f55cfee-2256-4b2d-a8aa-74d7b127e668" in namespace "security-context-test-4366" to be "Succeeded or Failed" Jan 31 00:55:21.516: INFO: Pod "busybox-readonly-false-3f55cfee-2256-4b2d-a8aa-74d7b127e668": Phase="Pending", Reason="", readiness=false. Elapsed: 9.684327ms Jan 31 00:55:23.522: INFO: Pod "busybox-readonly-false-3f55cfee-2256-4b2d-a8aa-74d7b127e668": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014735798s Jan 31 00:55:25.527: INFO: Pod "busybox-readonly-false-3f55cfee-2256-4b2d-a8aa-74d7b127e668": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.020309766s Jan 31 00:55:25.527: INFO: Pod "busybox-readonly-false-3f55cfee-2256-4b2d-a8aa-74d7b127e668" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:55:25.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4366" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":311,"completed":137,"skipped":2588,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:55:25.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:55:39.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3359" for this suite. 
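The "locally restarted" part of this Job test hinges on restartPolicy: OnFailure: the kubelet restarts the failed container inside the same pod rather than the Job controller replacing the pod. A hedged sketch of a comparable Job follows; the name fail-once-local, the busybox image, the default namespace, and the fail-once script are illustrative, not the e2e suite's actual job. The emptyDir volume survives container restarts, so the marker file distinguishes the first run from the retry.
```go
package main

import (
	"context"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "fail-once-local"},
		Spec: batchv1.JobSpec{
			Parallelism: int32Ptr(2),
			Completions: int32Ptr(4),
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					// OnFailure makes the kubelet restart the container in
					// place (a "local" restart) instead of the Job creating
					// a replacement pod.
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Volumes: []corev1.Volume{{
						Name:         "data",
						VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
					}},
					Containers: []corev1.Container{{
						Name:  "c",
						Image: "busybox",
						// Fail on the first run, succeed after the restart:
						// the emptyDir marker survives container restarts.
						Command: []string{"/bin/sh", "-c",
							"if [ -f /data/ran ]; then exit 0; else touch /data/ran; exit 1; fi"},
						VolumeMounts: []corev1.VolumeMount{{Name: "data", MountPath: "/data"}},
					}},
				},
			},
		},
	}
	if _, err := client.BatchV1().Jobs("default").Create(context.TODO(), job, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```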
• [SLOW TEST:14.107 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":311,"completed":138,"skipped":2630,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:55:39.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Jan 31 00:55:39.739: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:55:40.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-841" for this suite. 
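Creating and deleting a CustomResourceDefinition, as this test does, needs only the apiextensions clientset. A minimal sketch, assuming an illustrative widgets.example.com definition (a v1 CRD must carry a structural schema, hence the bare object schema below):
```go
package main

import (
	"context"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	crds := apiextensionsclient.NewForConfigOrDie(config).ApiextensionsV1().CustomResourceDefinitions()

	crd := &apiextensionsv1.CustomResourceDefinition{
		// CRD names must be <plural>.<group>.
		ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "widgets", Singular: "widget", Kind: "Widget", ListKind: "WidgetList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
				},
			}},
		},
	}
	ctx := context.TODO()
	if _, err := crds.Create(ctx, crd, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	if err := crds.Delete(ctx, "widgets.example.com", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}
```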
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":311,"completed":139,"skipped":2631,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:55:40.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test downward API volume plugin Jan 31 00:55:40.890: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fa22d2b6-7a14-4a85-96c2-7098d5b1c1b6" in namespace "projected-7107" to be "Succeeded or Failed" Jan 31 00:55:40.894: INFO: Pod "downwardapi-volume-fa22d2b6-7a14-4a85-96c2-7098d5b1c1b6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.121099ms Jan 31 00:55:42.899: INFO: Pod "downwardapi-volume-fa22d2b6-7a14-4a85-96c2-7098d5b1c1b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008520773s Jan 31 00:55:44.907: INFO: Pod "downwardapi-volume-fa22d2b6-7a14-4a85-96c2-7098d5b1c1b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016206899s STEP: Saw pod success Jan 31 00:55:44.907: INFO: Pod "downwardapi-volume-fa22d2b6-7a14-4a85-96c2-7098d5b1c1b6" satisfied condition "Succeeded or Failed" Jan 31 00:55:44.952: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-fa22d2b6-7a14-4a85-96c2-7098d5b1c1b6 container client-container: STEP: delete the pod Jan 31 00:55:45.182: INFO: Waiting for pod downwardapi-volume-fa22d2b6-7a14-4a85-96c2-7098d5b1c1b6 to disappear Jan 31 00:55:45.194: INFO: Pod downwardapi-volume-fa22d2b6-7a14-4a85-96c2-7098d5b1c1b6 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:55:45.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7107" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":311,"completed":140,"skipped":2653,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:55:45.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 [It] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Jan 31 00:55:45.356: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Jan 31 00:55:45.363: INFO: Number of nodes with available pods: 0 Jan 31 00:55:45.363: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Jan 31 00:55:45.414: INFO: Number of nodes with available pods: 0 Jan 31 00:55:45.414: INFO: Node latest-worker is running more than one daemon pod Jan 31 00:55:46.418: INFO: Number of nodes with available pods: 0 Jan 31 00:55:46.418: INFO: Node latest-worker is running more than one daemon pod Jan 31 00:55:47.418: INFO: Number of nodes with available pods: 0 Jan 31 00:55:47.418: INFO: Node latest-worker is running more than one daemon pod Jan 31 00:55:48.418: INFO: Number of nodes with available pods: 0 Jan 31 00:55:48.418: INFO: Node latest-worker is running more than one daemon pod Jan 31 00:55:49.419: INFO: Number of nodes with available pods: 1 Jan 31 00:55:49.419: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Jan 31 00:55:49.456: INFO: Number of nodes with available pods: 1 Jan 31 00:55:49.456: INFO: Number of running nodes: 0, number of available pods: 1 Jan 31 00:55:50.464: INFO: Number of nodes with available pods: 0 Jan 31 00:55:50.464: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jan 31 00:55:50.551: INFO: Number of nodes with available pods: 0 Jan 31 00:55:50.551: INFO: Node latest-worker is running more than one daemon pod Jan 31 00:55:51.556: INFO: Number of nodes with available pods: 0 Jan 31 00:55:51.556: INFO: Node latest-worker is running more than one daemon pod Jan 31 00:55:52.555: INFO: Number of nodes with available pods: 0 Jan 31 00:55:52.555: INFO: Node latest-worker is running more than one daemon pod Jan 31 00:55:53.555: INFO: Number of nodes with available pods: 0 Jan 31 00:55:53.556: INFO: Node latest-worker is running more than one daemon pod Jan 31 00:55:54.566: INFO: Number of nodes with available pods: 0 Jan 31 00:55:54.566: INFO: Node latest-worker is running more than one daemon pod Jan 31 00:55:55.558: INFO: Number of nodes with available pods: 0 Jan 31 
00:55:55.558: INFO: Node latest-worker is running more than one daemon pod Jan 31 00:55:56.554: INFO: Number of nodes with available pods: 0 Jan 31 00:55:56.554: INFO: Node latest-worker is running more than one daemon pod Jan 31 00:55:57.555: INFO: Number of nodes with available pods: 0 Jan 31 00:55:57.555: INFO: Node latest-worker is running more than one daemon pod Jan 31 00:55:58.554: INFO: Number of nodes with available pods: 0 Jan 31 00:55:58.554: INFO: Node latest-worker is running more than one daemon pod Jan 31 00:55:59.556: INFO: Number of nodes with available pods: 0 Jan 31 00:55:59.556: INFO: Node latest-worker is running more than one daemon pod Jan 31 00:56:00.554: INFO: Number of nodes with available pods: 0 Jan 31 00:56:00.554: INFO: Node latest-worker is running more than one daemon pod Jan 31 00:56:01.556: INFO: Number of nodes with available pods: 0 Jan 31 00:56:01.556: INFO: Node latest-worker is running more than one daemon pod Jan 31 00:56:02.555: INFO: Number of nodes with available pods: 0 Jan 31 00:56:02.555: INFO: Node latest-worker is running more than one daemon pod Jan 31 00:56:03.555: INFO: Number of nodes with available pods: 0 Jan 31 00:56:03.555: INFO: Node latest-worker is running more than one daemon pod Jan 31 00:56:04.555: INFO: Number of nodes with available pods: 0 Jan 31 00:56:04.555: INFO: Node latest-worker is running more than one daemon pod Jan 31 00:56:05.554: INFO: Number of nodes with available pods: 0 Jan 31 00:56:05.554: INFO: Node latest-worker is running more than one daemon pod Jan 31 00:56:06.555: INFO: Number of nodes with available pods: 0 Jan 31 00:56:06.555: INFO: Node latest-worker is running more than one daemon pod Jan 31 00:56:07.556: INFO: Number of nodes with available pods: 0 Jan 31 00:56:07.556: INFO: Node latest-worker is running more than one daemon pod Jan 31 00:56:08.556: INFO: Number of nodes with available pods: 0 Jan 31 00:56:08.556: INFO: Node latest-worker is running more than one daemon pod Jan 31 00:56:09.556: INFO: Number of nodes with available pods: 0 Jan 31 00:56:09.556: INFO: Node latest-worker is running more than one daemon pod Jan 31 00:56:10.570: INFO: Number of nodes with available pods: 0 Jan 31 00:56:10.571: INFO: Node latest-worker is running more than one daemon pod Jan 31 00:56:11.555: INFO: Number of nodes with available pods: 0 Jan 31 00:56:11.555: INFO: Node latest-worker is running more than one daemon pod Jan 31 00:56:12.554: INFO: Number of nodes with available pods: 0 Jan 31 00:56:12.554: INFO: Node latest-worker is running more than one daemon pod Jan 31 00:56:13.557: INFO: Number of nodes with available pods: 0 Jan 31 00:56:13.557: INFO: Node latest-worker is running more than one daemon pod Jan 31 00:56:14.556: INFO: Number of nodes with available pods: 0 Jan 31 00:56:14.556: INFO: Node latest-worker is running more than one daemon pod Jan 31 00:56:15.555: INFO: Number of nodes with available pods: 1 Jan 31 00:56:15.555: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6353, will wait for the garbage collector to delete the pods Jan 31 00:56:15.621: INFO: Deleting DaemonSet.extensions daemon-set took: 7.474771ms Jan 31 00:56:16.221: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.234149ms Jan 31 
00:56:21.251: INFO: Number of nodes with available pods: 0 Jan 31 00:56:21.251: INFO: Number of running nodes: 0, number of available pods: 0 Jan 31 00:56:21.253: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"1122458"},"items":null} Jan 31 00:56:21.255: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1122458"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:56:21.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6353" for this suite. • [SLOW TEST:36.071 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":311,"completed":141,"skipped":2713,"failed":0} SSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:56:21.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating pod liveness-9e3605db-d8e5-4451-80ce-dce9e675d35a in namespace container-probe-6580 Jan 31 00:56:25.445: INFO: Started pod liveness-9e3605db-d8e5-4451-80ce-dce9e675d35a in namespace container-probe-6580 STEP: checking the pod's current state and verifying that restartCount is present Jan 31 00:56:25.448: INFO: Initial restart count of pod liveness-9e3605db-d8e5-4451-80ce-dce9e675d35a is 0 Jan 31 00:56:47.532: INFO: Restart count of pod container-probe-6580/liveness-9e3605db-d8e5-4451-80ce-dce9e675d35a is now 1 (22.084477435s elapsed) Jan 31 00:57:07.584: INFO: Restart count of pod container-probe-6580/liveness-9e3605db-d8e5-4451-80ce-dce9e675d35a is now 2 (42.135822617s elapsed) Jan 31 00:57:25.993: INFO: Restart count of pod container-probe-6580/liveness-9e3605db-d8e5-4451-80ce-dce9e675d35a is now 3 (1m0.54542232s elapsed) Jan 31 00:57:46.041: INFO: Restart count of pod container-probe-6580/liveness-9e3605db-d8e5-4451-80ce-dce9e675d35a is now 4 (1m20.592726752s elapsed) Jan 31 00:58:56.215: INFO: Restart count of pod container-probe-6580/liveness-9e3605db-d8e5-4451-80ce-dce9e675d35a is now 5 (2m30.766959624s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 
00:58:56.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6580" for this suite. • [SLOW TEST:154.917 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":311,"completed":142,"skipped":2719,"failed":0} SSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:58:56.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating configMap with name configmap-test-upd-4ea87b90-f316-49c7-80e9-a135fd96dd65 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-4ea87b90-f316-49c7-80e9-a135fd96dd65 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:59:02.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9322" for this suite. 
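The interesting property this ConfigMap test exercises is that a live update to the object eventually shows up in an already-mounted volume. A minimal sketch of the create-then-update half, assuming the default namespace and illustrative names; the polling pod is omitted.
```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cms := kubernetes.NewForConfigOrDie(config).CoreV1().ConfigMaps("default")
	ctx := context.TODO()

	// Create the ConfigMap that a pod (not shown) mounts as a volume.
	created, err := cms.Create(ctx, &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd"},
		Data:       map[string]string{"data-1": "value-1"},
	}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// Update it in place. The kubelet refreshes ConfigMap volumes
	// asynchronously on its sync period, so the mounted file changes a
	// little later; that is why the test polls ("waiting to observe
	// update in volume") instead of asserting immediately.
	created.Data["data-1"] = "value-2"
	if _, err := cms.Update(ctx, created, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
```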
• [SLOW TEST:6.540 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":311,"completed":143,"skipped":2724,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:59:02.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 31 00:59:03.555: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 31 00:59:05.566: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747651543, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747651543, loc:(*time.Location)(0x79bd420)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747651543, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747651543, loc:(*time.Location)(0x79bd420)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 31 00:59:07.571: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747651543, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747651543, loc:(*time.Location)(0x79bd420)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747651543, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747651543, loc:(*time.Location)(0x79bd420)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 31 00:59:10.642: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 00:59:22.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8726" for this suite. STEP: Destroying namespace "webhook-8726-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:20.259 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":311,"completed":144,"skipped":2743,"failed":0} SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 00:59:23.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-8702 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: 
Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-8702 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8702 Jan 31 00:59:23.193: INFO: Found 0 stateful pods, waiting for 1 Jan 31 00:59:33.198: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jan 31 00:59:33.202: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-8702 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 31 00:59:33.490: INFO: stderr: "I0131 00:59:33.356207 735 log.go:181] (0xc00003a0b0) (0xc0009261e0) Create stream\nI0131 00:59:33.356285 735 log.go:181] (0xc00003a0b0) (0xc0009261e0) Stream added, broadcasting: 1\nI0131 00:59:33.360522 735 log.go:181] (0xc00003a0b0) Reply frame received for 1\nI0131 00:59:33.360598 735 log.go:181] (0xc00003a0b0) (0xc000926320) Create stream\nI0131 00:59:33.360710 735 log.go:181] (0xc00003a0b0) (0xc000926320) Stream added, broadcasting: 3\nI0131 00:59:33.366258 735 log.go:181] (0xc00003a0b0) Reply frame received for 3\nI0131 00:59:33.366290 735 log.go:181] (0xc00003a0b0) (0xc000a03040) Create stream\nI0131 00:59:33.366297 735 log.go:181] (0xc00003a0b0) (0xc000a03040) Stream added, broadcasting: 5\nI0131 00:59:33.367032 735 log.go:181] (0xc00003a0b0) Reply frame received for 5\nI0131 00:59:33.457547 735 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0131 00:59:33.457578 735 log.go:181] (0xc000a03040) (5) Data frame handling\nI0131 00:59:33.457599 735 log.go:181] (0xc000a03040) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0131 00:59:33.481582 735 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0131 00:59:33.481626 735 log.go:181] (0xc000926320) (3) Data frame handling\nI0131 00:59:33.481659 735 log.go:181] (0xc000926320) (3) Data frame sent\nI0131 00:59:33.481675 735 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0131 00:59:33.481689 735 log.go:181] (0xc000926320) (3) Data frame handling\nI0131 00:59:33.481783 735 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0131 00:59:33.481830 735 log.go:181] (0xc000a03040) (5) Data frame handling\nI0131 00:59:33.483917 735 log.go:181] (0xc00003a0b0) Data frame received for 1\nI0131 00:59:33.483945 735 log.go:181] (0xc0009261e0) (1) Data frame handling\nI0131 00:59:33.483959 735 log.go:181] (0xc0009261e0) (1) Data frame sent\nI0131 00:59:33.483975 735 log.go:181] (0xc00003a0b0) (0xc0009261e0) Stream removed, broadcasting: 1\nI0131 00:59:33.484045 735 log.go:181] (0xc00003a0b0) Go away received\nI0131 00:59:33.484382 735 log.go:181] (0xc00003a0b0) (0xc0009261e0) Stream removed, broadcasting: 1\nI0131 00:59:33.484402 735 log.go:181] (0xc00003a0b0) (0xc000926320) Stream removed, broadcasting: 3\nI0131 00:59:33.484414 735 log.go:181] (0xc00003a0b0) (0xc000a03040) Stream removed, broadcasting: 5\n" Jan 31 00:59:33.490: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 31 00:59:33.490: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 31 00:59:33.494: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jan 31 00:59:43.500: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false 
Jan 31 00:59:43.500: INFO: Waiting for statefulset status.replicas updated to 0 Jan 31 00:59:43.517: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999455s Jan 31 00:59:44.522: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.993646275s Jan 31 00:59:45.527: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.989157789s Jan 31 00:59:46.533: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.984102938s Jan 31 00:59:47.538: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.978294436s Jan 31 00:59:48.549: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.972693364s Jan 31 00:59:49.555: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.961835304s Jan 31 00:59:50.564: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.956072413s Jan 31 00:59:51.569: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.947115228s Jan 31 00:59:52.579: INFO: Verifying statefulset ss doesn't scale past 1 for another 942.161104ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8702 Jan 31 00:59:53.584: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-8702 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 00:59:53.838: INFO: stderr: "I0131 00:59:53.728290 753 log.go:181] (0xc0008f82c0) (0xc0001a7860) Create stream\nI0131 00:59:53.728386 753 log.go:181] (0xc0008f82c0) (0xc0001a7860) Stream added, broadcasting: 1\nI0131 00:59:53.730653 753 log.go:181] (0xc0008f82c0) Reply frame received for 1\nI0131 00:59:53.730713 753 log.go:181] (0xc0008f82c0) (0xc00014b2c0) Create stream\nI0131 00:59:53.730730 753 log.go:181] (0xc0008f82c0) (0xc00014b2c0) Stream added, broadcasting: 3\nI0131 00:59:53.731807 753 log.go:181] (0xc0008f82c0) Reply frame received for 3\nI0131 00:59:53.731846 753 log.go:181] (0xc0008f82c0) (0xc0009106e0) Create stream\nI0131 00:59:53.731859 753 log.go:181] (0xc0008f82c0) (0xc0009106e0) Stream added, broadcasting: 5\nI0131 00:59:53.733334 753 log.go:181] (0xc0008f82c0) Reply frame received for 5\nI0131 00:59:53.829944 753 log.go:181] (0xc0008f82c0) Data frame received for 5\nI0131 00:59:53.829974 753 log.go:181] (0xc0009106e0) (5) Data frame handling\nI0131 00:59:53.829986 753 log.go:181] (0xc0009106e0) (5) Data frame sent\nI0131 00:59:53.829993 753 log.go:181] (0xc0008f82c0) Data frame received for 5\nI0131 00:59:53.830000 753 log.go:181] (0xc0009106e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0131 00:59:53.830055 753 log.go:181] (0xc0008f82c0) Data frame received for 3\nI0131 00:59:53.830090 753 log.go:181] (0xc00014b2c0) (3) Data frame handling\nI0131 00:59:53.830121 753 log.go:181] (0xc00014b2c0) (3) Data frame sent\nI0131 00:59:53.830137 753 log.go:181] (0xc0008f82c0) Data frame received for 3\nI0131 00:59:53.830148 753 log.go:181] (0xc00014b2c0) (3) Data frame handling\nI0131 00:59:53.831990 753 log.go:181] (0xc0008f82c0) Data frame received for 1\nI0131 00:59:53.832019 753 log.go:181] (0xc0001a7860) (1) Data frame handling\nI0131 00:59:53.832038 753 log.go:181] (0xc0001a7860) (1) Data frame sent\nI0131 00:59:53.832056 753 log.go:181] (0xc0008f82c0) (0xc0001a7860) Stream removed, broadcasting: 1\nI0131 00:59:53.832083 753 log.go:181] (0xc0008f82c0) Go away received\nI0131 00:59:53.832569 753 log.go:181] (0xc0008f82c0) (0xc0001a7860) 
Stream removed, broadcasting: 1\nI0131 00:59:53.832593 753 log.go:181] (0xc0008f82c0) (0xc00014b2c0) Stream removed, broadcasting: 3\nI0131 00:59:53.832603 753 log.go:181] (0xc0008f82c0) (0xc0009106e0) Stream removed, broadcasting: 5\n" Jan 31 00:59:53.839: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 31 00:59:53.839: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 31 00:59:53.842: INFO: Found 1 stateful pods, waiting for 3 Jan 31 01:00:03.847: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 31 01:00:03.847: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 31 01:00:03.847: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Jan 31 01:00:03.855: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-8702 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 31 01:00:04.115: INFO: stderr: "I0131 01:00:04.006844 771 log.go:181] (0xc00096a2c0) (0xc000b44000) Create stream\nI0131 01:00:04.006922 771 log.go:181] (0xc00096a2c0) (0xc000b44000) Stream added, broadcasting: 1\nI0131 01:00:04.009047 771 log.go:181] (0xc00096a2c0) Reply frame received for 1\nI0131 01:00:04.009086 771 log.go:181] (0xc00096a2c0) (0xc000b440a0) Create stream\nI0131 01:00:04.009097 771 log.go:181] (0xc00096a2c0) (0xc000b440a0) Stream added, broadcasting: 3\nI0131 01:00:04.010183 771 log.go:181] (0xc00096a2c0) Reply frame received for 3\nI0131 01:00:04.010220 771 log.go:181] (0xc00096a2c0) (0xc000d8c000) Create stream\nI0131 01:00:04.010240 771 log.go:181] (0xc00096a2c0) (0xc000d8c000) Stream added, broadcasting: 5\nI0131 01:00:04.011373 771 log.go:181] (0xc00096a2c0) Reply frame received for 5\nI0131 01:00:04.109448 771 log.go:181] (0xc00096a2c0) Data frame received for 3\nI0131 01:00:04.109501 771 log.go:181] (0xc000b440a0) (3) Data frame handling\nI0131 01:00:04.109518 771 log.go:181] (0xc000b440a0) (3) Data frame sent\nI0131 01:00:04.109529 771 log.go:181] (0xc00096a2c0) Data frame received for 3\nI0131 01:00:04.109540 771 log.go:181] (0xc000b440a0) (3) Data frame handling\nI0131 01:00:04.109610 771 log.go:181] (0xc00096a2c0) Data frame received for 5\nI0131 01:00:04.109647 771 log.go:181] (0xc000d8c000) (5) Data frame handling\nI0131 01:00:04.109669 771 log.go:181] (0xc000d8c000) (5) Data frame sent\nI0131 01:00:04.109686 771 log.go:181] (0xc00096a2c0) Data frame received for 5\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0131 01:00:04.109706 771 log.go:181] (0xc000d8c000) (5) Data frame handling\nI0131 01:00:04.111162 771 log.go:181] (0xc00096a2c0) Data frame received for 1\nI0131 01:00:04.111197 771 log.go:181] (0xc000b44000) (1) Data frame handling\nI0131 01:00:04.111212 771 log.go:181] (0xc000b44000) (1) Data frame sent\nI0131 01:00:04.111237 771 log.go:181] (0xc00096a2c0) (0xc000b44000) Stream removed, broadcasting: 1\nI0131 01:00:04.111273 771 log.go:181] (0xc00096a2c0) Go away received\nI0131 01:00:04.111549 771 log.go:181] (0xc00096a2c0) (0xc000b44000) Stream removed, broadcasting: 1\nI0131 01:00:04.111563 771 log.go:181] (0xc00096a2c0) (0xc000b440a0) Stream removed, broadcasting: 3\nI0131 01:00:04.111569 771 log.go:181] 
(0xc00096a2c0) (0xc000d8c000) Stream removed, broadcasting: 5\n" Jan 31 01:00:04.115: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 31 01:00:04.115: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 31 01:00:04.116: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-8702 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 31 01:00:04.370: INFO: stderr: "I0131 01:00:04.251463 789 log.go:181] (0xc0001902c0) (0xc000f90280) Create stream\nI0131 01:00:04.251562 789 log.go:181] (0xc0001902c0) (0xc000f90280) Stream added, broadcasting: 1\nI0131 01:00:04.253557 789 log.go:181] (0xc0001902c0) Reply frame received for 1\nI0131 01:00:04.253610 789 log.go:181] (0xc0001902c0) (0xc0003b5400) Create stream\nI0131 01:00:04.253626 789 log.go:181] (0xc0001902c0) (0xc0003b5400) Stream added, broadcasting: 3\nI0131 01:00:04.255117 789 log.go:181] (0xc0001902c0) Reply frame received for 3\nI0131 01:00:04.255184 789 log.go:181] (0xc0001902c0) (0xc0000cb9a0) Create stream\nI0131 01:00:04.255214 789 log.go:181] (0xc0001902c0) (0xc0000cb9a0) Stream added, broadcasting: 5\nI0131 01:00:04.256322 789 log.go:181] (0xc0001902c0) Reply frame received for 5\nI0131 01:00:04.324336 789 log.go:181] (0xc0001902c0) Data frame received for 5\nI0131 01:00:04.324377 789 log.go:181] (0xc0000cb9a0) (5) Data frame handling\nI0131 01:00:04.324399 789 log.go:181] (0xc0000cb9a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0131 01:00:04.359362 789 log.go:181] (0xc0001902c0) Data frame received for 3\nI0131 01:00:04.359415 789 log.go:181] (0xc0003b5400) (3) Data frame handling\nI0131 01:00:04.359431 789 log.go:181] (0xc0003b5400) (3) Data frame sent\nI0131 01:00:04.359467 789 log.go:181] (0xc0001902c0) Data frame received for 5\nI0131 01:00:04.359512 789 log.go:181] (0xc0000cb9a0) (5) Data frame handling\nI0131 01:00:04.359545 789 log.go:181] (0xc0001902c0) Data frame received for 3\nI0131 01:00:04.359581 789 log.go:181] (0xc0003b5400) (3) Data frame handling\nI0131 01:00:04.361781 789 log.go:181] (0xc0001902c0) Data frame received for 1\nI0131 01:00:04.361808 789 log.go:181] (0xc000f90280) (1) Data frame handling\nI0131 01:00:04.361838 789 log.go:181] (0xc000f90280) (1) Data frame sent\nI0131 01:00:04.361868 789 log.go:181] (0xc0001902c0) (0xc000f90280) Stream removed, broadcasting: 1\nI0131 01:00:04.361888 789 log.go:181] (0xc0001902c0) Go away received\nI0131 01:00:04.362386 789 log.go:181] (0xc0001902c0) (0xc000f90280) Stream removed, broadcasting: 1\nI0131 01:00:04.362409 789 log.go:181] (0xc0001902c0) (0xc0003b5400) Stream removed, broadcasting: 3\nI0131 01:00:04.362421 789 log.go:181] (0xc0001902c0) (0xc0000cb9a0) Stream removed, broadcasting: 5\n" Jan 31 01:00:04.370: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 31 01:00:04.370: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 31 01:00:04.370: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-8702 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 31 01:00:04.703: INFO: stderr: "I0131 01:00:04.548828 807 log.go:181] (0xc000974160) (0xc000bf2500) Create 
stream\nI0131 01:00:04.549338 807 log.go:181] (0xc000974160) (0xc000bf2500) Stream added, broadcasting: 1\nI0131 01:00:04.551339 807 log.go:181] (0xc000974160) Reply frame received for 1\nI0131 01:00:04.551392 807 log.go:181] (0xc000974160) (0xc0009960a0) Create stream\nI0131 01:00:04.551408 807 log.go:181] (0xc000974160) (0xc0009960a0) Stream added, broadcasting: 3\nI0131 01:00:04.552991 807 log.go:181] (0xc000974160) Reply frame received for 3\nI0131 01:00:04.553046 807 log.go:181] (0xc000974160) (0xc000648000) Create stream\nI0131 01:00:04.553066 807 log.go:181] (0xc000974160) (0xc000648000) Stream added, broadcasting: 5\nI0131 01:00:04.554037 807 log.go:181] (0xc000974160) Reply frame received for 5\nI0131 01:00:04.629072 807 log.go:181] (0xc000974160) Data frame received for 5\nI0131 01:00:04.629097 807 log.go:181] (0xc000648000) (5) Data frame handling\nI0131 01:00:04.629112 807 log.go:181] (0xc000648000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0131 01:00:04.695239 807 log.go:181] (0xc000974160) Data frame received for 3\nI0131 01:00:04.695265 807 log.go:181] (0xc0009960a0) (3) Data frame handling\nI0131 01:00:04.695289 807 log.go:181] (0xc0009960a0) (3) Data frame sent\nI0131 01:00:04.695707 807 log.go:181] (0xc000974160) Data frame received for 5\nI0131 01:00:04.695736 807 log.go:181] (0xc000648000) (5) Data frame handling\nI0131 01:00:04.695792 807 log.go:181] (0xc000974160) Data frame received for 3\nI0131 01:00:04.695808 807 log.go:181] (0xc0009960a0) (3) Data frame handling\nI0131 01:00:04.698373 807 log.go:181] (0xc000974160) Data frame received for 1\nI0131 01:00:04.698389 807 log.go:181] (0xc000bf2500) (1) Data frame handling\nI0131 01:00:04.698403 807 log.go:181] (0xc000bf2500) (1) Data frame sent\nI0131 01:00:04.698415 807 log.go:181] (0xc000974160) (0xc000bf2500) Stream removed, broadcasting: 1\nI0131 01:00:04.698473 807 log.go:181] (0xc000974160) Go away received\nI0131 01:00:04.698719 807 log.go:181] (0xc000974160) (0xc000bf2500) Stream removed, broadcasting: 1\nI0131 01:00:04.698732 807 log.go:181] (0xc000974160) (0xc0009960a0) Stream removed, broadcasting: 3\nI0131 01:00:04.698739 807 log.go:181] (0xc000974160) (0xc000648000) Stream removed, broadcasting: 5\n" Jan 31 01:00:04.703: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 31 01:00:04.703: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 31 01:00:04.703: INFO: Waiting for statefulset status.replicas updated to 0 Jan 31 01:00:04.708: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jan 31 01:00:14.718: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 31 01:00:14.718: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 31 01:00:14.718: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 31 01:00:14.747: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999404s Jan 31 01:00:15.768: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.978465146s Jan 31 01:00:16.795: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.956561509s Jan 31 01:00:17.803: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.929616072s Jan 31 01:00:18.806: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.92256229s Jan 31 01:00:19.810: INFO: 
Verifying statefulset ss doesn't scale past 3 for another 4.918790221s Jan 31 01:00:20.815: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.914876407s Jan 31 01:00:21.819: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.909766703s Jan 31 01:00:22.823: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.905813903s Jan 31 01:00:23.828: INFO: Verifying statefulset ss doesn't scale past 3 for another 901.580803ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-8702 Jan 31 01:00:24.832: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-8702 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 01:00:25.056: INFO: stderr: "I0131 01:00:24.967506     826 log.go:181] (0xc00097a2c0) (0xc0009739a0) Create stream\nI0131 01:00:24.967616     826 log.go:181] (0xc00097a2c0) (0xc0009739a0) Stream added, broadcasting: 1\nI0131 01:00:24.969813     826 log.go:181] (0xc00097a2c0) Reply frame received for 1\nI0131 01:00:24.969853     826 log.go:181] (0xc00097a2c0) (0xc000990460) Create stream\nI0131 01:00:24.969866     826 log.go:181] (0xc00097a2c0) (0xc000990460) Stream added, broadcasting: 3\nI0131 01:00:24.970731     826 log.go:181] (0xc00097a2c0) Reply frame received for 3\nI0131 01:00:24.970781     826 log.go:181] (0xc00097a2c0) (0xc000991720) Create stream\nI0131 01:00:24.970797     826 log.go:181] (0xc00097a2c0) (0xc000991720) Stream added, broadcasting: 5\nI0131 01:00:24.971659     826 log.go:181] (0xc00097a2c0) Reply frame received for 5\nI0131 01:00:25.047866     826 log.go:181] (0xc00097a2c0) Data frame received for 5\nI0131 01:00:25.047909     826 log.go:181] (0xc000991720) (5) Data frame handling\nI0131 01:00:25.047926     826 log.go:181] (0xc000991720) (5) Data frame sent\nI0131 01:00:25.047938     826 log.go:181] (0xc00097a2c0) Data frame received for 5\nI0131 01:00:25.047950     826 log.go:181] (0xc000991720) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0131 01:00:25.048027     826 log.go:181] (0xc00097a2c0) Data frame received for 3\nI0131 01:00:25.048047     826 log.go:181] (0xc000990460) (3) Data frame handling\nI0131 01:00:25.048061     826 log.go:181] (0xc000990460) (3) Data frame sent\nI0131 01:00:25.048078     826 log.go:181] (0xc00097a2c0) Data frame received for 3\nI0131 01:00:25.048091     826 log.go:181] (0xc000990460) (3) Data frame handling\nI0131 01:00:25.049940     826 log.go:181] (0xc00097a2c0) Data frame received for 1\nI0131 01:00:25.049954     826 log.go:181] (0xc0009739a0) (1) Data frame handling\nI0131 01:00:25.049962     826 log.go:181] (0xc0009739a0) (1) Data frame sent\nI0131 01:00:25.049973     826 log.go:181] (0xc00097a2c0) (0xc0009739a0) Stream removed, broadcasting: 1\nI0131 01:00:25.050034     826 log.go:181] (0xc00097a2c0) Go away received\nI0131 01:00:25.050279     826 log.go:181] (0xc00097a2c0) (0xc0009739a0) Stream removed, broadcasting: 1\nI0131 01:00:25.050290     826 log.go:181] (0xc00097a2c0) (0xc000990460) Stream removed, broadcasting: 3\nI0131 01:00:25.050297     826 log.go:181] (0xc00097a2c0) (0xc000991720) Stream removed, broadcasting: 5\n" Jan 31 01:00:25.056: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 31 01:00:25.056: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 31 01:00:25.056: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371
--kubeconfig=/root/.kube/config --namespace=statefulset-8702 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 01:00:25.268: INFO: stderr: "I0131 01:00:25.193235 844 log.go:181] (0xc00003a0b0) (0xc0006c2000) Create stream\nI0131 01:00:25.193306 844 log.go:181] (0xc00003a0b0) (0xc0006c2000) Stream added, broadcasting: 1\nI0131 01:00:25.195318 844 log.go:181] (0xc00003a0b0) Reply frame received for 1\nI0131 01:00:25.195384 844 log.go:181] (0xc00003a0b0) (0xc000c243c0) Create stream\nI0131 01:00:25.195411 844 log.go:181] (0xc00003a0b0) (0xc000c243c0) Stream added, broadcasting: 3\nI0131 01:00:25.196231 844 log.go:181] (0xc00003a0b0) Reply frame received for 3\nI0131 01:00:25.196255 844 log.go:181] (0xc00003a0b0) (0xc0006c20a0) Create stream\nI0131 01:00:25.196264 844 log.go:181] (0xc00003a0b0) (0xc0006c20a0) Stream added, broadcasting: 5\nI0131 01:00:25.197183 844 log.go:181] (0xc00003a0b0) Reply frame received for 5\nI0131 01:00:25.260982 844 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0131 01:00:25.261046 844 log.go:181] (0xc000c243c0) (3) Data frame handling\nI0131 01:00:25.261067 844 log.go:181] (0xc000c243c0) (3) Data frame sent\nI0131 01:00:25.261095 844 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0131 01:00:25.261132 844 log.go:181] (0xc000c243c0) (3) Data frame handling\nI0131 01:00:25.261155 844 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0131 01:00:25.261167 844 log.go:181] (0xc0006c20a0) (5) Data frame handling\nI0131 01:00:25.261179 844 log.go:181] (0xc0006c20a0) (5) Data frame sent\nI0131 01:00:25.261194 844 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0131 01:00:25.261214 844 log.go:181] (0xc0006c20a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0131 01:00:25.262781 844 log.go:181] (0xc00003a0b0) Data frame received for 1\nI0131 01:00:25.262824 844 log.go:181] (0xc0006c2000) (1) Data frame handling\nI0131 01:00:25.262851 844 log.go:181] (0xc0006c2000) (1) Data frame sent\nI0131 01:00:25.262878 844 log.go:181] (0xc00003a0b0) (0xc0006c2000) Stream removed, broadcasting: 1\nI0131 01:00:25.262914 844 log.go:181] (0xc00003a0b0) Go away received\nI0131 01:00:25.263622 844 log.go:181] (0xc00003a0b0) (0xc0006c2000) Stream removed, broadcasting: 1\nI0131 01:00:25.263651 844 log.go:181] (0xc00003a0b0) (0xc000c243c0) Stream removed, broadcasting: 3\nI0131 01:00:25.263664 844 log.go:181] (0xc00003a0b0) (0xc0006c20a0) Stream removed, broadcasting: 5\n" Jan 31 01:00:25.268: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 31 01:00:25.268: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 31 01:00:25.268: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-8702 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 01:00:25.484: INFO: stderr: "I0131 01:00:25.410177 862 log.go:181] (0xc0005511e0) (0xc0008ee3c0) Create stream\nI0131 01:00:25.410219 862 log.go:181] (0xc0005511e0) (0xc0008ee3c0) Stream added, broadcasting: 1\nI0131 01:00:25.411864 862 log.go:181] (0xc0005511e0) Reply frame received for 1\nI0131 01:00:25.411888 862 log.go:181] (0xc0005511e0) (0xc00062e3c0) Create stream\nI0131 01:00:25.411896 862 log.go:181] (0xc0005511e0) (0xc00062e3c0) Stream added, broadcasting: 3\nI0131 01:00:25.412617 862 log.go:181] (0xc0005511e0) Reply 
frame received for 3\nI0131 01:00:25.412676 862 log.go:181] (0xc0005511e0) (0xc000548000) Create stream\nI0131 01:00:25.412709 862 log.go:181] (0xc0005511e0) (0xc000548000) Stream added, broadcasting: 5\nI0131 01:00:25.413484 862 log.go:181] (0xc0005511e0) Reply frame received for 5\nI0131 01:00:25.476818 862 log.go:181] (0xc0005511e0) Data frame received for 5\nI0131 01:00:25.476961 862 log.go:181] (0xc000548000) (5) Data frame handling\nI0131 01:00:25.476977 862 log.go:181] (0xc000548000) (5) Data frame sent\nI0131 01:00:25.476988 862 log.go:181] (0xc0005511e0) Data frame received for 5\nI0131 01:00:25.476996 862 log.go:181] (0xc000548000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0131 01:00:25.477056 862 log.go:181] (0xc0005511e0) Data frame received for 3\nI0131 01:00:25.477095 862 log.go:181] (0xc00062e3c0) (3) Data frame handling\nI0131 01:00:25.477130 862 log.go:181] (0xc00062e3c0) (3) Data frame sent\nI0131 01:00:25.477553 862 log.go:181] (0xc0005511e0) Data frame received for 3\nI0131 01:00:25.477589 862 log.go:181] (0xc00062e3c0) (3) Data frame handling\nI0131 01:00:25.479138 862 log.go:181] (0xc0005511e0) Data frame received for 1\nI0131 01:00:25.479165 862 log.go:181] (0xc0008ee3c0) (1) Data frame handling\nI0131 01:00:25.479175 862 log.go:181] (0xc0008ee3c0) (1) Data frame sent\nI0131 01:00:25.479187 862 log.go:181] (0xc0005511e0) (0xc0008ee3c0) Stream removed, broadcasting: 1\nI0131 01:00:25.479236 862 log.go:181] (0xc0005511e0) Go away received\nI0131 01:00:25.479510 862 log.go:181] (0xc0005511e0) (0xc0008ee3c0) Stream removed, broadcasting: 1\nI0131 01:00:25.479522 862 log.go:181] (0xc0005511e0) (0xc00062e3c0) Stream removed, broadcasting: 3\nI0131 01:00:25.479529 862 log.go:181] (0xc0005511e0) (0xc000548000) Stream removed, broadcasting: 5\n" Jan 31 01:00:25.484: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 31 01:00:25.484: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 31 01:00:25.484: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jan 31 01:02:15.499: INFO: Deleting all statefulset in ns statefulset-8702 Jan 31 01:02:15.503: INFO: Scaling statefulset ss to 0 Jan 31 01:02:15.516: INFO: Waiting for statefulset status.replicas updated to 0 Jan 31 01:02:15.518: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:02:15.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8702" for this suite. 
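The scale-down ordering asserted above is driven entirely through the pods' readiness probes: moving index.html out of the httpd docroot makes a pod's HTTP readiness probe fail, and moving it back restores readiness, so the suite can check both that an OrderedReady StatefulSet refuses to scale while any pod is unhealthy and that, once all pods are ready again, pods are deleted in reverse ordinal order. A minimal hand-run sketch of the same trick, assuming an existing httpd-based StatefulSet named ss in namespace statefulset-8702 with an HTTP readiness probe (names taken from the log above purely for illustration):

$ kubectl exec ss-2 -n statefulset-8702 -- /bin/sh -c \
    'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'    # readiness probe now fails
$ kubectl scale statefulset ss -n statefulset-8702 --replicas=0   # halts: a pod is unhealthy
$ kubectl exec ss-2 -n statefulset-8702 -- /bin/sh -c \
    'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'    # readiness restored
$ kubectl get pods -n statefulset-8702 -w   # ss-2, ss-1, ss-0 now terminate in reverse order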
• [SLOW TEST:172.501 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":311,"completed":145,"skipped":2750,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:02:15.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test emptydir 0666 on node default medium Jan 31 01:02:15.690: INFO: Waiting up to 5m0s for pod "pod-45c454bb-b1f1-4f94-820c-760e8a6e37ca" in namespace "emptydir-9226" to be "Succeeded or Failed" Jan 31 01:02:15.733: INFO: Pod "pod-45c454bb-b1f1-4f94-820c-760e8a6e37ca": Phase="Pending", Reason="", readiness=false. Elapsed: 42.582379ms Jan 31 01:02:17.745: INFO: Pod "pod-45c454bb-b1f1-4f94-820c-760e8a6e37ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054228037s Jan 31 01:02:19.805: INFO: Pod "pod-45c454bb-b1f1-4f94-820c-760e8a6e37ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.114187989s STEP: Saw pod success Jan 31 01:02:19.805: INFO: Pod "pod-45c454bb-b1f1-4f94-820c-760e8a6e37ca" satisfied condition "Succeeded or Failed" Jan 31 01:02:19.807: INFO: Trying to get logs from node latest-worker pod pod-45c454bb-b1f1-4f94-820c-760e8a6e37ca container test-container: STEP: delete the pod Jan 31 01:02:19.855: INFO: Waiting for pod pod-45c454bb-b1f1-4f94-820c-760e8a6e37ca to disappear Jan 31 01:02:19.869: INFO: Pod pod-45c454bb-b1f1-4f94-820c-760e8a6e37ca no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:02:19.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9226" for this suite. 
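EmptyDir matrix names such as (non-root,0666,default) encode the user the test container runs as, the file mode it verifies, and the volume medium ("default" means node-local disk, as opposed to medium: Memory). A pod of the same shape can be written by hand; this is only a sketch, substituting busybox for the suite's agnhost image, with illustrative names throughout:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-demo            # illustrative name
spec:
  securityContext:
    runAsUser: 1001                   # the "non-root" part of the variant name
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hi > /mnt/f && chmod 0666 /mnt/f && ls -l /mnt/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir: {}                      # "default" medium: backed by node disk
EOF
$ kubectl logs emptydir-0666-demo     # once Succeeded; expect -rw-rw-rw- on /mnt/f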
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":146,"skipped":2788,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:02:19.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:02:24.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9141" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":147,"skipped":2803,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:02:24.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Jan 31 01:02:24.147: INFO: Waiting up to 5m0s for pod "busybox-user-65534-dfe50536-fe34-4ff6-a72e-61d4a7300e8c" in namespace "security-context-test-7727" to be "Succeeded or Failed" Jan 31 01:02:24.159: INFO: Pod "busybox-user-65534-dfe50536-fe34-4ff6-a72e-61d4a7300e8c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.400904ms Jan 31 01:02:26.203: INFO: Pod "busybox-user-65534-dfe50536-fe34-4ff6-a72e-61d4a7300e8c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055437751s Jan 31 01:02:28.218: INFO: Pod "busybox-user-65534-dfe50536-fe34-4ff6-a72e-61d4a7300e8c": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.071142526s Jan 31 01:02:30.223: INFO: Pod "busybox-user-65534-dfe50536-fe34-4ff6-a72e-61d4a7300e8c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.075540914s Jan 31 01:02:30.223: INFO: Pod "busybox-user-65534-dfe50536-fe34-4ff6-a72e-61d4a7300e8c" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:02:30.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7727" for this suite. • [SLOW TEST:6.213 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 When creating a container with runAsUser /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":148,"skipped":2826,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:02:30.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating replication controller my-hostname-basic-5d5d1052-2c16-424f-8f9b-94f951ce0751 Jan 31 01:02:30.420: INFO: Pod name my-hostname-basic-5d5d1052-2c16-424f-8f9b-94f951ce0751: Found 0 pods out of 1 Jan 31 01:02:35.424: INFO: Pod name my-hostname-basic-5d5d1052-2c16-424f-8f9b-94f951ce0751: Found 1 pods out of 1 Jan 31 01:02:35.424: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-5d5d1052-2c16-424f-8f9b-94f951ce0751" are running Jan 31 01:02:35.427: INFO: Pod "my-hostname-basic-5d5d1052-2c16-424f-8f9b-94f951ce0751-s9cwv" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-31 01:02:30 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-31 01:02:33 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-31 01:02:33 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-31 01:02:30 +0000 UTC 
Reason: Message:}]) Jan 31 01:02:35.427: INFO: Trying to dial the pod Jan 31 01:02:40.439: INFO: Controller my-hostname-basic-5d5d1052-2c16-424f-8f9b-94f951ce0751: Got expected result from replica 1 [my-hostname-basic-5d5d1052-2c16-424f-8f9b-94f951ce0751-s9cwv]: "my-hostname-basic-5d5d1052-2c16-424f-8f9b-94f951ce0751-s9cwv", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:02:40.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6001" for this suite. • [SLOW TEST:10.198 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":311,"completed":149,"skipped":2864,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:02:40.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test override command Jan 31 01:02:40.538: INFO: Waiting up to 5m0s for pod "client-containers-7a807424-d7b7-4d95-9962-20a355430bd8" in namespace "containers-4278" to be "Succeeded or Failed" Jan 31 01:02:40.541: INFO: Pod "client-containers-7a807424-d7b7-4d95-9962-20a355430bd8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.03209ms Jan 31 01:02:42.545: INFO: Pod "client-containers-7a807424-d7b7-4d95-9962-20a355430bd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007621477s Jan 31 01:02:44.550: INFO: Pod "client-containers-7a807424-d7b7-4d95-9962-20a355430bd8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012023982s STEP: Saw pod success Jan 31 01:02:44.550: INFO: Pod "client-containers-7a807424-d7b7-4d95-9962-20a355430bd8" satisfied condition "Succeeded or Failed" Jan 31 01:02:44.553: INFO: Trying to get logs from node latest-worker pod client-containers-7a807424-d7b7-4d95-9962-20a355430bd8 container agnhost-container: STEP: delete the pod Jan 31 01:02:44.583: INFO: Waiting for pod client-containers-7a807424-d7b7-4d95-9962-20a355430bd8 to disappear Jan 31 01:02:44.722: INFO: Pod client-containers-7a807424-d7b7-4d95-9962-20a355430bd8 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:02:44.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4278" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":311,"completed":150,"skipped":2903,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:02:44.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test downward API volume plugin Jan 31 01:02:44.937: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cf620ec0-3afe-43fe-b7b0-9fe5081b9c82" in namespace "projected-1793" to be "Succeeded or Failed" Jan 31 01:02:44.983: INFO: Pod "downwardapi-volume-cf620ec0-3afe-43fe-b7b0-9fe5081b9c82": Phase="Pending", Reason="", readiness=false. Elapsed: 45.649412ms Jan 31 01:02:46.986: INFO: Pod "downwardapi-volume-cf620ec0-3afe-43fe-b7b0-9fe5081b9c82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049119603s Jan 31 01:02:48.990: INFO: Pod "downwardapi-volume-cf620ec0-3afe-43fe-b7b0-9fe5081b9c82": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.052988265s STEP: Saw pod success Jan 31 01:02:48.990: INFO: Pod "downwardapi-volume-cf620ec0-3afe-43fe-b7b0-9fe5081b9c82" satisfied condition "Succeeded or Failed" Jan 31 01:02:48.993: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-cf620ec0-3afe-43fe-b7b0-9fe5081b9c82 container client-container: STEP: delete the pod Jan 31 01:02:49.028: INFO: Waiting for pod downwardapi-volume-cf620ec0-3afe-43fe-b7b0-9fe5081b9c82 to disappear Jan 31 01:02:49.053: INFO: Pod downwardapi-volume-cf620ec0-3afe-43fe-b7b0-9fe5081b9c82 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:02:49.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1793" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":151,"skipped":2905,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:02:49.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 31 01:02:54.237: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:02:54.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3853" for this suite. 
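The empty-message assertion above (Expected: &{} to match Container's Termination Message) is the point of this spec: terminationMessagePolicy: FallbackToLogsOnError copies the tail of the container log into the termination message only when the container exits with an error and the termination-message file is empty, so a successful pod ends with no message at all. A hand-rolled version of the same check, with illustrative names:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: term-msg-demo                 # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo this log line is only used on failure; exit 0"]
    terminationMessagePolicy: FallbackToLogsOnError
EOF
$ kubectl get pod term-msg-demo \
    -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
# prints nothing for a successful exit; change "exit 0" to "exit 1" and the log line appears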
• [SLOW TEST:5.211 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":311,"completed":152,"skipped":2945,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:02:54.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test downward api env vars Jan 31 01:02:54.386: INFO: Waiting up to 5m0s for pod "downward-api-3d45d577-174d-47e8-9091-5f4a5a5026e0" in namespace "downward-api-8461" to be "Succeeded or Failed" Jan 31 01:02:54.429: INFO: Pod "downward-api-3d45d577-174d-47e8-9091-5f4a5a5026e0": Phase="Pending", Reason="", readiness=false. Elapsed: 42.907238ms Jan 31 01:02:56.481: INFO: Pod "downward-api-3d45d577-174d-47e8-9091-5f4a5a5026e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095734461s Jan 31 01:02:58.512: INFO: Pod "downward-api-3d45d577-174d-47e8-9091-5f4a5a5026e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126557427s Jan 31 01:03:00.524: INFO: Pod "downward-api-3d45d577-174d-47e8-9091-5f4a5a5026e0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.138407797s STEP: Saw pod success Jan 31 01:03:00.524: INFO: Pod "downward-api-3d45d577-174d-47e8-9091-5f4a5a5026e0" satisfied condition "Succeeded or Failed" Jan 31 01:03:00.526: INFO: Trying to get logs from node latest-worker pod downward-api-3d45d577-174d-47e8-9091-5f4a5a5026e0 container dapi-container: STEP: delete the pod Jan 31 01:03:00.574: INFO: Waiting for pod downward-api-3d45d577-174d-47e8-9091-5f4a5a5026e0 to disappear Jan 31 01:03:00.579: INFO: Pod downward-api-3d45d577-174d-47e8-9091-5f4a5a5026e0 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:03:00.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8461" for this suite. • [SLOW TEST:6.314 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":311,"completed":153,"skipped":2957,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:03:00.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-4337 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating stateful set ss in namespace statefulset-4337 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4337 Jan 31 01:03:00.713: INFO: Found 0 stateful pods, waiting for 1 Jan 31 01:03:10.718: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jan 31 01:03:10.722: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 31 01:03:14.241: INFO: stderr: "I0131 01:03:14.085933 880 log.go:181] (0xc000d74630) (0xc00067c1e0) Create stream\nI0131 
01:03:14.085998 880 log.go:181] (0xc000d74630) (0xc00067c1e0) Stream added, broadcasting: 1\nI0131 01:03:14.088015 880 log.go:181] (0xc000d74630) Reply frame received for 1\nI0131 01:03:14.088066 880 log.go:181] (0xc000d74630) (0xc000dc6000) Create stream\nI0131 01:03:14.088088 880 log.go:181] (0xc000d74630) (0xc000dc6000) Stream added, broadcasting: 3\nI0131 01:03:14.089004 880 log.go:181] (0xc000d74630) Reply frame received for 3\nI0131 01:03:14.089042 880 log.go:181] (0xc000d74630) (0xc00067c280) Create stream\nI0131 01:03:14.089049 880 log.go:181] (0xc000d74630) (0xc00067c280) Stream added, broadcasting: 5\nI0131 01:03:14.089970 880 log.go:181] (0xc000d74630) Reply frame received for 5\nI0131 01:03:14.185934 880 log.go:181] (0xc000d74630) Data frame received for 5\nI0131 01:03:14.185958 880 log.go:181] (0xc00067c280) (5) Data frame handling\nI0131 01:03:14.185973 880 log.go:181] (0xc00067c280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0131 01:03:14.232676 880 log.go:181] (0xc000d74630) Data frame received for 3\nI0131 01:03:14.232727 880 log.go:181] (0xc000d74630) Data frame received for 5\nI0131 01:03:14.232763 880 log.go:181] (0xc00067c280) (5) Data frame handling\nI0131 01:03:14.232803 880 log.go:181] (0xc000dc6000) (3) Data frame handling\nI0131 01:03:14.232831 880 log.go:181] (0xc000dc6000) (3) Data frame sent\nI0131 01:03:14.233322 880 log.go:181] (0xc000d74630) Data frame received for 3\nI0131 01:03:14.233346 880 log.go:181] (0xc000dc6000) (3) Data frame handling\nI0131 01:03:14.235160 880 log.go:181] (0xc000d74630) Data frame received for 1\nI0131 01:03:14.235204 880 log.go:181] (0xc00067c1e0) (1) Data frame handling\nI0131 01:03:14.235234 880 log.go:181] (0xc00067c1e0) (1) Data frame sent\nI0131 01:03:14.235265 880 log.go:181] (0xc000d74630) (0xc00067c1e0) Stream removed, broadcasting: 1\nI0131 01:03:14.235327 880 log.go:181] (0xc000d74630) Go away received\nI0131 01:03:14.235765 880 log.go:181] (0xc000d74630) (0xc00067c1e0) Stream removed, broadcasting: 1\nI0131 01:03:14.235795 880 log.go:181] (0xc000d74630) (0xc000dc6000) Stream removed, broadcasting: 3\nI0131 01:03:14.235815 880 log.go:181] (0xc000d74630) (0xc00067c280) Stream removed, broadcasting: 5\n" Jan 31 01:03:14.242: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 31 01:03:14.242: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 31 01:03:14.245: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jan 31 01:03:24.251: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 31 01:03:24.251: INFO: Waiting for statefulset status.replicas updated to 0 Jan 31 01:03:24.266: INFO: POD NODE PHASE GRACE CONDITIONS Jan 31 01:03:24.266: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:00 +0000 UTC }] Jan 31 01:03:24.266: INFO: Jan 31 01:03:24.266: INFO: StatefulSet ss has not reached scale 3, at 1 Jan 31 01:03:25.272: INFO: Verifying statefulset ss doesn't scale past 3 for another 
8.993723589s Jan 31 01:03:26.307: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.988161351s Jan 31 01:03:27.356: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.953443119s Jan 31 01:03:28.362: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.903496788s Jan 31 01:03:29.375: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.897533064s Jan 31 01:03:30.382: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.884840567s Jan 31 01:03:31.387: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.8781318s Jan 31 01:03:32.392: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.87317478s Jan 31 01:03:33.399: INFO: Verifying statefulset ss doesn't scale past 3 for another 867.65677ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4337 Jan 31 01:03:34.405: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 01:03:34.650: INFO: stderr: "I0131 01:03:34.544523 898 log.go:181] (0xc00018c370) (0xc000f06000) Create stream\nI0131 01:03:34.544574 898 log.go:181] (0xc00018c370) (0xc000f06000) Stream added, broadcasting: 1\nI0131 01:03:34.546398 898 log.go:181] (0xc00018c370) Reply frame received for 1\nI0131 01:03:34.546478 898 log.go:181] (0xc00018c370) (0xc000f060a0) Create stream\nI0131 01:03:34.546507 898 log.go:181] (0xc00018c370) (0xc000f060a0) Stream added, broadcasting: 3\nI0131 01:03:34.547454 898 log.go:181] (0xc00018c370) Reply frame received for 3\nI0131 01:03:34.547483 898 log.go:181] (0xc00018c370) (0xc0006a2aa0) Create stream\nI0131 01:03:34.547500 898 log.go:181] (0xc00018c370) (0xc0006a2aa0) Stream added, broadcasting: 5\nI0131 01:03:34.548484 898 log.go:181] (0xc00018c370) Reply frame received for 5\nI0131 01:03:34.640822 898 log.go:181] (0xc00018c370) Data frame received for 3\nI0131 01:03:34.640945 898 log.go:181] (0xc000f060a0) (3) Data frame handling\nI0131 01:03:34.640957 898 log.go:181] (0xc000f060a0) (3) Data frame sent\nI0131 01:03:34.640965 898 log.go:181] (0xc00018c370) Data frame received for 3\nI0131 01:03:34.640989 898 log.go:181] (0xc00018c370) Data frame received for 5\nI0131 01:03:34.641032 898 log.go:181] (0xc0006a2aa0) (5) Data frame handling\nI0131 01:03:34.641061 898 log.go:181] (0xc0006a2aa0) (5) Data frame sent\nI0131 01:03:34.641078 898 log.go:181] (0xc00018c370) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0131 01:03:34.641103 898 log.go:181] (0xc0006a2aa0) (5) Data frame handling\nI0131 01:03:34.641124 898 log.go:181] (0xc000f060a0) (3) Data frame handling\nI0131 01:03:34.642720 898 log.go:181] (0xc00018c370) Data frame received for 1\nI0131 01:03:34.642743 898 log.go:181] (0xc000f06000) (1) Data frame handling\nI0131 01:03:34.642760 898 log.go:181] (0xc000f06000) (1) Data frame sent\nI0131 01:03:34.642781 898 log.go:181] (0xc00018c370) (0xc000f06000) Stream removed, broadcasting: 1\nI0131 01:03:34.642802 898 log.go:181] (0xc00018c370) Go away received\nI0131 01:03:34.643267 898 log.go:181] (0xc00018c370) (0xc000f06000) Stream removed, broadcasting: 1\nI0131 01:03:34.643290 898 log.go:181] (0xc00018c370) (0xc000f060a0) Stream removed, broadcasting: 3\nI0131 01:03:34.643303 898 log.go:181] (0xc00018c370) (0xc0006a2aa0) Stream removed, broadcasting: 5\n" Jan 31 01:03:34.650: INFO: 
stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 31 01:03:34.650: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 31 01:03:34.650: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 01:03:34.891: INFO: stderr: "I0131 01:03:34.813924 916 log.go:181] (0xc000140370) (0xc000376140) Create stream\nI0131 01:03:34.814024 916 log.go:181] (0xc000140370) (0xc000376140) Stream added, broadcasting: 1\nI0131 01:03:34.816767 916 log.go:181] (0xc000140370) Reply frame received for 1\nI0131 01:03:34.816805 916 log.go:181] (0xc000140370) (0xc000376960) Create stream\nI0131 01:03:34.816824 916 log.go:181] (0xc000140370) (0xc000376960) Stream added, broadcasting: 3\nI0131 01:03:34.817724 916 log.go:181] (0xc000140370) Reply frame received for 3\nI0131 01:03:34.817755 916 log.go:181] (0xc000140370) (0xc0003770e0) Create stream\nI0131 01:03:34.817764 916 log.go:181] (0xc000140370) (0xc0003770e0) Stream added, broadcasting: 5\nI0131 01:03:34.818541 916 log.go:181] (0xc000140370) Reply frame received for 5\nI0131 01:03:34.883865 916 log.go:181] (0xc000140370) Data frame received for 3\nI0131 01:03:34.883897 916 log.go:181] (0xc000376960) (3) Data frame handling\nI0131 01:03:34.883917 916 log.go:181] (0xc000376960) (3) Data frame sent\nI0131 01:03:34.883926 916 log.go:181] (0xc000140370) Data frame received for 3\nI0131 01:03:34.883934 916 log.go:181] (0xc000376960) (3) Data frame handling\nI0131 01:03:34.884057 916 log.go:181] (0xc000140370) Data frame received for 5\nI0131 01:03:34.884071 916 log.go:181] (0xc0003770e0) (5) Data frame handling\nI0131 01:03:34.884077 916 log.go:181] (0xc0003770e0) (5) Data frame sent\nI0131 01:03:34.884083 916 log.go:181] (0xc000140370) Data frame received for 5\nI0131 01:03:34.884087 916 log.go:181] (0xc0003770e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0131 01:03:34.885576 916 log.go:181] (0xc000140370) Data frame received for 1\nI0131 01:03:34.885597 916 log.go:181] (0xc000376140) (1) Data frame handling\nI0131 01:03:34.885615 916 log.go:181] (0xc000376140) (1) Data frame sent\nI0131 01:03:34.885634 916 log.go:181] (0xc000140370) (0xc000376140) Stream removed, broadcasting: 1\nI0131 01:03:34.885650 916 log.go:181] (0xc000140370) Go away received\nI0131 01:03:34.886194 916 log.go:181] (0xc000140370) (0xc000376140) Stream removed, broadcasting: 1\nI0131 01:03:34.886222 916 log.go:181] (0xc000140370) (0xc000376960) Stream removed, broadcasting: 3\nI0131 01:03:34.886238 916 log.go:181] (0xc000140370) (0xc0003770e0) Stream removed, broadcasting: 5\n" Jan 31 01:03:34.891: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 31 01:03:34.891: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 31 01:03:34.891: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 01:03:35.105: INFO: stderr: "I0131 01:03:35.041291 934 log.go:181] (0xc0008a48f0) (0xc00089c500) Create stream\nI0131 
01:03:35.041350 934 log.go:181] (0xc0008a48f0) (0xc00089c500) Stream added, broadcasting: 1\nI0131 01:03:35.043511 934 log.go:181] (0xc0008a48f0) Reply frame received for 1\nI0131 01:03:35.043549 934 log.go:181] (0xc0008a48f0) (0xc00089c5a0) Create stream\nI0131 01:03:35.043562 934 log.go:181] (0xc0008a48f0) (0xc00089c5a0) Stream added, broadcasting: 3\nI0131 01:03:35.044510 934 log.go:181] (0xc0008a48f0) Reply frame received for 3\nI0131 01:03:35.044545 934 log.go:181] (0xc0008a48f0) (0xc00066e000) Create stream\nI0131 01:03:35.044561 934 log.go:181] (0xc0008a48f0) (0xc00066e000) Stream added, broadcasting: 5\nI0131 01:03:35.045470 934 log.go:181] (0xc0008a48f0) Reply frame received for 5\nI0131 01:03:35.097260 934 log.go:181] (0xc0008a48f0) Data frame received for 3\nI0131 01:03:35.097313 934 log.go:181] (0xc00089c5a0) (3) Data frame handling\nI0131 01:03:35.097347 934 log.go:181] (0xc00089c5a0) (3) Data frame sent\nI0131 01:03:35.097376 934 log.go:181] (0xc0008a48f0) Data frame received for 3\nI0131 01:03:35.097399 934 log.go:181] (0xc00089c5a0) (3) Data frame handling\nI0131 01:03:35.097449 934 log.go:181] (0xc0008a48f0) Data frame received for 5\nI0131 01:03:35.097466 934 log.go:181] (0xc00066e000) (5) Data frame handling\nI0131 01:03:35.097483 934 log.go:181] (0xc00066e000) (5) Data frame sent\nI0131 01:03:35.097494 934 log.go:181] (0xc0008a48f0) Data frame received for 5\nI0131 01:03:35.097502 934 log.go:181] (0xc00066e000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0131 01:03:35.099049 934 log.go:181] (0xc0008a48f0) Data frame received for 1\nI0131 01:03:35.099075 934 log.go:181] (0xc00089c500) (1) Data frame handling\nI0131 01:03:35.099102 934 log.go:181] (0xc00089c500) (1) Data frame sent\nI0131 01:03:35.099122 934 log.go:181] (0xc0008a48f0) (0xc00089c500) Stream removed, broadcasting: 1\nI0131 01:03:35.099146 934 log.go:181] (0xc0008a48f0) Go away received\nI0131 01:03:35.099676 934 log.go:181] (0xc0008a48f0) (0xc00089c500) Stream removed, broadcasting: 1\nI0131 01:03:35.099716 934 log.go:181] (0xc0008a48f0) (0xc00089c5a0) Stream removed, broadcasting: 3\nI0131 01:03:35.099730 934 log.go:181] (0xc0008a48f0) (0xc00066e000) Stream removed, broadcasting: 5\n" Jan 31 01:03:35.106: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 31 01:03:35.106: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 31 01:03:35.110: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Jan 31 01:03:45.116: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 31 01:03:45.116: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 31 01:03:45.116: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jan 31 01:03:45.120: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 31 01:03:45.330: INFO: stderr: "I0131 01:03:45.240652 952 log.go:181] (0xc00096edc0) (0xc00035b5e0) Create stream\nI0131 01:03:45.240700 952 log.go:181] (0xc00096edc0) (0xc00035b5e0) Stream added, broadcasting: 1\nI0131 01:03:45.242504 
952 log.go:181] (0xc00096edc0) Reply frame received for 1\nI0131 01:03:45.242532 952 log.go:181] (0xc00096edc0) (0xc000c4c000) Create stream\nI0131 01:03:45.242540 952 log.go:181] (0xc00096edc0) (0xc000c4c000) Stream added, broadcasting: 3\nI0131 01:03:45.243284 952 log.go:181] (0xc00096edc0) Reply frame received for 3\nI0131 01:03:45.243317 952 log.go:181] (0xc00096edc0) (0xc00035bc20) Create stream\nI0131 01:03:45.243328 952 log.go:181] (0xc00096edc0) (0xc00035bc20) Stream added, broadcasting: 5\nI0131 01:03:45.244025 952 log.go:181] (0xc00096edc0) Reply frame received for 5\nI0131 01:03:45.322501 952 log.go:181] (0xc00096edc0) Data frame received for 3\nI0131 01:03:45.322582 952 log.go:181] (0xc000c4c000) (3) Data frame handling\nI0131 01:03:45.322611 952 log.go:181] (0xc000c4c000) (3) Data frame sent\nI0131 01:03:45.322630 952 log.go:181] (0xc00096edc0) Data frame received for 3\nI0131 01:03:45.322645 952 log.go:181] (0xc000c4c000) (3) Data frame handling\nI0131 01:03:45.322739 952 log.go:181] (0xc00096edc0) Data frame received for 5\nI0131 01:03:45.322767 952 log.go:181] (0xc00035bc20) (5) Data frame handling\nI0131 01:03:45.322787 952 log.go:181] (0xc00035bc20) (5) Data frame sent\nI0131 01:03:45.322813 952 log.go:181] (0xc00096edc0) Data frame received for 5\nI0131 01:03:45.322829 952 log.go:181] (0xc00035bc20) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0131 01:03:45.324302 952 log.go:181] (0xc00096edc0) Data frame received for 1\nI0131 01:03:45.324333 952 log.go:181] (0xc00035b5e0) (1) Data frame handling\nI0131 01:03:45.324348 952 log.go:181] (0xc00035b5e0) (1) Data frame sent\nI0131 01:03:45.324358 952 log.go:181] (0xc00096edc0) (0xc00035b5e0) Stream removed, broadcasting: 1\nI0131 01:03:45.324367 952 log.go:181] (0xc00096edc0) Go away received\nI0131 01:03:45.324713 952 log.go:181] (0xc00096edc0) (0xc00035b5e0) Stream removed, broadcasting: 1\nI0131 01:03:45.324728 952 log.go:181] (0xc00096edc0) (0xc000c4c000) Stream removed, broadcasting: 3\nI0131 01:03:45.324734 952 log.go:181] (0xc00096edc0) (0xc00035bc20) Stream removed, broadcasting: 5\n" Jan 31 01:03:45.330: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 31 01:03:45.330: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 31 01:03:45.331: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 31 01:03:45.571: INFO: stderr: "I0131 01:03:45.458885 970 log.go:181] (0xc000262000) (0xc000990000) Create stream\nI0131 01:03:45.458999 970 log.go:181] (0xc000262000) (0xc000990000) Stream added, broadcasting: 1\nI0131 01:03:45.461861 970 log.go:181] (0xc000262000) Reply frame received for 1\nI0131 01:03:45.461922 970 log.go:181] (0xc000262000) (0xc000b81360) Create stream\nI0131 01:03:45.461938 970 log.go:181] (0xc000262000) (0xc000b81360) Stream added, broadcasting: 3\nI0131 01:03:45.463002 970 log.go:181] (0xc000262000) Reply frame received for 3\nI0131 01:03:45.463040 970 log.go:181] (0xc000262000) (0xc000b815e0) Create stream\nI0131 01:03:45.463053 970 log.go:181] (0xc000262000) (0xc000b815e0) Stream added, broadcasting: 5\nI0131 01:03:45.463995 970 log.go:181] (0xc000262000) Reply frame received for 5\nI0131 01:03:45.530128 970 log.go:181] (0xc000262000) Data frame received for 5\nI0131 
01:03:45.530167 970 log.go:181] (0xc000b815e0) (5) Data frame handling\nI0131 01:03:45.530193 970 log.go:181] (0xc000b815e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0131 01:03:45.562991 970 log.go:181] (0xc000262000) Data frame received for 3\nI0131 01:03:45.563019 970 log.go:181] (0xc000b81360) (3) Data frame handling\nI0131 01:03:45.563042 970 log.go:181] (0xc000b81360) (3) Data frame sent\nI0131 01:03:45.563238 970 log.go:181] (0xc000262000) Data frame received for 5\nI0131 01:03:45.563261 970 log.go:181] (0xc000b815e0) (5) Data frame handling\nI0131 01:03:45.563559 970 log.go:181] (0xc000262000) Data frame received for 3\nI0131 01:03:45.563590 970 log.go:181] (0xc000b81360) (3) Data frame handling\nI0131 01:03:45.565113 970 log.go:181] (0xc000262000) Data frame received for 1\nI0131 01:03:45.565146 970 log.go:181] (0xc000990000) (1) Data frame handling\nI0131 01:03:45.565172 970 log.go:181] (0xc000990000) (1) Data frame sent\nI0131 01:03:45.565187 970 log.go:181] (0xc000262000) (0xc000990000) Stream removed, broadcasting: 1\nI0131 01:03:45.565213 970 log.go:181] (0xc000262000) Go away received\nI0131 01:03:45.565748 970 log.go:181] (0xc000262000) (0xc000990000) Stream removed, broadcasting: 1\nI0131 01:03:45.565771 970 log.go:181] (0xc000262000) (0xc000b81360) Stream removed, broadcasting: 3\nI0131 01:03:45.565786 970 log.go:181] (0xc000262000) (0xc000b815e0) Stream removed, broadcasting: 5\n" Jan 31 01:03:45.571: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 31 01:03:45.571: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 31 01:03:45.572: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 31 01:03:45.811: INFO: stderr: "I0131 01:03:45.698545 988 log.go:181] (0xc000650210) (0xc000648320) Create stream\nI0131 01:03:45.698608 988 log.go:181] (0xc000650210) (0xc000648320) Stream added, broadcasting: 1\nI0131 01:03:45.700753 988 log.go:181] (0xc000650210) Reply frame received for 1\nI0131 01:03:45.700778 988 log.go:181] (0xc000650210) (0xc0006483c0) Create stream\nI0131 01:03:45.700789 988 log.go:181] (0xc000650210) (0xc0006483c0) Stream added, broadcasting: 3\nI0131 01:03:45.701945 988 log.go:181] (0xc000650210) Reply frame received for 3\nI0131 01:03:45.701985 988 log.go:181] (0xc000650210) (0xc000648460) Create stream\nI0131 01:03:45.701999 988 log.go:181] (0xc000650210) (0xc000648460) Stream added, broadcasting: 5\nI0131 01:03:45.703156 988 log.go:181] (0xc000650210) Reply frame received for 5\nI0131 01:03:45.768230 988 log.go:181] (0xc000650210) Data frame received for 5\nI0131 01:03:45.768261 988 log.go:181] (0xc000648460) (5) Data frame handling\nI0131 01:03:45.768281 988 log.go:181] (0xc000648460) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0131 01:03:45.803109 988 log.go:181] (0xc000650210) Data frame received for 3\nI0131 01:03:45.803134 988 log.go:181] (0xc0006483c0) (3) Data frame handling\nI0131 01:03:45.803148 988 log.go:181] (0xc0006483c0) (3) Data frame sent\nI0131 01:03:45.803163 988 log.go:181] (0xc000650210) Data frame received for 3\nI0131 01:03:45.803168 988 log.go:181] (0xc0006483c0) (3) Data frame handling\nI0131 01:03:45.803592 988 log.go:181] (0xc000650210) Data frame received for 5\nI0131 
01:03:45.803626 988 log.go:181] (0xc000648460) (5) Data frame handling\nI0131 01:03:45.805589 988 log.go:181] (0xc000650210) Data frame received for 1\nI0131 01:03:45.805630 988 log.go:181] (0xc000648320) (1) Data frame handling\nI0131 01:03:45.805668 988 log.go:181] (0xc000648320) (1) Data frame sent\nI0131 01:03:45.805730 988 log.go:181] (0xc000650210) (0xc000648320) Stream removed, broadcasting: 1\nI0131 01:03:45.805765 988 log.go:181] (0xc000650210) Go away received\nI0131 01:03:45.806388 988 log.go:181] (0xc000650210) (0xc000648320) Stream removed, broadcasting: 1\nI0131 01:03:45.806409 988 log.go:181] (0xc000650210) (0xc0006483c0) Stream removed, broadcasting: 3\nI0131 01:03:45.806420 988 log.go:181] (0xc000650210) (0xc000648460) Stream removed, broadcasting: 5\n" Jan 31 01:03:45.811: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 31 01:03:45.811: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 31 01:03:45.811: INFO: Waiting for statefulset status.replicas updated to 0 Jan 31 01:03:45.814: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Jan 31 01:03:55.822: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 31 01:03:55.822: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 31 01:03:55.822: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 31 01:03:55.839: INFO: POD NODE PHASE GRACE CONDITIONS Jan 31 01:03:55.839: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:00 +0000 UTC }] Jan 31 01:03:55.839: INFO: ss-1 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:24 +0000 UTC }] Jan 31 01:03:55.839: INFO: ss-2 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:24 +0000 UTC }] Jan 31 01:03:55.839: INFO: Jan 31 01:03:55.839: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 31 01:03:56.846: INFO: POD NODE PHASE GRACE CONDITIONS Jan 31 01:03:56.846: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} 
{ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:00 +0000 UTC }] Jan 31 01:03:56.846: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:24 +0000 UTC }] Jan 31 01:03:56.846: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:24 +0000 UTC }] Jan 31 01:03:56.846: INFO: Jan 31 01:03:56.846: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 31 01:03:57.905: INFO: POD NODE PHASE GRACE CONDITIONS Jan 31 01:03:57.906: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:00 +0000 UTC }] Jan 31 01:03:57.906: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:24 +0000 UTC }] Jan 31 01:03:57.906: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:24 +0000 UTC }] Jan 31 01:03:57.906: INFO: Jan 31 01:03:57.906: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 31 01:03:58.911: INFO: POD NODE PHASE GRACE CONDITIONS Jan 31 01:03:58.911: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 
01:03:00 +0000 UTC }] Jan 31 01:03:58.911: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:24 +0000 UTC }] Jan 31 01:03:58.911: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:24 +0000 UTC }] Jan 31 01:03:58.911: INFO: Jan 31 01:03:58.911: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 31 01:03:59.916: INFO: POD NODE PHASE GRACE CONDITIONS Jan 31 01:03:59.916: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:00 +0000 UTC }] Jan 31 01:03:59.916: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:24 +0000 UTC }] Jan 31 01:03:59.917: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:24 +0000 UTC }] Jan 31 01:03:59.917: INFO: Jan 31 01:03:59.917: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 31 01:04:00.922: INFO: POD NODE PHASE GRACE CONDITIONS Jan 31 01:04:00.922: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:00 +0000 UTC }] Jan 31 01:04:00.922: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 
2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:24 +0000 UTC }] Jan 31 01:04:00.922: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:24 +0000 UTC }] Jan 31 01:04:00.922: INFO: Jan 31 01:04:00.922: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 31 01:04:01.928: INFO: POD NODE PHASE GRACE CONDITIONS Jan 31 01:04:01.928: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:00 +0000 UTC }] Jan 31 01:04:01.928: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:24 +0000 UTC }] Jan 31 01:04:01.928: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:24 +0000 UTC }] Jan 31 01:04:01.928: INFO: Jan 31 01:04:01.928: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 31 01:04:02.933: INFO: POD NODE PHASE GRACE CONDITIONS Jan 31 01:04:02.934: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:00 +0000 UTC }] Jan 31 01:04:02.934: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers 
with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:24 +0000 UTC }] Jan 31 01:04:02.934: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:24 +0000 UTC }] Jan 31 01:04:02.934: INFO: Jan 31 01:04:02.934: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 31 01:04:03.939: INFO: POD NODE PHASE GRACE CONDITIONS Jan 31 01:04:03.939: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:00 +0000 UTC }] Jan 31 01:04:03.939: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:24 +0000 UTC }] Jan 31 01:04:03.940: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:24 +0000 UTC }] Jan 31 01:04:03.940: INFO: Jan 31 01:04:03.940: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 31 01:04:04.946: INFO: POD NODE PHASE GRACE CONDITIONS Jan 31 01:04:04.946: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:00 +0000 UTC }] Jan 31 01:04:04.946: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:24 +0000 UTC }] Jan 31 01:04:04.946: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 
00:00:00 +0000 UTC 2021-01-31 01:03:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 01:03:24 +0000 UTC }] Jan 31 01:04:04.946: INFO: Jan 31 01:04:04.946: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-4337 Jan 31 01:04:05.957: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 01:04:06.100: INFO: rc: 1 Jan 31 01:04:06.100: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Jan 31 01:04:16.101: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 01:04:16.199: INFO: rc: 1 Jan 31 01:04:16.199: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 31 01:04:26.200: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 01:04:26.303: INFO: rc: 1 Jan 31 01:04:26.303: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 31 01:04:36.303: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 01:04:36.402: INFO: rc: 1 Jan 31 01:04:36.402: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 31 01:04:46.402: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 01:04:46.506: INFO: rc: 1
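
The records above and below show the RunHostCmd retry behavior during burst scale-down: the same kubectl exec is reissued every 10 seconds (the interval logged as "Waiting 10s to retry"), and it keeps failing, first because the webserver container is gone, then with NotFound once the pod ss-0 itself has been deleted by the scale-down; after roughly five minutes of this the helper gives up and the test proceeds to scale the StatefulSet to 0. A minimal sketch of that retry pattern in Go, shelling out the way the log shows; the helper name runHostCmdWithRetries and the 5-minute overall timeout are illustrative assumptions (the interval matches the log, the timeout is inferred from the timestamps), not the framework's actual implementation:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runHostCmdWithRetries runs cmd through /bin/sh, and on failure waits a
// fixed interval and tries again until the deadline passes. It returns the
// last combined stdout/stderr seen, mirroring the log's per-attempt records.
func runHostCmdWithRetries(cmd string, interval, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command("/bin/sh", "-c", cmd).CombinedOutput()
		if err == nil {
			return string(out), nil
		}
		if time.Now().After(deadline) {
			return string(out), fmt.Errorf("command %q still failing at timeout: %w", cmd, err)
		}
		// Matches the "Waiting 10s to retry failed RunHostCmd" records.
		fmt.Printf("Waiting %v to retry failed command: %v\n", interval, err)
		time.Sleep(interval)
	}
}

func main() {
	// Mirrors the logged invocation: exec into pod ss-0 and move the index
	// file back into the htdocs directory.
	cmd := `kubectl --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c 'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'`
	out, err := runHostCmdWithRetries(cmd, 10*time.Second, 5*time.Minute)
	if err != nil {
		fmt.Println("giving up:", err)
		return
	}
	fmt.Print(out)
}

Note that the trailing || true runs inside the container's shell, so it can only mask a failure of mv itself; kubectl's own connection errors and the server's NotFound responses still surface as a nonzero exit code, which is why rc stays 1 in every attempt below.
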
Jan 31 01:04:46.506: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 31 01:04:56.506: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 01:04:56.611: INFO: rc: 1 Jan 31 01:04:56.611: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 31 01:05:06.612: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 01:05:06.720: INFO: rc: 1 Jan 31 01:05:06.720: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 31 01:05:16.721: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 01:05:16.826: INFO: rc: 1 Jan 31 01:05:16.826: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 31 01:05:26.826: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 01:05:26.931: INFO: rc: 1 Jan 31 01:05:26.931: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 31 01:05:36.931: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 01:05:37.041: INFO: rc: 1 Jan 31 01:05:37.041: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 31 01:05:47.041: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 01:05:47.153: INFO: rc: 1 Jan 31 01:05:47.153: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 31 01:05:57.153: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 01:05:57.264: INFO: rc: 1 Jan 31 01:05:57.264: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 31 01:06:07.264: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 01:06:07.375: INFO: rc: 1 Jan 31 01:06:07.375: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 31 01:06:17.376: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 01:06:17.470: INFO: rc: 1 Jan 31 01:06:17.470: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 31 01:06:27.471: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 01:06:27.577: INFO: rc: 1 Jan 31 01:06:27.577: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 31 01:06:37.577: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config 
--namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 01:06:37.691: INFO: rc: 1 Jan 31 01:06:37.691: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 31 01:06:47.691: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 01:06:47.804: INFO: rc: 1 Jan 31 01:06:47.804: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 31 01:06:57.804: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 01:06:57.912: INFO: rc: 1 Jan 31 01:06:57.912: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 31 01:07:07.913: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 01:07:08.019: INFO: rc: 1 Jan 31 01:07:08.019: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 31 01:07:18.019: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 01:07:18.127: INFO: rc: 1 Jan 31 01:07:18.127: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 31 01:07:28.127: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 01:07:28.223: INFO: rc: 1 Jan 31 01:07:28.223: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl 
--server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 31 01:07:38.223: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 01:07:38.316: INFO: rc: 1 Jan 31 01:07:38.316: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 31 01:07:48.316: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 01:07:48.422: INFO: rc: 1 Jan 31 01:07:48.422: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 31 01:07:58.422: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 01:07:58.521: INFO: rc: 1 Jan 31 01:07:58.521: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 31 01:08:08.522: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 01:08:08.621: INFO: rc: 1 Jan 31 01:08:08.621: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 31 01:08:18.621: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 01:08:18.718: INFO: rc: 1 Jan 31 01:08:18.718: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit 
status 1 Jan 31 01:08:28.718: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 01:08:28.821: INFO: rc: 1 Jan 31 01:08:28.821: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 31 01:08:38.821: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 01:08:38.928: INFO: rc: 1 Jan 31 01:08:38.928: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 31 01:08:48.929: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 01:08:49.029: INFO: rc: 1 Jan 31 01:08:49.029: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 31 01:08:59.029: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 01:08:59.130: INFO: rc: 1 Jan 31 01:08:59.130: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 31 01:09:09.130: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4337 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 01:09:09.245: INFO: rc: 1 Jan 31 01:09:09.245: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: Jan 31 01:09:09.245: INFO: Scaling statefulset ss to 0 Jan 31 01:09:09.257: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jan 31 01:09:09.259: INFO: Deleting all statefulset in ns statefulset-4337 Jan 31 01:09:09.261: INFO: Scaling statefulset ss to 0 Jan 31 01:09:09.270: INFO: Waiting for statefulset status.replicas updated to 0 Jan 31 
01:09:09.272: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:09:09.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4337" for this suite. • [SLOW TEST:368.707 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":311,"completed":154,"skipped":2982,"failed":0} S ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:09:09.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jan 31 01:09:19.463: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5796 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 31 01:09:19.463: INFO: >>> kubeConfig: /root/.kube/config I0131 01:09:19.505776 7 log.go:181] (0xc0004fce70) (0xc0039af2c0) Create stream I0131 01:09:19.505807 7 log.go:181] (0xc0004fce70) (0xc0039af2c0) Stream added, broadcasting: 1 I0131 01:09:19.507974 7 log.go:181] (0xc0004fce70) Reply frame received for 1 I0131 01:09:19.508052 7 log.go:181] (0xc0004fce70) (0xc000f88140) Create stream I0131 01:09:19.508079 7 log.go:181] (0xc0004fce70) (0xc000f88140) Stream added, broadcasting: 3 I0131 01:09:19.509608 7 log.go:181] (0xc0004fce70) Reply frame received for 3 I0131 01:09:19.509650 7 log.go:181] (0xc0004fce70) (0xc000f881e0) Create stream I0131 01:09:19.509666 7 log.go:181] (0xc0004fce70) (0xc000f881e0) Stream added, broadcasting: 5 I0131 01:09:19.511199 7 log.go:181] (0xc0004fce70) Reply frame received for 5 I0131 01:09:19.585587 7 log.go:181] (0xc0004fce70) Data frame received for 5 I0131 01:09:19.585636 7 log.go:181] (0xc000f881e0) (5) Data frame handling I0131 01:09:19.585692 7 log.go:181] (0xc0004fce70) Data frame received for 3 I0131 01:09:19.585717 7 log.go:181] (0xc000f88140) 
(3) Data frame handling I0131 01:09:19.585741 7 log.go:181] (0xc000f88140) (3) Data frame sent I0131 01:09:19.585756 7 log.go:181] (0xc0004fce70) Data frame received for 3 I0131 01:09:19.585770 7 log.go:181] (0xc000f88140) (3) Data frame handling I0131 01:09:19.587326 7 log.go:181] (0xc0004fce70) Data frame received for 1 I0131 01:09:19.587353 7 log.go:181] (0xc0039af2c0) (1) Data frame handling I0131 01:09:19.587366 7 log.go:181] (0xc0039af2c0) (1) Data frame sent I0131 01:09:19.587374 7 log.go:181] (0xc0004fce70) (0xc0039af2c0) Stream removed, broadcasting: 1 I0131 01:09:19.587382 7 log.go:181] (0xc0004fce70) Go away received I0131 01:09:19.587556 7 log.go:181] (0xc0004fce70) (0xc0039af2c0) Stream removed, broadcasting: 1 I0131 01:09:19.587570 7 log.go:181] (0xc0004fce70) (0xc000f88140) Stream removed, broadcasting: 3 I0131 01:09:19.587575 7 log.go:181] (0xc0004fce70) (0xc000f881e0) Stream removed, broadcasting: 5 Jan 31 01:09:19.587: INFO: Exec stderr: "" Jan 31 01:09:19.587: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5796 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 31 01:09:19.587: INFO: >>> kubeConfig: /root/.kube/config I0131 01:09:19.620349 7 log.go:181] (0xc0006ae0b0) (0xc0040b8a00) Create stream I0131 01:09:19.620378 7 log.go:181] (0xc0006ae0b0) (0xc0040b8a00) Stream added, broadcasting: 1 I0131 01:09:19.622477 7 log.go:181] (0xc0006ae0b0) Reply frame received for 1 I0131 01:09:19.622516 7 log.go:181] (0xc0006ae0b0) (0xc002146aa0) Create stream I0131 01:09:19.622529 7 log.go:181] (0xc0006ae0b0) (0xc002146aa0) Stream added, broadcasting: 3 I0131 01:09:19.623469 7 log.go:181] (0xc0006ae0b0) Reply frame received for 3 I0131 01:09:19.623519 7 log.go:181] (0xc0006ae0b0) (0xc000f88280) Create stream I0131 01:09:19.623537 7 log.go:181] (0xc0006ae0b0) (0xc000f88280) Stream added, broadcasting: 5 I0131 01:09:19.624622 7 log.go:181] (0xc0006ae0b0) Reply frame received for 5 I0131 01:09:19.693199 7 log.go:181] (0xc0006ae0b0) Data frame received for 3 I0131 01:09:19.693237 7 log.go:181] (0xc002146aa0) (3) Data frame handling I0131 01:09:19.693251 7 log.go:181] (0xc002146aa0) (3) Data frame sent I0131 01:09:19.693261 7 log.go:181] (0xc0006ae0b0) Data frame received for 3 I0131 01:09:19.693283 7 log.go:181] (0xc002146aa0) (3) Data frame handling I0131 01:09:19.693342 7 log.go:181] (0xc0006ae0b0) Data frame received for 5 I0131 01:09:19.693370 7 log.go:181] (0xc000f88280) (5) Data frame handling I0131 01:09:19.694728 7 log.go:181] (0xc0006ae0b0) Data frame received for 1 I0131 01:09:19.694774 7 log.go:181] (0xc0040b8a00) (1) Data frame handling I0131 01:09:19.694793 7 log.go:181] (0xc0040b8a00) (1) Data frame sent I0131 01:09:19.694811 7 log.go:181] (0xc0006ae0b0) (0xc0040b8a00) Stream removed, broadcasting: 1 I0131 01:09:19.694838 7 log.go:181] (0xc0006ae0b0) Go away received I0131 01:09:19.694958 7 log.go:181] (0xc0006ae0b0) (0xc0040b8a00) Stream removed, broadcasting: 1 I0131 01:09:19.695012 7 log.go:181] (0xc0006ae0b0) (0xc002146aa0) Stream removed, broadcasting: 3 I0131 01:09:19.695033 7 log.go:181] (0xc0006ae0b0) (0xc000f88280) Stream removed, broadcasting: 5 Jan 31 01:09:19.695: INFO: Exec stderr: "" Jan 31 01:09:19.695: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5796 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 31 01:09:19.695: INFO: >>> 
kubeConfig: /root/.kube/config I0131 01:09:19.725556 7 log.go:181] (0xc003a784d0) (0xc0040b8dc0) Create stream I0131 01:09:19.725586 7 log.go:181] (0xc003a784d0) (0xc0040b8dc0) Stream added, broadcasting: 1 I0131 01:09:19.727421 7 log.go:181] (0xc003a784d0) Reply frame received for 1 I0131 01:09:19.727468 7 log.go:181] (0xc003a784d0) (0xc002146b40) Create stream I0131 01:09:19.727484 7 log.go:181] (0xc003a784d0) (0xc002146b40) Stream added, broadcasting: 3 I0131 01:09:19.728312 7 log.go:181] (0xc003a784d0) Reply frame received for 3 I0131 01:09:19.728343 7 log.go:181] (0xc003a784d0) (0xc0040b8f00) Create stream I0131 01:09:19.728355 7 log.go:181] (0xc003a784d0) (0xc0040b8f00) Stream added, broadcasting: 5 I0131 01:09:19.729446 7 log.go:181] (0xc003a784d0) Reply frame received for 5 I0131 01:09:19.788509 7 log.go:181] (0xc003a784d0) Data frame received for 5 I0131 01:09:19.788545 7 log.go:181] (0xc0040b8f00) (5) Data frame handling I0131 01:09:19.788571 7 log.go:181] (0xc003a784d0) Data frame received for 3 I0131 01:09:19.788581 7 log.go:181] (0xc002146b40) (3) Data frame handling I0131 01:09:19.788592 7 log.go:181] (0xc002146b40) (3) Data frame sent I0131 01:09:19.788601 7 log.go:181] (0xc003a784d0) Data frame received for 3 I0131 01:09:19.788609 7 log.go:181] (0xc002146b40) (3) Data frame handling I0131 01:09:19.795758 7 log.go:181] (0xc003a784d0) Data frame received for 1 I0131 01:09:19.795789 7 log.go:181] (0xc0040b8dc0) (1) Data frame handling I0131 01:09:19.795811 7 log.go:181] (0xc0040b8dc0) (1) Data frame sent I0131 01:09:19.795849 7 log.go:181] (0xc003a784d0) (0xc0040b8dc0) Stream removed, broadcasting: 1 I0131 01:09:19.795947 7 log.go:181] (0xc003a784d0) Go away received I0131 01:09:19.796024 7 log.go:181] (0xc003a784d0) (0xc0040b8dc0) Stream removed, broadcasting: 1 I0131 01:09:19.796122 7 log.go:181] (0xc003a784d0) (0xc002146b40) Stream removed, broadcasting: 3 I0131 01:09:19.796151 7 log.go:181] (0xc003a784d0) (0xc0040b8f00) Stream removed, broadcasting: 5 Jan 31 01:09:19.796: INFO: Exec stderr: "" Jan 31 01:09:19.796: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5796 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 31 01:09:19.796: INFO: >>> kubeConfig: /root/.kube/config I0131 01:09:19.831439 7 log.go:181] (0xc003a78b00) (0xc0040b90e0) Create stream I0131 01:09:19.831467 7 log.go:181] (0xc003a78b00) (0xc0040b90e0) Stream added, broadcasting: 1 I0131 01:09:19.833540 7 log.go:181] (0xc003a78b00) Reply frame received for 1 I0131 01:09:19.833574 7 log.go:181] (0xc003a78b00) (0xc0039af360) Create stream I0131 01:09:19.833588 7 log.go:181] (0xc003a78b00) (0xc0039af360) Stream added, broadcasting: 3 I0131 01:09:19.834609 7 log.go:181] (0xc003a78b00) Reply frame received for 3 I0131 01:09:19.834666 7 log.go:181] (0xc003a78b00) (0xc0019d4e60) Create stream I0131 01:09:19.834692 7 log.go:181] (0xc003a78b00) (0xc0019d4e60) Stream added, broadcasting: 5 I0131 01:09:19.835598 7 log.go:181] (0xc003a78b00) Reply frame received for 5 I0131 01:09:19.891480 7 log.go:181] (0xc003a78b00) Data frame received for 5 I0131 01:09:19.891508 7 log.go:181] (0xc0019d4e60) (5) Data frame handling I0131 01:09:19.891568 7 log.go:181] (0xc003a78b00) Data frame received for 3 I0131 01:09:19.891642 7 log.go:181] (0xc0039af360) (3) Data frame handling I0131 01:09:19.891684 7 log.go:181] (0xc0039af360) (3) Data frame sent I0131 01:09:19.891715 7 log.go:181] (0xc003a78b00) Data frame received 
for 3 I0131 01:09:19.891737 7 log.go:181] (0xc0039af360) (3) Data frame handling I0131 01:09:19.893462 7 log.go:181] (0xc003a78b00) Data frame received for 1 I0131 01:09:19.893484 7 log.go:181] (0xc0040b90e0) (1) Data frame handling I0131 01:09:19.893505 7 log.go:181] (0xc0040b90e0) (1) Data frame sent I0131 01:09:19.893682 7 log.go:181] (0xc003a78b00) (0xc0040b90e0) Stream removed, broadcasting: 1 I0131 01:09:19.893729 7 log.go:181] (0xc003a78b00) Go away received I0131 01:09:19.893847 7 log.go:181] (0xc003a78b00) (0xc0040b90e0) Stream removed, broadcasting: 1 I0131 01:09:19.893870 7 log.go:181] (0xc003a78b00) (0xc0039af360) Stream removed, broadcasting: 3 I0131 01:09:19.893881 7 log.go:181] (0xc003a78b00) (0xc0019d4e60) Stream removed, broadcasting: 5 Jan 31 01:09:19.893: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jan 31 01:09:19.893: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5796 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 31 01:09:19.893: INFO: >>> kubeConfig: /root/.kube/config I0131 01:09:19.921596 7 log.go:181] (0xc002fa88f0) (0xc001728500) Create stream I0131 01:09:19.921623 7 log.go:181] (0xc002fa88f0) (0xc001728500) Stream added, broadcasting: 1 I0131 01:09:19.923281 7 log.go:181] (0xc002fa88f0) Reply frame received for 1 I0131 01:09:19.923310 7 log.go:181] (0xc002fa88f0) (0xc0019d4f00) Create stream I0131 01:09:19.923320 7 log.go:181] (0xc002fa88f0) (0xc0019d4f00) Stream added, broadcasting: 3 I0131 01:09:19.924183 7 log.go:181] (0xc002fa88f0) Reply frame received for 3 I0131 01:09:19.924235 7 log.go:181] (0xc002fa88f0) (0xc0040b9180) Create stream I0131 01:09:19.924257 7 log.go:181] (0xc002fa88f0) (0xc0040b9180) Stream added, broadcasting: 5 I0131 01:09:19.925279 7 log.go:181] (0xc002fa88f0) Reply frame received for 5 I0131 01:09:19.976823 7 log.go:181] (0xc002fa88f0) Data frame received for 3 I0131 01:09:19.976927 7 log.go:181] (0xc0019d4f00) (3) Data frame handling I0131 01:09:19.976948 7 log.go:181] (0xc0019d4f00) (3) Data frame sent I0131 01:09:19.976960 7 log.go:181] (0xc002fa88f0) Data frame received for 3 I0131 01:09:19.976970 7 log.go:181] (0xc0019d4f00) (3) Data frame handling I0131 01:09:19.977010 7 log.go:181] (0xc002fa88f0) Data frame received for 5 I0131 01:09:19.977033 7 log.go:181] (0xc0040b9180) (5) Data frame handling I0131 01:09:19.978482 7 log.go:181] (0xc002fa88f0) Data frame received for 1 I0131 01:09:19.978547 7 log.go:181] (0xc001728500) (1) Data frame handling I0131 01:09:19.978578 7 log.go:181] (0xc001728500) (1) Data frame sent I0131 01:09:19.978598 7 log.go:181] (0xc002fa88f0) (0xc001728500) Stream removed, broadcasting: 1 I0131 01:09:19.978629 7 log.go:181] (0xc002fa88f0) Go away received I0131 01:09:19.978667 7 log.go:181] (0xc002fa88f0) (0xc001728500) Stream removed, broadcasting: 1 I0131 01:09:19.978686 7 log.go:181] (0xc002fa88f0) (0xc0019d4f00) Stream removed, broadcasting: 3 I0131 01:09:19.978695 7 log.go:181] (0xc002fa88f0) (0xc0040b9180) Stream removed, broadcasting: 5 Jan 31 01:09:19.978: INFO: Exec stderr: "" Jan 31 01:09:19.978: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5796 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 31 01:09:19.978: INFO: >>> kubeConfig: /root/.kube/config I0131 01:09:20.022712 7 
log.go:181] (0xc0004fd550) (0xc0039af5e0) Create stream I0131 01:09:20.022738 7 log.go:181] (0xc0004fd550) (0xc0039af5e0) Stream added, broadcasting: 1 I0131 01:09:20.024474 7 log.go:181] (0xc0004fd550) Reply frame received for 1 I0131 01:09:20.024505 7 log.go:181] (0xc0004fd550) (0xc000f88320) Create stream I0131 01:09:20.024515 7 log.go:181] (0xc0004fd550) (0xc000f88320) Stream added, broadcasting: 3 I0131 01:09:20.025747 7 log.go:181] (0xc0004fd550) Reply frame received for 3 I0131 01:09:20.025836 7 log.go:181] (0xc0004fd550) (0xc0017285a0) Create stream I0131 01:09:20.025854 7 log.go:181] (0xc0004fd550) (0xc0017285a0) Stream added, broadcasting: 5 I0131 01:09:20.026822 7 log.go:181] (0xc0004fd550) Reply frame received for 5 I0131 01:09:20.081905 7 log.go:181] (0xc0004fd550) Data frame received for 3 I0131 01:09:20.081926 7 log.go:181] (0xc000f88320) (3) Data frame handling I0131 01:09:20.081938 7 log.go:181] (0xc000f88320) (3) Data frame sent I0131 01:09:20.082028 7 log.go:181] (0xc0004fd550) Data frame received for 3 I0131 01:09:20.082056 7 log.go:181] (0xc0004fd550) Data frame received for 5 I0131 01:09:20.082102 7 log.go:181] (0xc0017285a0) (5) Data frame handling I0131 01:09:20.082162 7 log.go:181] (0xc000f88320) (3) Data frame handling I0131 01:09:20.083057 7 log.go:181] (0xc0004fd550) Data frame received for 1 I0131 01:09:20.083071 7 log.go:181] (0xc0039af5e0) (1) Data frame handling I0131 01:09:20.083084 7 log.go:181] (0xc0039af5e0) (1) Data frame sent I0131 01:09:20.083187 7 log.go:181] (0xc0004fd550) (0xc0039af5e0) Stream removed, broadcasting: 1 I0131 01:09:20.083235 7 log.go:181] (0xc0004fd550) Go away received I0131 01:09:20.083291 7 log.go:181] (0xc0004fd550) (0xc0039af5e0) Stream removed, broadcasting: 1 I0131 01:09:20.083346 7 log.go:181] (0xc0004fd550) (0xc000f88320) Stream removed, broadcasting: 3 I0131 01:09:20.083383 7 log.go:181] (0xc0004fd550) (0xc0017285a0) Stream removed, broadcasting: 5 Jan 31 01:09:20.083: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jan 31 01:09:20.083: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5796 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 31 01:09:20.083: INFO: >>> kubeConfig: /root/.kube/config I0131 01:09:20.119803 7 log.go:181] (0xc002fa8fd0) (0xc001728780) Create stream I0131 01:09:20.119833 7 log.go:181] (0xc002fa8fd0) (0xc001728780) Stream added, broadcasting: 1 I0131 01:09:20.122688 7 log.go:181] (0xc002fa8fd0) Reply frame received for 1 I0131 01:09:20.122755 7 log.go:181] (0xc002fa8fd0) (0xc0040b9220) Create stream I0131 01:09:20.122774 7 log.go:181] (0xc002fa8fd0) (0xc0040b9220) Stream added, broadcasting: 3 I0131 01:09:20.123891 7 log.go:181] (0xc002fa8fd0) Reply frame received for 3 I0131 01:09:20.123920 7 log.go:181] (0xc002fa8fd0) (0xc0039af720) Create stream I0131 01:09:20.123938 7 log.go:181] (0xc002fa8fd0) (0xc0039af720) Stream added, broadcasting: 5 I0131 01:09:20.124991 7 log.go:181] (0xc002fa8fd0) Reply frame received for 5 I0131 01:09:20.192491 7 log.go:181] (0xc002fa8fd0) Data frame received for 5 I0131 01:09:20.192551 7 log.go:181] (0xc0039af720) (5) Data frame handling I0131 01:09:20.192632 7 log.go:181] (0xc002fa8fd0) Data frame received for 3 I0131 01:09:20.192653 7 log.go:181] (0xc0040b9220) (3) Data frame handling I0131 01:09:20.192672 7 log.go:181] (0xc0040b9220) (3) Data frame sent I0131 
01:09:20.192682 7 log.go:181] (0xc002fa8fd0) Data frame received for 3 I0131 01:09:20.192691 7 log.go:181] (0xc0040b9220) (3) Data frame handling I0131 01:09:20.194348 7 log.go:181] (0xc002fa8fd0) Data frame received for 1 I0131 01:09:20.194365 7 log.go:181] (0xc001728780) (1) Data frame handling I0131 01:09:20.194373 7 log.go:181] (0xc001728780) (1) Data frame sent I0131 01:09:20.194381 7 log.go:181] (0xc002fa8fd0) (0xc001728780) Stream removed, broadcasting: 1 I0131 01:09:20.194445 7 log.go:181] (0xc002fa8fd0) Go away received I0131 01:09:20.194486 7 log.go:181] (0xc002fa8fd0) (0xc001728780) Stream removed, broadcasting: 1 I0131 01:09:20.194501 7 log.go:181] (0xc002fa8fd0) (0xc0040b9220) Stream removed, broadcasting: 3 I0131 01:09:20.194508 7 log.go:181] (0xc002fa8fd0) (0xc0039af720) Stream removed, broadcasting: 5 Jan 31 01:09:20.194: INFO: Exec stderr: "" Jan 31 01:09:20.194: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5796 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 31 01:09:20.194: INFO: >>> kubeConfig: /root/.kube/config I0131 01:09:20.240411 7 log.go:181] (0xc002fa96b0) (0xc001728b40) Create stream I0131 01:09:20.240447 7 log.go:181] (0xc002fa96b0) (0xc001728b40) Stream added, broadcasting: 1 I0131 01:09:20.242491 7 log.go:181] (0xc002fa96b0) Reply frame received for 1 I0131 01:09:20.242533 7 log.go:181] (0xc002fa96b0) (0xc0019d4fa0) Create stream I0131 01:09:20.242557 7 log.go:181] (0xc002fa96b0) (0xc0019d4fa0) Stream added, broadcasting: 3 I0131 01:09:20.243516 7 log.go:181] (0xc002fa96b0) Reply frame received for 3 I0131 01:09:20.243538 7 log.go:181] (0xc002fa96b0) (0xc0039af7c0) Create stream I0131 01:09:20.243549 7 log.go:181] (0xc002fa96b0) (0xc0039af7c0) Stream added, broadcasting: 5 I0131 01:09:20.244297 7 log.go:181] (0xc002fa96b0) Reply frame received for 5 I0131 01:09:20.299038 7 log.go:181] (0xc002fa96b0) Data frame received for 5 I0131 01:09:20.299075 7 log.go:181] (0xc0039af7c0) (5) Data frame handling I0131 01:09:20.299114 7 log.go:181] (0xc002fa96b0) Data frame received for 3 I0131 01:09:20.299144 7 log.go:181] (0xc0019d4fa0) (3) Data frame handling I0131 01:09:20.299164 7 log.go:181] (0xc0019d4fa0) (3) Data frame sent I0131 01:09:20.299177 7 log.go:181] (0xc002fa96b0) Data frame received for 3 I0131 01:09:20.299188 7 log.go:181] (0xc0019d4fa0) (3) Data frame handling I0131 01:09:20.300409 7 log.go:181] (0xc002fa96b0) Data frame received for 1 I0131 01:09:20.300427 7 log.go:181] (0xc001728b40) (1) Data frame handling I0131 01:09:20.300435 7 log.go:181] (0xc001728b40) (1) Data frame sent I0131 01:09:20.300453 7 log.go:181] (0xc002fa96b0) (0xc001728b40) Stream removed, broadcasting: 1 I0131 01:09:20.300467 7 log.go:181] (0xc002fa96b0) Go away received I0131 01:09:20.300603 7 log.go:181] (0xc002fa96b0) (0xc001728b40) Stream removed, broadcasting: 1 I0131 01:09:20.300629 7 log.go:181] (0xc002fa96b0) (0xc0019d4fa0) Stream removed, broadcasting: 3 I0131 01:09:20.300645 7 log.go:181] (0xc002fa96b0) (0xc0039af7c0) Stream removed, broadcasting: 5 Jan 31 01:09:20.300: INFO: Exec stderr: "" Jan 31 01:09:20.300: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5796 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 31 01:09:20.300: INFO: >>> kubeConfig: /root/.kube/config I0131 01:09:20.327545 7 log.go:181] 
(0xc00002efd0) (0xc000f883c0) Create stream I0131 01:09:20.327569 7 log.go:181] (0xc00002efd0) (0xc000f883c0) Stream added, broadcasting: 1 I0131 01:09:20.330292 7 log.go:181] (0xc00002efd0) Reply frame received for 1 I0131 01:09:20.330333 7 log.go:181] (0xc00002efd0) (0xc0019d5040) Create stream I0131 01:09:20.330350 7 log.go:181] (0xc00002efd0) (0xc0019d5040) Stream added, broadcasting: 3 I0131 01:09:20.331298 7 log.go:181] (0xc00002efd0) Reply frame received for 3 I0131 01:09:20.331351 7 log.go:181] (0xc00002efd0) (0xc001728c80) Create stream I0131 01:09:20.331373 7 log.go:181] (0xc00002efd0) (0xc001728c80) Stream added, broadcasting: 5 I0131 01:09:20.332293 7 log.go:181] (0xc00002efd0) Reply frame received for 5 I0131 01:09:20.398658 7 log.go:181] (0xc00002efd0) Data frame received for 5 I0131 01:09:20.398696 7 log.go:181] (0xc001728c80) (5) Data frame handling I0131 01:09:20.398754 7 log.go:181] (0xc00002efd0) Data frame received for 3 I0131 01:09:20.398769 7 log.go:181] (0xc0019d5040) (3) Data frame handling I0131 01:09:20.398793 7 log.go:181] (0xc0019d5040) (3) Data frame sent I0131 01:09:20.398812 7 log.go:181] (0xc00002efd0) Data frame received for 3 I0131 01:09:20.398822 7 log.go:181] (0xc0019d5040) (3) Data frame handling I0131 01:09:20.399914 7 log.go:181] (0xc00002efd0) Data frame received for 1 I0131 01:09:20.399950 7 log.go:181] (0xc000f883c0) (1) Data frame handling I0131 01:09:20.399968 7 log.go:181] (0xc000f883c0) (1) Data frame sent I0131 01:09:20.399981 7 log.go:181] (0xc00002efd0) (0xc000f883c0) Stream removed, broadcasting: 1 I0131 01:09:20.399999 7 log.go:181] (0xc00002efd0) Go away received I0131 01:09:20.400118 7 log.go:181] (0xc00002efd0) (0xc000f883c0) Stream removed, broadcasting: 1 I0131 01:09:20.400155 7 log.go:181] (0xc00002efd0) (0xc0019d5040) Stream removed, broadcasting: 3 I0131 01:09:20.400171 7 log.go:181] (0xc00002efd0) (0xc001728c80) Stream removed, broadcasting: 5 Jan 31 01:09:20.400: INFO: Exec stderr: "" Jan 31 01:09:20.400: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5796 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 31 01:09:20.400: INFO: >>> kubeConfig: /root/.kube/config I0131 01:09:20.439429 7 log.go:181] (0xc002fa9d90) (0xc001728f00) Create stream I0131 01:09:20.439468 7 log.go:181] (0xc002fa9d90) (0xc001728f00) Stream added, broadcasting: 1 I0131 01:09:20.443747 7 log.go:181] (0xc002fa9d90) Reply frame received for 1 I0131 01:09:20.443793 7 log.go:181] (0xc002fa9d90) (0xc001728fa0) Create stream I0131 01:09:20.443819 7 log.go:181] (0xc002fa9d90) (0xc001728fa0) Stream added, broadcasting: 3 I0131 01:09:20.445104 7 log.go:181] (0xc002fa9d90) Reply frame received for 3 I0131 01:09:20.445142 7 log.go:181] (0xc002fa9d90) (0xc0039afa40) Create stream I0131 01:09:20.445155 7 log.go:181] (0xc002fa9d90) (0xc0039afa40) Stream added, broadcasting: 5 I0131 01:09:20.446493 7 log.go:181] (0xc002fa9d90) Reply frame received for 5 I0131 01:09:20.510126 7 log.go:181] (0xc002fa9d90) Data frame received for 5 I0131 01:09:20.510189 7 log.go:181] (0xc0039afa40) (5) Data frame handling I0131 01:09:20.510299 7 log.go:181] (0xc002fa9d90) Data frame received for 3 I0131 01:09:20.510344 7 log.go:181] (0xc001728fa0) (3) Data frame handling I0131 01:09:20.510378 7 log.go:181] (0xc001728fa0) (3) Data frame sent I0131 01:09:20.510403 7 log.go:181] (0xc002fa9d90) Data frame received for 3 I0131 01:09:20.510425 7 log.go:181] 
(0xc001728fa0) (3) Data frame handling I0131 01:09:20.511966 7 log.go:181] (0xc002fa9d90) Data frame received for 1 I0131 01:09:20.512001 7 log.go:181] (0xc001728f00) (1) Data frame handling I0131 01:09:20.512019 7 log.go:181] (0xc001728f00) (1) Data frame sent I0131 01:09:20.512042 7 log.go:181] (0xc002fa9d90) (0xc001728f00) Stream removed, broadcasting: 1 I0131 01:09:20.512078 7 log.go:181] (0xc002fa9d90) Go away received I0131 01:09:20.512142 7 log.go:181] (0xc002fa9d90) (0xc001728f00) Stream removed, broadcasting: 1 I0131 01:09:20.512169 7 log.go:181] (0xc002fa9d90) (0xc001728fa0) Stream removed, broadcasting: 3 I0131 01:09:20.512181 7 log.go:181] (0xc002fa9d90) (0xc0039afa40) Stream removed, broadcasting: 5 Jan 31 01:09:20.512: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:09:20.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-5796" for this suite. • [SLOW TEST:11.226 seconds] [k8s.io] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":155,"skipped":2983,"failed":0} SSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:09:20.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Jan 31 01:09:20.604: INFO: The status of Pod test-webserver-8dc10277-97e1-46ea-baf7-5f07c18534ac is Pending, waiting for it to be Running (with Ready = true) Jan 31 01:09:22.608: INFO: The status of Pod test-webserver-8dc10277-97e1-46ea-baf7-5f07c18534ac is Pending, waiting for it to be Running (with Ready = true) Jan 31 01:09:24.609: INFO: The status of Pod test-webserver-8dc10277-97e1-46ea-baf7-5f07c18534ac is Running (Ready = false) Jan 31 01:09:26.671: INFO: The status of Pod test-webserver-8dc10277-97e1-46ea-baf7-5f07c18534ac is Running (Ready = false) Jan 31 01:09:28.608: INFO: The status of Pod test-webserver-8dc10277-97e1-46ea-baf7-5f07c18534ac is Running (Ready = false) Jan 31 01:09:30.609: INFO: The status of Pod test-webserver-8dc10277-97e1-46ea-baf7-5f07c18534ac is Running (Ready = false) Jan 31 
01:09:32.608: INFO: The status of Pod test-webserver-8dc10277-97e1-46ea-baf7-5f07c18534ac is Running (Ready = false) Jan 31 01:09:34.608: INFO: The status of Pod test-webserver-8dc10277-97e1-46ea-baf7-5f07c18534ac is Running (Ready = false) Jan 31 01:09:36.608: INFO: The status of Pod test-webserver-8dc10277-97e1-46ea-baf7-5f07c18534ac is Running (Ready = false) Jan 31 01:09:38.610: INFO: The status of Pod test-webserver-8dc10277-97e1-46ea-baf7-5f07c18534ac is Running (Ready = false) Jan 31 01:09:40.617: INFO: The status of Pod test-webserver-8dc10277-97e1-46ea-baf7-5f07c18534ac is Running (Ready = false) Jan 31 01:09:42.611: INFO: The status of Pod test-webserver-8dc10277-97e1-46ea-baf7-5f07c18534ac is Running (Ready = false) Jan 31 01:09:44.635: INFO: The status of Pod test-webserver-8dc10277-97e1-46ea-baf7-5f07c18534ac is Running (Ready = false) Jan 31 01:09:46.635: INFO: The status of Pod test-webserver-8dc10277-97e1-46ea-baf7-5f07c18534ac is Running (Ready = false) Jan 31 01:09:48.608: INFO: The status of Pod test-webserver-8dc10277-97e1-46ea-baf7-5f07c18534ac is Running (Ready = true) Jan 31 01:09:48.611: INFO: Container started at 2021-01-31 01:09:23 +0000 UTC, pod became ready at 2021-01-31 01:09:47 +0000 UTC [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:09:48.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3250" for this suite. • [SLOW TEST:28.098 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":311,"completed":156,"skipped":2986,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:09:48.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: create deployment with httpd image Jan 31 01:09:48.697: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-2049 create -f -' Jan 31 01:09:49.031: INFO: stderr: "" Jan 31 01:09:49.031: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live 
and declared image Jan 31 01:09:49.031: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-2049 diff -f -' Jan 31 01:09:49.553: INFO: rc: 1 Jan 31 01:09:49.554: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-2049 delete -f -' Jan 31 01:09:49.664: INFO: stderr: "" Jan 31 01:09:49.664: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:09:49.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2049" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":311,"completed":157,"skipped":3016,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:09:49.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check if v1 is in available api versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: validating api versions Jan 31 01:09:49.765: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-3367 api-versions' Jan 31 01:09:49.997: INFO: stderr: "" Jan 31 01:09:49.997: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npingcap.com/v1alpha1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:09:49.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3367" for this suite. 
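The api-versions check above reduces to the core group ("v1") being present in the discovery output, which it is in the stdout dump. By hand, against any reachable kubeconfig, the same assertion can be approximated with a one-liner (the grep is an illustrative sketch, not the suite's own check):

  $ kubectl api-versions | grep -x v1   # exit status 0 iff the core v1 group is served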
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":311,"completed":158,"skipped":3024,"failed":0} SSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:09:50.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating configMap that has name configmap-test-emptyKey-bc58585e-6530-40bc-ae5a-4748c0079442 [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:09:50.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1122" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":311,"completed":159,"skipped":3032,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:09:50.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test downward api env vars Jan 31 01:09:50.356: INFO: Waiting up to 5m0s for pod "downward-api-80ebf0ab-f1ae-46f1-81af-835d20033c8b" in namespace "downward-api-189" to be "Succeeded or Failed" Jan 31 01:09:50.359: INFO: Pod "downward-api-80ebf0ab-f1ae-46f1-81af-835d20033c8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.860615ms Jan 31 01:09:52.364: INFO: Pod "downward-api-80ebf0ab-f1ae-46f1-81af-835d20033c8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007816642s Jan 31 01:09:54.367: INFO: Pod "downward-api-80ebf0ab-f1ae-46f1-81af-835d20033c8b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011529572s STEP: Saw pod success Jan 31 01:09:54.367: INFO: Pod "downward-api-80ebf0ab-f1ae-46f1-81af-835d20033c8b" satisfied condition "Succeeded or Failed" Jan 31 01:09:54.370: INFO: Trying to get logs from node latest-worker2 pod downward-api-80ebf0ab-f1ae-46f1-81af-835d20033c8b container dapi-container: STEP: delete the pod Jan 31 01:09:54.490: INFO: Waiting for pod downward-api-80ebf0ab-f1ae-46f1-81af-835d20033c8b to disappear Jan 31 01:09:54.517: INFO: Pod downward-api-80ebf0ab-f1ae-46f1-81af-835d20033c8b no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:09:54.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-189" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":311,"completed":160,"skipped":3053,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:09:54.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:10:11.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6799" for this suite. • [SLOW TEST:16.548 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":311,"completed":161,"skipped":3060,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:10:11.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jan 31 01:10:11.237: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-1350 129d2707-8e14-4f6e-8383-bea6d34a86ed 1125116 0 2021-01-31 01:10:11 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-01-31 01:10:11 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 31 01:10:11.237: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-1350 129d2707-8e14-4f6e-8383-bea6d34a86ed 1125117 0 2021-01-31 01:10:11 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-01-31 01:10:11 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:10:11.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1350" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":311,"completed":162,"skipped":3074,"failed":0} SSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:10:11.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating configMap with name cm-test-opt-del-ee46db1f-8337-4fa5-be44-b935523fa4ea STEP: Creating configMap with name cm-test-opt-upd-59a00ddf-5bd3-47f6-96a9-2407a1c764ce STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-ee46db1f-8337-4fa5-be44-b935523fa4ea STEP: Updating configmap cm-test-opt-upd-59a00ddf-5bd3-47f6-96a9-2407a1c764ce STEP: Creating configMap with name cm-test-opt-create-8dc6d11d-8f5e-4aad-aef8-51398dc4dffc STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:10:21.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4839" for this suite. 
• [SLOW TEST:10.239 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":311,"completed":163,"skipped":3078,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:10:21.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating configMap with name configmap-test-volume-8ec7601e-fd1b-4018-adf1-a2800871d571 STEP: Creating a pod to test consume configMaps Jan 31 01:10:21.601: INFO: Waiting up to 5m0s for pod "pod-configmaps-01cda718-5eb0-44de-8b39-745248f311bf" in namespace "configmap-8085" to be "Succeeded or Failed" Jan 31 01:10:21.656: INFO: Pod "pod-configmaps-01cda718-5eb0-44de-8b39-745248f311bf": Phase="Pending", Reason="", readiness=false. Elapsed: 55.033523ms Jan 31 01:10:23.780: INFO: Pod "pod-configmaps-01cda718-5eb0-44de-8b39-745248f311bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179062844s Jan 31 01:10:25.784: INFO: Pod "pod-configmaps-01cda718-5eb0-44de-8b39-745248f311bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.183189757s STEP: Saw pod success Jan 31 01:10:25.784: INFO: Pod "pod-configmaps-01cda718-5eb0-44de-8b39-745248f311bf" satisfied condition "Succeeded or Failed" Jan 31 01:10:25.787: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-01cda718-5eb0-44de-8b39-745248f311bf container agnhost-container: STEP: delete the pod Jan 31 01:10:25.832: INFO: Waiting for pod pod-configmaps-01cda718-5eb0-44de-8b39-745248f311bf to disappear Jan 31 01:10:25.845: INFO: Pod pod-configmaps-01cda718-5eb0-44de-8b39-745248f311bf no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:10:25.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8085" for this suite. 
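The "Succeeded or Failed" wait that brackets this and the surrounding volume tests is plain phase polling against a 5m deadline, followed by a log fetch from the test container. Outside the framework the same gate can be approximated in two lines (pod name illustrative):

  $ until kubectl get pod cm-reader -o jsonpath='{.status.phase}' | grep -Eq 'Succeeded|Failed'; do sleep 2; done
  $ kubectl logs cm-reader   # the framework reads these logs to verify the mounted content before deleting the pod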
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":311,"completed":164,"skipped":3085,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:10:25.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating secret with name secret-test-map-38a37a76-f543-412a-9db9-0a6129435138 STEP: Creating a pod to test consume secrets Jan 31 01:10:25.977: INFO: Waiting up to 5m0s for pod "pod-secrets-c946bd9e-6e8d-4af2-bacb-275f44fd9d65" in namespace "secrets-7836" to be "Succeeded or Failed" Jan 31 01:10:25.989: INFO: Pod "pod-secrets-c946bd9e-6e8d-4af2-bacb-275f44fd9d65": Phase="Pending", Reason="", readiness=false. Elapsed: 12.09997ms Jan 31 01:10:27.993: INFO: Pod "pod-secrets-c946bd9e-6e8d-4af2-bacb-275f44fd9d65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016802463s Jan 31 01:10:30.037: INFO: Pod "pod-secrets-c946bd9e-6e8d-4af2-bacb-275f44fd9d65": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060510034s Jan 31 01:10:32.040: INFO: Pod "pod-secrets-c946bd9e-6e8d-4af2-bacb-275f44fd9d65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.063171518s STEP: Saw pod success Jan 31 01:10:32.040: INFO: Pod "pod-secrets-c946bd9e-6e8d-4af2-bacb-275f44fd9d65" satisfied condition "Succeeded or Failed" Jan 31 01:10:32.042: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-c946bd9e-6e8d-4af2-bacb-275f44fd9d65 container secret-volume-test: STEP: delete the pod Jan 31 01:10:32.138: INFO: Waiting for pod pod-secrets-c946bd9e-6e8d-4af2-bacb-275f44fd9d65 to disappear Jan 31 01:10:32.265: INFO: Pod pod-secrets-c946bd9e-6e8d-4af2-bacb-275f44fd9d65 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:10:32.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7836" for this suite. 
• [SLOW TEST:6.418 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":311,"completed":165,"skipped":3138,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:10:32.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating Agnhost RC Jan 31 01:10:32.367: INFO: namespace kubectl-7138 Jan 31 01:10:32.367: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-7138 create -f -' Jan 31 01:10:32.660: INFO: stderr: "" Jan 31 01:10:32.660: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Jan 31 01:10:33.663: INFO: Selector matched 1 pods for map[app:agnhost] Jan 31 01:10:33.663: INFO: Found 0 / 1 Jan 31 01:10:34.702: INFO: Selector matched 1 pods for map[app:agnhost] Jan 31 01:10:34.702: INFO: Found 0 / 1 Jan 31 01:10:35.665: INFO: Selector matched 1 pods for map[app:agnhost] Jan 31 01:10:35.665: INFO: Found 0 / 1 Jan 31 01:10:36.665: INFO: Selector matched 1 pods for map[app:agnhost] Jan 31 01:10:36.665: INFO: Found 1 / 1 Jan 31 01:10:36.665: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 31 01:10:36.667: INFO: Selector matched 1 pods for map[app:agnhost] Jan 31 01:10:36.667: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jan 31 01:10:36.667: INFO: wait on agnhost-primary startup in kubectl-7138 Jan 31 01:10:36.667: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-7138 logs agnhost-primary-w8n42 agnhost-primary' Jan 31 01:10:36.781: INFO: stderr: "" Jan 31 01:10:36.781: INFO: stdout: "Paused\n" STEP: exposing RC Jan 31 01:10:36.781: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-7138 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' Jan 31 01:10:36.932: INFO: stderr: "" Jan 31 01:10:36.932: INFO: stdout: "service/rm2 exposed\n" Jan 31 01:10:36.951: INFO: Service rm2 in namespace kubectl-7138 found. 
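The expose step above is worth unpacking: --port is the port the new Service rm2 listens on (1234) and --target-port is the container port traffic is forwarded to (6379 here), with the selector copied from the replication controller. Exposing a service as another service, as the next step does for rm3, copies the selector again, so all three front ends resolve to the same pods. A sketch using the same flags:

  $ kubectl expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379
  $ kubectl get service rm2 -o jsonpath='{.spec.ports[0]}'   # shows port 1234 paired with targetPort 6379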
STEP: exposing service Jan 31 01:10:38.966: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-7138 expose service rm2 --name=rm3 --port=2345 --target-port=6379' Jan 31 01:10:39.108: INFO: stderr: "" Jan 31 01:10:39.108: INFO: stdout: "service/rm3 exposed\n" Jan 31 01:10:39.132: INFO: Service rm3 in namespace kubectl-7138 found. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:10:41.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7138" for this suite. • [SLOW TEST:8.883 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1229 should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":311,"completed":166,"skipped":3162,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:10:41.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Jan 31 01:10:41.249: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:10:42.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3839" for this suite. 
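"Defaulting for requests and from storage" refers to the default keyword in a v1 CRD's structural schema: the apiserver injects the default when an object is created or updated (requests) and again when an older stored object that predates the default is read back (storage). A minimal sketch of a CRD using it, with an invented group and kind:

  $ kubectl apply -f - <<'EOF'
  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: widgets.example.com
  spec:
    group: example.com
    scope: Namespaced
    names:
      plural: widgets
      singular: widget
      kind: Widget
    versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                replicas:
                  type: integer
                  default: 1   # filled in on create/update and when reading stored objects that lack it
  EOF

A Widget created with spec: {} then comes back with spec.replicas: 1 without the client ever sending it.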
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":311,"completed":167,"skipped":3194,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:10:42.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [AfterEach] [k8s.io] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:10:42.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-3785" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":311,"completed":168,"skipped":3214,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:10:42.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 31 01:10:43.315: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 31 01:10:45.330: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747652243, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747652243, loc:(*time.Location)(0x79bd420)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747652243, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747652243, loc:(*time.Location)(0x79bd420)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, 
CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 31 01:10:48.375: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Jan 31 01:10:48.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:10:49.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1432" for this suite. STEP: Destroying namespace "webhook-1432-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:7.097 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":311,"completed":169,"skipped":3225,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:10:49.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 31 01:10:50.895: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 31 01:10:52.903: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747652250, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747652250, loc:(*time.Location)(0x79bd420)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747652251, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747652250, loc:(*time.Location)(0x79bd420)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 31 01:10:55.943: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Jan 31 01:10:55.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9172-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:10:57.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1732" for this suite. STEP: Destroying namespace "webhook-1732-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:7.456 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":311,"completed":170,"skipped":3270,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:10:57.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1554 [It] should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 31 01:10:57.294: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-5626 run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod' Jan 31 01:10:57.403: INFO: stderr: "" Jan 31 01:10:57.403: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Jan 31 01:11:02.454: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-5626 get pod e2e-test-httpd-pod -o json' Jan 31 01:11:02.564: INFO: stderr: "" Jan 31 01:11:02.564: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2021-01-31T01:10:57Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": 
\"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2021-01-31T01:10:57Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.2.210\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2021-01-31T01:11:00Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-5626\",\n \"resourceVersion\": \"1125586\",\n \"uid\": \"bb57faf7-60e5-4848-a086-80a2e7198759\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-hs8ts\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-hs8ts\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-hs8ts\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-01-31T01:10:57Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-01-31T01:11:00Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-01-31T01:11:00Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-01-31T01:10:57Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://af3d8c5ac3fd905769bf30cdc886e65f88defaa029818371a64e5a17b286d9e5\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n 
\"running\": {\n \"startedAt\": \"2021-01-31T01:11:00Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.14\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.210\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.210\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2021-01-31T01:10:57Z\"\n }\n}\n" STEP: replace the image in the pod Jan 31 01:11:02.565: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-5626 replace -f -' Jan 31 01:11:02.928: INFO: stderr: "" Jan 31 01:11:02.928: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1558 Jan 31 01:11:02.936: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-5626 delete pods e2e-test-httpd-pod' Jan 31 01:11:11.092: INFO: stderr: "" Jan 31 01:11:11.092: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:11:11.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5626" for this suite. • [SLOW TEST:13.969 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1551 should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":311,"completed":171,"skipped":3315,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:11:11.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating configMap with name configmap-test-volume-c7882097-4ecd-4cdd-aea4-3f603a4e000c STEP: Creating a pod to test consume configMaps Jan 31 01:11:11.367: INFO: Waiting up to 5m0s for pod "pod-configmaps-821f84fb-9099-4e5e-9ba1-ff6af6a2acf5" in namespace "configmap-7465" to be "Succeeded or Failed" Jan 31 01:11:11.389: INFO: Pod "pod-configmaps-821f84fb-9099-4e5e-9ba1-ff6af6a2acf5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 21.780207ms Jan 31 01:11:13.410: INFO: Pod "pod-configmaps-821f84fb-9099-4e5e-9ba1-ff6af6a2acf5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042084177s Jan 31 01:11:15.413: INFO: Pod "pod-configmaps-821f84fb-9099-4e5e-9ba1-ff6af6a2acf5": Phase="Running", Reason="", readiness=true. Elapsed: 4.04557232s Jan 31 01:11:17.433: INFO: Pod "pod-configmaps-821f84fb-9099-4e5e-9ba1-ff6af6a2acf5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.065786491s STEP: Saw pod success Jan 31 01:11:17.433: INFO: Pod "pod-configmaps-821f84fb-9099-4e5e-9ba1-ff6af6a2acf5" satisfied condition "Succeeded or Failed" Jan 31 01:11:17.436: INFO: Trying to get logs from node latest-worker pod pod-configmaps-821f84fb-9099-4e5e-9ba1-ff6af6a2acf5 container configmap-volume-test: STEP: delete the pod Jan 31 01:11:17.471: INFO: Waiting for pod pod-configmaps-821f84fb-9099-4e5e-9ba1-ff6af6a2acf5 to disappear Jan 31 01:11:17.481: INFO: Pod pod-configmaps-821f84fb-9099-4e5e-9ba1-ff6af6a2acf5 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:11:17.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7465" for this suite. • [SLOW TEST:6.332 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":311,"completed":172,"skipped":3385,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:11:17.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 31 01:11:18.254: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 31 01:11:20.264: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747652278, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747652278, loc:(*time.Location)(0x79bd420)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have 
minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747652278, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747652278, loc:(*time.Location)(0x79bd420)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 31 01:11:22.269: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747652278, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747652278, loc:(*time.Location)(0x79bd420)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747652278, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747652278, loc:(*time.Location)(0x79bd420)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 31 01:11:25.340: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:11:25.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4072" for this suite. STEP: Destroying namespace "webhook-4072-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:8.082 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":311,"completed":173,"skipped":3385,"failed":0} SSSSSS ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:11:25.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating service in namespace services-5726 STEP: creating service affinity-nodeport in namespace services-5726 STEP: creating replication controller affinity-nodeport in namespace services-5726 I0131 01:11:25.794202 7 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-5726, replica count: 3 I0131 01:11:28.844626 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0131 01:11:31.845057 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 31 01:11:31.857: INFO: Creating new exec pod Jan 31 01:11:36.904: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-5726 exec execpod-affinitywssmb -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' Jan 31 01:11:37.137: INFO: stderr: "I0131 01:11:37.027931 1785 log.go:181] (0xc0007b4210) (0xc000aea3c0) Create stream\nI0131 01:11:37.028014 1785 log.go:181] (0xc0007b4210) (0xc000aea3c0) Stream added, broadcasting: 1\nI0131 01:11:37.029920 1785 log.go:181] (0xc0007b4210) Reply frame received for 1\nI0131 01:11:37.029977 1785 log.go:181] (0xc0007b4210) (0xc000443900) Create stream\nI0131 01:11:37.029996 1785 log.go:181] (0xc0007b4210) (0xc000443900) Stream added, broadcasting: 3\nI0131 01:11:37.030970 1785 log.go:181] (0xc0007b4210) Reply frame received for 3\nI0131 01:11:37.030994 1785 log.go:181] (0xc0007b4210) (0xc000aea460) Create stream\nI0131 01:11:37.031001 1785 log.go:181] (0xc0007b4210) (0xc000aea460) Stream added, broadcasting: 5\nI0131 01:11:37.031891 1785 log.go:181] (0xc0007b4210) Reply frame received for 5\nI0131 
01:11:37.127110 1785 log.go:181] (0xc0007b4210) Data frame received for 5\nI0131 01:11:37.127136 1785 log.go:181] (0xc000aea460) (5) Data frame handling\nI0131 01:11:37.127150 1785 log.go:181] (0xc000aea460) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport 80\nI0131 01:11:37.128083 1785 log.go:181] (0xc0007b4210) Data frame received for 5\nI0131 01:11:37.128104 1785 log.go:181] (0xc000aea460) (5) Data frame handling\nI0131 01:11:37.128123 1785 log.go:181] (0xc000aea460) (5) Data frame sent\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI0131 01:11:37.128606 1785 log.go:181] (0xc0007b4210) Data frame received for 5\nI0131 01:11:37.128627 1785 log.go:181] (0xc000aea460) (5) Data frame handling\nI0131 01:11:37.128699 1785 log.go:181] (0xc0007b4210) Data frame received for 3\nI0131 01:11:37.128734 1785 log.go:181] (0xc000443900) (3) Data frame handling\nI0131 01:11:37.130392 1785 log.go:181] (0xc0007b4210) Data frame received for 1\nI0131 01:11:37.130427 1785 log.go:181] (0xc000aea3c0) (1) Data frame handling\nI0131 01:11:37.130443 1785 log.go:181] (0xc000aea3c0) (1) Data frame sent\nI0131 01:11:37.130475 1785 log.go:181] (0xc0007b4210) (0xc000aea3c0) Stream removed, broadcasting: 1\nI0131 01:11:37.130507 1785 log.go:181] (0xc0007b4210) Go away received\nI0131 01:11:37.130943 1785 log.go:181] (0xc0007b4210) (0xc000aea3c0) Stream removed, broadcasting: 1\nI0131 01:11:37.130963 1785 log.go:181] (0xc0007b4210) (0xc000443900) Stream removed, broadcasting: 3\nI0131 01:11:37.130973 1785 log.go:181] (0xc0007b4210) (0xc000aea460) Stream removed, broadcasting: 5\n" Jan 31 01:11:37.137: INFO: stdout: "" Jan 31 01:11:37.138: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-5726 exec execpod-affinitywssmb -- /bin/sh -x -c nc -zv -t -w 2 10.96.44.137 80' Jan 31 01:11:37.342: INFO: stderr: "I0131 01:11:37.263633 1803 log.go:181] (0xc00003b130) (0xc0009743c0) Create stream\nI0131 01:11:37.263689 1803 log.go:181] (0xc00003b130) (0xc0009743c0) Stream added, broadcasting: 1\nI0131 01:11:37.265552 1803 log.go:181] (0xc00003b130) Reply frame received for 1\nI0131 01:11:37.265613 1803 log.go:181] (0xc00003b130) (0xc0000cafa0) Create stream\nI0131 01:11:37.265633 1803 log.go:181] (0xc00003b130) (0xc0000cafa0) Stream added, broadcasting: 3\nI0131 01:11:37.266630 1803 log.go:181] (0xc00003b130) Reply frame received for 3\nI0131 01:11:37.266682 1803 log.go:181] (0xc00003b130) (0xc00071c0a0) Create stream\nI0131 01:11:37.266706 1803 log.go:181] (0xc00003b130) (0xc00071c0a0) Stream added, broadcasting: 5\nI0131 01:11:37.267556 1803 log.go:181] (0xc00003b130) Reply frame received for 5\nI0131 01:11:37.335141 1803 log.go:181] (0xc00003b130) Data frame received for 3\nI0131 01:11:37.335195 1803 log.go:181] (0xc0000cafa0) (3) Data frame handling\nI0131 01:11:37.335245 1803 log.go:181] (0xc00003b130) Data frame received for 5\nI0131 01:11:37.335298 1803 log.go:181] (0xc00071c0a0) (5) Data frame handling\nI0131 01:11:37.335332 1803 log.go:181] (0xc00071c0a0) (5) Data frame sent\nI0131 01:11:37.335346 1803 log.go:181] (0xc00003b130) Data frame received for 5\nI0131 01:11:37.335356 1803 log.go:181] (0xc00071c0a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.44.137 80\nConnection to 10.96.44.137 80 port [tcp/http] succeeded!\nI0131 01:11:37.337149 1803 log.go:181] (0xc00003b130) Data frame received for 1\nI0131 01:11:37.337174 1803 log.go:181] (0xc0009743c0) (1) Data frame handling\nI0131 01:11:37.337195 1803 log.go:181] 
(0xc0009743c0) (1) Data frame sent\nI0131 01:11:37.337217 1803 log.go:181] (0xc00003b130) (0xc0009743c0) Stream removed, broadcasting: 1\nI0131 01:11:37.337243 1803 log.go:181] (0xc00003b130) Go away received\nI0131 01:11:37.337561 1803 log.go:181] (0xc00003b130) (0xc0009743c0) Stream removed, broadcasting: 1\nI0131 01:11:37.337582 1803 log.go:181] (0xc00003b130) (0xc0000cafa0) Stream removed, broadcasting: 3\nI0131 01:11:37.337589 1803 log.go:181] (0xc00003b130) (0xc00071c0a0) Stream removed, broadcasting: 5\n" Jan 31 01:11:37.342: INFO: stdout: "" Jan 31 01:11:37.342: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-5726 exec execpod-affinitywssmb -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 30985' Jan 31 01:11:37.554: INFO: stderr: "I0131 01:11:37.476996 1821 log.go:181] (0xc000a662c0) (0xc000316140) Create stream\nI0131 01:11:37.477098 1821 log.go:181] (0xc000a662c0) (0xc000316140) Stream added, broadcasting: 1\nI0131 01:11:37.479451 1821 log.go:181] (0xc000a662c0) Reply frame received for 1\nI0131 01:11:37.479494 1821 log.go:181] (0xc000a662c0) (0xc00044e140) Create stream\nI0131 01:11:37.479506 1821 log.go:181] (0xc000a662c0) (0xc00044e140) Stream added, broadcasting: 3\nI0131 01:11:37.480676 1821 log.go:181] (0xc000a662c0) Reply frame received for 3\nI0131 01:11:37.480723 1821 log.go:181] (0xc000a662c0) (0xc000aea3c0) Create stream\nI0131 01:11:37.480737 1821 log.go:181] (0xc000a662c0) (0xc000aea3c0) Stream added, broadcasting: 5\nI0131 01:11:37.481891 1821 log.go:181] (0xc000a662c0) Reply frame received for 5\nI0131 01:11:37.547356 1821 log.go:181] (0xc000a662c0) Data frame received for 5\nI0131 01:11:37.547383 1821 log.go:181] (0xc000aea3c0) (5) Data frame handling\nI0131 01:11:37.547392 1821 log.go:181] (0xc000aea3c0) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.14 30985\nConnection to 172.18.0.14 30985 port [tcp/30985] succeeded!\nI0131 01:11:37.547417 1821 log.go:181] (0xc000a662c0) Data frame received for 3\nI0131 01:11:37.547443 1821 log.go:181] (0xc00044e140) (3) Data frame handling\nI0131 01:11:37.547511 1821 log.go:181] (0xc000a662c0) Data frame received for 5\nI0131 01:11:37.547528 1821 log.go:181] (0xc000aea3c0) (5) Data frame handling\nI0131 01:11:37.548328 1821 log.go:181] (0xc000a662c0) Data frame received for 1\nI0131 01:11:37.548352 1821 log.go:181] (0xc000316140) (1) Data frame handling\nI0131 01:11:37.548366 1821 log.go:181] (0xc000316140) (1) Data frame sent\nI0131 01:11:37.548379 1821 log.go:181] (0xc000a662c0) (0xc000316140) Stream removed, broadcasting: 1\nI0131 01:11:37.548417 1821 log.go:181] (0xc000a662c0) Go away received\nI0131 01:11:37.548761 1821 log.go:181] (0xc000a662c0) (0xc000316140) Stream removed, broadcasting: 1\nI0131 01:11:37.548775 1821 log.go:181] (0xc000a662c0) (0xc00044e140) Stream removed, broadcasting: 3\nI0131 01:11:37.548782 1821 log.go:181] (0xc000a662c0) (0xc000aea3c0) Stream removed, broadcasting: 5\n" Jan 31 01:11:37.554: INFO: stdout: "" Jan 31 01:11:37.554: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-5726 exec execpod-affinitywssmb -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.16 30985' Jan 31 01:11:37.744: INFO: stderr: "I0131 01:11:37.682647 1839 log.go:181] (0xc00003a6e0) (0xc00053e5a0) Create stream\nI0131 01:11:37.682718 1839 log.go:181] (0xc00003a6e0) (0xc00053e5a0) Stream added, broadcasting: 1\nI0131 01:11:37.684658 1839 log.go:181] (0xc00003a6e0) Reply frame received 
for 1\nI0131 01:11:37.684712 1839 log.go:181] (0xc00003a6e0) (0xc00099a5a0) Create stream\nI0131 01:11:37.684735 1839 log.go:181] (0xc00003a6e0) (0xc00099a5a0) Stream added, broadcasting: 3\nI0131 01:11:37.685917 1839 log.go:181] (0xc00003a6e0) Reply frame received for 3\nI0131 01:11:37.685965 1839 log.go:181] (0xc00003a6e0) (0xc0005e21e0) Create stream\nI0131 01:11:37.686006 1839 log.go:181] (0xc00003a6e0) (0xc0005e21e0) Stream added, broadcasting: 5\nI0131 01:11:37.686887 1839 log.go:181] (0xc00003a6e0) Reply frame received for 5\nI0131 01:11:37.737549 1839 log.go:181] (0xc00003a6e0) Data frame received for 3\nI0131 01:11:37.737573 1839 log.go:181] (0xc00099a5a0) (3) Data frame handling\nI0131 01:11:37.737619 1839 log.go:181] (0xc00003a6e0) Data frame received for 5\nI0131 01:11:37.737652 1839 log.go:181] (0xc0005e21e0) (5) Data frame handling\nI0131 01:11:37.737671 1839 log.go:181] (0xc0005e21e0) (5) Data frame sent\nI0131 01:11:37.737684 1839 log.go:181] (0xc00003a6e0) Data frame received for 5\nI0131 01:11:37.737694 1839 log.go:181] (0xc0005e21e0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.16 30985\nConnection to 172.18.0.16 30985 port [tcp/30985] succeeded!\nI0131 01:11:37.739326 1839 log.go:181] (0xc00003a6e0) Data frame received for 1\nI0131 01:11:37.739338 1839 log.go:181] (0xc00053e5a0) (1) Data frame handling\nI0131 01:11:37.739345 1839 log.go:181] (0xc00053e5a0) (1) Data frame sent\nI0131 01:11:37.739526 1839 log.go:181] (0xc00003a6e0) (0xc00053e5a0) Stream removed, broadcasting: 1\nI0131 01:11:37.739598 1839 log.go:181] (0xc00003a6e0) Go away received\nI0131 01:11:37.739902 1839 log.go:181] (0xc00003a6e0) (0xc00053e5a0) Stream removed, broadcasting: 1\nI0131 01:11:37.739919 1839 log.go:181] (0xc00003a6e0) (0xc00099a5a0) Stream removed, broadcasting: 3\nI0131 01:11:37.739934 1839 log.go:181] (0xc00003a6e0) (0xc0005e21e0) Stream removed, broadcasting: 5\n" Jan 31 01:11:37.744: INFO: stdout: "" Jan 31 01:11:37.744: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-5726 exec execpod-affinitywssmb -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.14:30985/ ; done' Jan 31 01:11:38.033: INFO: stderr: "I0131 01:11:37.858619 1857 log.go:181] (0xc0008de0b0) (0xc000a0d4a0) Create stream\nI0131 01:11:37.858670 1857 log.go:181] (0xc0008de0b0) (0xc000a0d4a0) Stream added, broadcasting: 1\nI0131 01:11:37.860327 1857 log.go:181] (0xc0008de0b0) Reply frame received for 1\nI0131 01:11:37.860366 1857 log.go:181] (0xc0008de0b0) (0xc0001f21e0) Create stream\nI0131 01:11:37.860376 1857 log.go:181] (0xc0008de0b0) (0xc0001f21e0) Stream added, broadcasting: 3\nI0131 01:11:37.861275 1857 log.go:181] (0xc0008de0b0) Reply frame received for 3\nI0131 01:11:37.861322 1857 log.go:181] (0xc0008de0b0) (0xc0003ca640) Create stream\nI0131 01:11:37.861338 1857 log.go:181] (0xc0008de0b0) (0xc0003ca640) Stream added, broadcasting: 5\nI0131 01:11:37.862089 1857 log.go:181] (0xc0008de0b0) Reply frame received for 5\nI0131 01:11:37.920976 1857 log.go:181] (0xc0008de0b0) Data frame received for 3\nI0131 01:11:37.921013 1857 log.go:181] (0xc0001f21e0) (3) Data frame handling\nI0131 01:11:37.921035 1857 log.go:181] (0xc0001f21e0) (3) Data frame sent\nI0131 01:11:37.921060 1857 log.go:181] (0xc0008de0b0) Data frame received for 5\nI0131 01:11:37.921081 1857 log.go:181] (0xc0003ca640) (5) Data frame handling\nI0131 01:11:37.921160 1857 log.go:181] (0xc0003ca640) (5) Data frame sent\n+ seq 0 
15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30985/\nI0131 01:11:37.927076 1857 log.go:181] (0xc0008de0b0) Data frame received for 3\nI0131 01:11:37.927100 1857 log.go:181] (0xc0001f21e0) (3) Data frame handling\nI0131 01:11:37.927114 1857 log.go:181] (0xc0001f21e0) (3) Data frame sent\nI0131 01:11:37.927590 1857 log.go:181] (0xc0008de0b0) Data frame received for 3\nI0131 01:11:37.927624 1857 log.go:181] (0xc0008de0b0) Data frame received for 5\nI0131 01:11:37.927652 1857 log.go:181] (0xc0003ca640) (5) Data frame handling\nI0131 01:11:37.927667 1857 log.go:181] (0xc0003ca640) (5) Data frame sent\nI0131 01:11:37.927679 1857 log.go:181] (0xc0008de0b0) Data frame received for 5\nI0131 01:11:37.927689 1857 log.go:181] (0xc0003ca640) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30985/\nI0131 01:11:37.927713 1857 log.go:181] (0xc0003ca640) (5) Data frame sent\nI0131 01:11:37.927727 1857 log.go:181] (0xc0001f21e0) (3) Data frame handling\nI0131 01:11:37.927742 1857 log.go:181] (0xc0001f21e0) (3) Data frame sent\nI0131 01:11:37.932176 1857 log.go:181] (0xc0008de0b0) Data frame received for 3\nI0131 01:11:37.932208 1857 log.go:181] (0xc0001f21e0) (3) Data frame handling\nI0131 01:11:37.932235 1857 log.go:181] (0xc0001f21e0) (3) Data frame sent\nI0131 01:11:37.932927 1857 log.go:181] (0xc0008de0b0) Data frame received for 3\nI0131 01:11:37.932962 1857 log.go:181] (0xc0008de0b0) Data frame received for 5\nI0131 01:11:37.932986 1857 log.go:181] (0xc0003ca640) (5) Data frame handling\nI0131 01:11:37.933000 1857 log.go:181] (0xc0003ca640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30985/\nI0131 01:11:37.933018 1857 log.go:181] (0xc0001f21e0) (3) Data frame handling\nI0131 01:11:37.933042 1857 log.go:181] (0xc0001f21e0) (3) Data frame sent\nI0131 01:11:37.939158 1857 log.go:181] (0xc0008de0b0) Data frame received for 3\nI0131 01:11:37.939180 1857 log.go:181] (0xc0001f21e0) (3) Data frame handling\nI0131 01:11:37.939200 1857 log.go:181] (0xc0001f21e0) (3) Data frame sent\nI0131 01:11:37.939530 1857 log.go:181] (0xc0008de0b0) Data frame received for 3\nI0131 01:11:37.939569 1857 log.go:181] (0xc0001f21e0) (3) Data frame handling\nI0131 01:11:37.939590 1857 log.go:181] (0xc0001f21e0) (3) Data frame sent\nI0131 01:11:37.939624 1857 log.go:181] (0xc0008de0b0) Data frame received for 5\nI0131 01:11:37.939641 1857 log.go:181] (0xc0003ca640) (5) Data frame handling\nI0131 01:11:37.939661 1857 log.go:181] (0xc0003ca640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30985/\nI0131 01:11:37.945588 1857 log.go:181] (0xc0008de0b0) Data frame received for 3\nI0131 01:11:37.945616 1857 log.go:181] (0xc0001f21e0) (3) Data frame handling\nI0131 01:11:37.945639 1857 log.go:181] (0xc0001f21e0) (3) Data frame sent\nI0131 01:11:37.946327 1857 log.go:181] (0xc0008de0b0) Data frame received for 3\nI0131 01:11:37.946355 1857 log.go:181] (0xc0001f21e0) (3) Data frame handling\nI0131 01:11:37.946365 1857 log.go:181] (0xc0001f21e0) (3) Data frame sent\nI0131 01:11:37.946383 1857 log.go:181] (0xc0008de0b0) Data frame received for 5\nI0131 01:11:37.946406 1857 log.go:181] (0xc0003ca640) (5) Data frame handling\nI0131 01:11:37.946424 1857 log.go:181] (0xc0003ca640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30985/I0131 01:11:37.946438 1857 log.go:181] (0xc0008de0b0) Data frame received for 5\nI0131 01:11:37.946471 1857 log.go:181] (0xc0003ca640) (5) Data frame 
handling\nI0131 01:11:37.946490 1857 log.go:181] (0xc0003ca640) (5) Data frame sent\n\nI0131 01:11:37.950174 1857 log.go:181] (0xc0008de0b0) Data frame received for 3\nI0131 01:11:37.950191 1857 log.go:181] (0xc0001f21e0) (3) Data frame handling\nI0131 01:11:37.950201 1857 log.go:181] (0xc0001f21e0) (3) Data frame sent\nI0131 01:11:37.951038 1857 log.go:181] (0xc0008de0b0) Data frame received for 5\nI0131 01:11:37.951069 1857 log.go:181] (0xc0003ca640) (5) Data frame handling\nI0131 01:11:37.951095 1857 log.go:181] (0xc0003ca640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30985/\nI0131 01:11:37.951118 1857 log.go:181] (0xc0008de0b0) Data frame received for 3\nI0131 01:11:37.951137 1857 log.go:181] (0xc0001f21e0) (3) Data frame handling\nI0131 01:11:37.951155 1857 log.go:181] (0xc0001f21e0) (3) Data frame sent\nI0131 01:11:37.956950 1857 log.go:181] (0xc0008de0b0) Data frame received for 3\nI0131 01:11:37.956964 1857 log.go:181] (0xc0001f21e0) (3) Data frame handling\nI0131 01:11:37.956971 1857 log.go:181] (0xc0001f21e0) (3) Data frame sent\nI0131 01:11:37.957615 1857 log.go:181] (0xc0008de0b0) Data frame received for 5\nI0131 01:11:37.957645 1857 log.go:181] (0xc0003ca640) (5) Data frame handling\nI0131 01:11:37.957660 1857 log.go:181] (0xc0003ca640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30985/\nI0131 01:11:37.957821 1857 log.go:181] (0xc0008de0b0) Data frame received for 3\nI0131 01:11:37.957833 1857 log.go:181] (0xc0001f21e0) (3) Data frame handling\nI0131 01:11:37.957844 1857 log.go:181] (0xc0001f21e0) (3) Data frame sent\nI0131 01:11:37.963485 1857 log.go:181] (0xc0008de0b0) Data frame received for 3\nI0131 01:11:37.963502 1857 log.go:181] (0xc0001f21e0) (3) Data frame handling\nI0131 01:11:37.963518 1857 log.go:181] (0xc0001f21e0) (3) Data frame sent\nI0131 01:11:37.964142 1857 log.go:181] (0xc0008de0b0) Data frame received for 5\nI0131 01:11:37.964169 1857 log.go:181] (0xc0003ca640) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30985/\nI0131 01:11:37.964186 1857 log.go:181] (0xc0008de0b0) Data frame received for 3\nI0131 01:11:37.964203 1857 log.go:181] (0xc0001f21e0) (3) Data frame handling\nI0131 01:11:37.964209 1857 log.go:181] (0xc0001f21e0) (3) Data frame sent\nI0131 01:11:37.964219 1857 log.go:181] (0xc0003ca640) (5) Data frame sent\nI0131 01:11:37.969948 1857 log.go:181] (0xc0008de0b0) Data frame received for 3\nI0131 01:11:37.969973 1857 log.go:181] (0xc0001f21e0) (3) Data frame handling\nI0131 01:11:37.970081 1857 log.go:181] (0xc0001f21e0) (3) Data frame sent\nI0131 01:11:37.970654 1857 log.go:181] (0xc0008de0b0) Data frame received for 3\nI0131 01:11:37.970665 1857 log.go:181] (0xc0001f21e0) (3) Data frame handling\nI0131 01:11:37.970670 1857 log.go:181] (0xc0001f21e0) (3) Data frame sent\nI0131 01:11:37.970708 1857 log.go:181] (0xc0008de0b0) Data frame received for 5\nI0131 01:11:37.970736 1857 log.go:181] (0xc0003ca640) (5) Data frame handling\nI0131 01:11:37.970762 1857 log.go:181] (0xc0003ca640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30985/\nI0131 01:11:37.976692 1857 log.go:181] (0xc0008de0b0) Data frame received for 3\nI0131 01:11:37.976706 1857 log.go:181] (0xc0001f21e0) (3) Data frame handling\nI0131 01:11:37.976714 1857 log.go:181] (0xc0001f21e0) (3) Data frame sent\nI0131 01:11:37.977850 1857 log.go:181] (0xc0008de0b0) Data frame received for 5\nI0131 01:11:37.977879 1857 log.go:181] (0xc0003ca640) (5) 
Data frame handling\nI0131 01:11:37.977892 1857 log.go:181] (0xc0003ca640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30985/\nI0131 01:11:37.977912 1857 log.go:181] (0xc0008de0b0) Data frame received for 3\nI0131 01:11:37.977928 1857 log.go:181] (0xc0001f21e0) (3) Data frame handling\nI0131 01:11:37.977965 1857 log.go:181] (0xc0001f21e0) (3) Data frame sent\nI0131 01:11:37.982917 1857 log.go:181] (0xc0008de0b0) Data frame received for 3\nI0131 01:11:37.982943 1857 log.go:181] (0xc0001f21e0) (3) Data frame handling\nI0131 01:11:37.982964 1857 log.go:181] (0xc0001f21e0) (3) Data frame sent\nI0131 01:11:37.983497 1857 log.go:181] (0xc0008de0b0) Data frame received for 3\nI0131 01:11:37.983524 1857 log.go:181] (0xc0001f21e0) (3) Data frame handling\nI0131 01:11:37.983536 1857 log.go:181] (0xc0001f21e0) (3) Data frame sent\nI0131 01:11:37.983554 1857 log.go:181] (0xc0008de0b0) Data frame received for 5\nI0131 01:11:37.983564 1857 log.go:181] (0xc0003ca640) (5) Data frame handling\nI0131 01:11:37.983574 1857 log.go:181] (0xc0003ca640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30985/\nI0131 01:11:37.989989 1857 log.go:181] (0xc0008de0b0) Data frame received for 3\nI0131 01:11:37.990007 1857 log.go:181] (0xc0001f21e0) (3) Data frame handling\nI0131 01:11:37.990018 1857 log.go:181] (0xc0001f21e0) (3) Data frame sent\nI0131 01:11:37.990542 1857 log.go:181] (0xc0008de0b0) Data frame received for 3\nI0131 01:11:37.990558 1857 log.go:181] (0xc0001f21e0) (3) Data frame handling\nI0131 01:11:37.990567 1857 log.go:181] (0xc0001f21e0) (3) Data frame sent\nI0131 01:11:37.990618 1857 log.go:181] (0xc0008de0b0) Data frame received for 5\nI0131 01:11:37.990648 1857 log.go:181] (0xc0003ca640) (5) Data frame handling\nI0131 01:11:37.990669 1857 log.go:181] (0xc0003ca640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30985/\nI0131 01:11:37.997521 1857 log.go:181] (0xc0008de0b0) Data frame received for 3\nI0131 01:11:37.997550 1857 log.go:181] (0xc0001f21e0) (3) Data frame handling\nI0131 01:11:37.997576 1857 log.go:181] (0xc0001f21e0) (3) Data frame sent\nI0131 01:11:37.998147 1857 log.go:181] (0xc0008de0b0) Data frame received for 5\nI0131 01:11:37.998176 1857 log.go:181] (0xc0008de0b0) Data frame received for 3\nI0131 01:11:37.998199 1857 log.go:181] (0xc0001f21e0) (3) Data frame handling\nI0131 01:11:37.998241 1857 log.go:181] (0xc0001f21e0) (3) Data frame sent\nI0131 01:11:37.998287 1857 log.go:181] (0xc0003ca640) (5) Data frame handling\nI0131 01:11:37.998321 1857 log.go:181] (0xc0003ca640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30985/\nI0131 01:11:38.004303 1857 log.go:181] (0xc0008de0b0) Data frame received for 3\nI0131 01:11:38.004328 1857 log.go:181] (0xc0001f21e0) (3) Data frame handling\nI0131 01:11:38.004344 1857 log.go:181] (0xc0001f21e0) (3) Data frame sent\nI0131 01:11:38.005005 1857 log.go:181] (0xc0008de0b0) Data frame received for 5\nI0131 01:11:38.005020 1857 log.go:181] (0xc0003ca640) (5) Data frame handling\nI0131 01:11:38.005029 1857 log.go:181] (0xc0003ca640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30985/\nI0131 01:11:38.005041 1857 log.go:181] (0xc0008de0b0) Data frame received for 3\nI0131 01:11:38.005054 1857 log.go:181] (0xc0001f21e0) (3) Data frame handling\nI0131 01:11:38.005068 1857 log.go:181] (0xc0001f21e0) (3) Data frame sent\nI0131 01:11:38.011520 1857 log.go:181] (0xc0008de0b0) Data 
frame received for 3\nI0131 01:11:38.011537 1857 log.go:181] (0xc0001f21e0) (3) Data frame handling\nI0131 01:11:38.011557 1857 log.go:181] (0xc0001f21e0) (3) Data frame sent\nI0131 01:11:38.012190 1857 log.go:181] (0xc0008de0b0) Data frame received for 3\nI0131 01:11:38.012222 1857 log.go:181] (0xc0001f21e0) (3) Data frame handling\nI0131 01:11:38.012234 1857 log.go:181] (0xc0001f21e0) (3) Data frame sent\nI0131 01:11:38.012273 1857 log.go:181] (0xc0008de0b0) Data frame received for 5\nI0131 01:11:38.012292 1857 log.go:181] (0xc0003ca640) (5) Data frame handling\nI0131 01:11:38.012309 1857 log.go:181] (0xc0003ca640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30985/\nI0131 01:11:38.017413 1857 log.go:181] (0xc0008de0b0) Data frame received for 3\nI0131 01:11:38.017429 1857 log.go:181] (0xc0001f21e0) (3) Data frame handling\nI0131 01:11:38.017442 1857 log.go:181] (0xc0001f21e0) (3) Data frame sent\nI0131 01:11:38.018028 1857 log.go:181] (0xc0008de0b0) Data frame received for 3\nI0131 01:11:38.018052 1857 log.go:181] (0xc0001f21e0) (3) Data frame handling\nI0131 01:11:38.018063 1857 log.go:181] (0xc0001f21e0) (3) Data frame sent\nI0131 01:11:38.018079 1857 log.go:181] (0xc0008de0b0) Data frame received for 5\nI0131 01:11:38.018088 1857 log.go:181] (0xc0003ca640) (5) Data frame handling\nI0131 01:11:38.018095 1857 log.go:181] (0xc0003ca640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30985/\nI0131 01:11:38.025148 1857 log.go:181] (0xc0008de0b0) Data frame received for 3\nI0131 01:11:38.025190 1857 log.go:181] (0xc0001f21e0) (3) Data frame handling\nI0131 01:11:38.025211 1857 log.go:181] (0xc0001f21e0) (3) Data frame sent\nI0131 01:11:38.025974 1857 log.go:181] (0xc0008de0b0) Data frame received for 5\nI0131 01:11:38.025992 1857 log.go:181] (0xc0003ca640) (5) Data frame handling\nI0131 01:11:38.026046 1857 log.go:181] (0xc0008de0b0) Data frame received for 3\nI0131 01:11:38.026075 1857 log.go:181] (0xc0001f21e0) (3) Data frame handling\nI0131 01:11:38.028228 1857 log.go:181] (0xc0008de0b0) Data frame received for 1\nI0131 01:11:38.028252 1857 log.go:181] (0xc000a0d4a0) (1) Data frame handling\nI0131 01:11:38.028264 1857 log.go:181] (0xc000a0d4a0) (1) Data frame sent\nI0131 01:11:38.028281 1857 log.go:181] (0xc0008de0b0) (0xc000a0d4a0) Stream removed, broadcasting: 1\nI0131 01:11:38.028385 1857 log.go:181] (0xc0008de0b0) Go away received\nI0131 01:11:38.028611 1857 log.go:181] (0xc0008de0b0) (0xc000a0d4a0) Stream removed, broadcasting: 1\nI0131 01:11:38.028632 1857 log.go:181] (0xc0008de0b0) (0xc0001f21e0) Stream removed, broadcasting: 3\nI0131 01:11:38.028639 1857 log.go:181] (0xc0008de0b0) (0xc0003ca640) Stream removed, broadcasting: 5\n" Jan 31 01:11:38.033: INFO: stdout: "\naffinity-nodeport-8dbfs\naffinity-nodeport-8dbfs\naffinity-nodeport-8dbfs\naffinity-nodeport-8dbfs\naffinity-nodeport-8dbfs\naffinity-nodeport-8dbfs\naffinity-nodeport-8dbfs\naffinity-nodeport-8dbfs\naffinity-nodeport-8dbfs\naffinity-nodeport-8dbfs\naffinity-nodeport-8dbfs\naffinity-nodeport-8dbfs\naffinity-nodeport-8dbfs\naffinity-nodeport-8dbfs\naffinity-nodeport-8dbfs\naffinity-nodeport-8dbfs" Jan 31 01:11:38.033: INFO: Received response from host: affinity-nodeport-8dbfs Jan 31 01:11:38.033: INFO: Received response from host: affinity-nodeport-8dbfs Jan 31 01:11:38.033: INFO: Received response from host: affinity-nodeport-8dbfs Jan 31 01:11:38.033: INFO: Received response from host: affinity-nodeport-8dbfs Jan 31 01:11:38.033: INFO: Received 
response from host: affinity-nodeport-8dbfs
Jan 31 01:11:38.033: INFO: Received response from host: affinity-nodeport-8dbfs
Jan 31 01:11:38.033: INFO: Received response from host: affinity-nodeport-8dbfs
Jan 31 01:11:38.034: INFO: Received response from host: affinity-nodeport-8dbfs
Jan 31 01:11:38.034: INFO: Received response from host: affinity-nodeport-8dbfs
Jan 31 01:11:38.034: INFO: Received response from host: affinity-nodeport-8dbfs
Jan 31 01:11:38.034: INFO: Received response from host: affinity-nodeport-8dbfs
Jan 31 01:11:38.034: INFO: Received response from host: affinity-nodeport-8dbfs
Jan 31 01:11:38.034: INFO: Received response from host: affinity-nodeport-8dbfs
Jan 31 01:11:38.034: INFO: Received response from host: affinity-nodeport-8dbfs
Jan 31 01:11:38.034: INFO: Received response from host: affinity-nodeport-8dbfs
Jan 31 01:11:38.034: INFO: Received response from host: affinity-nodeport-8dbfs
Jan 31 01:11:38.034: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport in namespace services-5726, will wait for the garbage collector to delete the pods
Jan 31 01:11:38.163: INFO: Deleting ReplicationController affinity-nodeport took: 7.076255ms
Jan 31 01:11:38.663: INFO: Terminating ReplicationController affinity-nodeport pods took: 500.201009ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 31 01:12:11.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5726" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
• [SLOW TEST:45.757 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should have session affinity work for NodePort service [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":311,"completed":174,"skipped":3391,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 31 01:12:11.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
Jan 31 01:12:11.402: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Jan 31 01:12:13.461: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 31 01:12:14.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2572" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":311,"completed":175,"skipped":3399,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 31 01:12:14.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
STEP: Performing setup for networking test in namespace pod-network-test-683
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 31 01:12:15.075: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jan 31 01:12:15.985: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 31 01:12:18.266: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 31 01:12:20.176: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 01:12:21.990: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 01:12:24.002: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 01:12:25.990: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 01:12:27.989: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 01:12:29.990: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 01:12:31.990: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 01:12:33.990: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 01:12:35.990: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 01:12:37.988: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 31 01:12:39.990: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jan 31 01:12:39.997: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jan 31 01:12:44.052: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
Jan 31 01:12:44.052: INFO: Going to poll 10.244.2.218 on port 8081 at least 0 times, with a maximum of 34 tries before failing
Jan 31 01:12:44.055: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.218 8081 | grep -v '^\s*$']
Namespace:pod-network-test-683 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 31 01:12:44.055: INFO: >>> kubeConfig: /root/.kube/config I0131 01:12:44.088528 7 log.go:181] (0xc0000e5d90) (0xc004b0a780) Create stream I0131 01:12:44.088560 7 log.go:181] (0xc0000e5d90) (0xc004b0a780) Stream added, broadcasting: 1 I0131 01:12:44.090215 7 log.go:181] (0xc0000e5d90) Reply frame received for 1 I0131 01:12:44.090256 7 log.go:181] (0xc0000e5d90) (0xc004b0a820) Create stream I0131 01:12:44.090265 7 log.go:181] (0xc0000e5d90) (0xc004b0a820) Stream added, broadcasting: 3 I0131 01:12:44.090970 7 log.go:181] (0xc0000e5d90) Reply frame received for 3 I0131 01:12:44.090996 7 log.go:181] (0xc0000e5d90) (0xc004b0a8c0) Create stream I0131 01:12:44.091004 7 log.go:181] (0xc0000e5d90) (0xc004b0a8c0) Stream added, broadcasting: 5 I0131 01:12:44.091809 7 log.go:181] (0xc0000e5d90) Reply frame received for 5 I0131 01:12:45.153963 7 log.go:181] (0xc0000e5d90) Data frame received for 5 I0131 01:12:45.154026 7 log.go:181] (0xc004b0a8c0) (5) Data frame handling I0131 01:12:45.154113 7 log.go:181] (0xc0000e5d90) Data frame received for 3 I0131 01:12:45.154143 7 log.go:181] (0xc004b0a820) (3) Data frame handling I0131 01:12:45.154164 7 log.go:181] (0xc004b0a820) (3) Data frame sent I0131 01:12:45.154175 7 log.go:181] (0xc0000e5d90) Data frame received for 3 I0131 01:12:45.154194 7 log.go:181] (0xc004b0a820) (3) Data frame handling I0131 01:12:45.156201 7 log.go:181] (0xc0000e5d90) Data frame received for 1 I0131 01:12:45.156232 7 log.go:181] (0xc004b0a780) (1) Data frame handling I0131 01:12:45.156249 7 log.go:181] (0xc004b0a780) (1) Data frame sent I0131 01:12:45.156268 7 log.go:181] (0xc0000e5d90) (0xc004b0a780) Stream removed, broadcasting: 1 I0131 01:12:45.156284 7 log.go:181] (0xc0000e5d90) Go away received I0131 01:12:45.156406 7 log.go:181] (0xc0000e5d90) (0xc004b0a780) Stream removed, broadcasting: 1 I0131 01:12:45.156428 7 log.go:181] (0xc0000e5d90) (0xc004b0a820) Stream removed, broadcasting: 3 I0131 01:12:45.156442 7 log.go:181] (0xc0000e5d90) (0xc004b0a8c0) Stream removed, broadcasting: 5 Jan 31 01:12:45.156: INFO: Found all 1 expected endpoints: [netserver-0] Jan 31 01:12:45.156: INFO: Going to poll 10.244.1.187 on port 8081 at least 0 times, with a maximum of 34 tries before failing Jan 31 01:12:45.160: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.187 8081 | grep -v '^\s*$'] Namespace:pod-network-test-683 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 31 01:12:45.160: INFO: >>> kubeConfig: /root/.kube/config I0131 01:12:45.194174 7 log.go:181] (0xc00705a6e0) (0xc002146be0) Create stream I0131 01:12:45.194215 7 log.go:181] (0xc00705a6e0) (0xc002146be0) Stream added, broadcasting: 1 I0131 01:12:45.195823 7 log.go:181] (0xc00705a6e0) Reply frame received for 1 I0131 01:12:45.195862 7 log.go:181] (0xc00705a6e0) (0xc004b0a960) Create stream I0131 01:12:45.195873 7 log.go:181] (0xc00705a6e0) (0xc004b0a960) Stream added, broadcasting: 3 I0131 01:12:45.196940 7 log.go:181] (0xc00705a6e0) Reply frame received for 3 I0131 01:12:45.196998 7 log.go:181] (0xc00705a6e0) (0xc000f89680) Create stream I0131 01:12:45.197013 7 log.go:181] (0xc00705a6e0) (0xc000f89680) Stream added, broadcasting: 5 I0131 01:12:45.198072 7 log.go:181] (0xc00705a6e0) Reply frame received for 5 I0131 
01:12:46.298142 7 log.go:181] (0xc00705a6e0) Data frame received for 3 I0131 01:12:46.298206 7 log.go:181] (0xc004b0a960) (3) Data frame handling I0131 01:12:46.298266 7 log.go:181] (0xc004b0a960) (3) Data frame sent I0131 01:12:46.298293 7 log.go:181] (0xc00705a6e0) Data frame received for 3 I0131 01:12:46.298311 7 log.go:181] (0xc004b0a960) (3) Data frame handling I0131 01:12:46.298359 7 log.go:181] (0xc00705a6e0) Data frame received for 5 I0131 01:12:46.298411 7 log.go:181] (0xc000f89680) (5) Data frame handling I0131 01:12:46.300319 7 log.go:181] (0xc00705a6e0) Data frame received for 1 I0131 01:12:46.300345 7 log.go:181] (0xc002146be0) (1) Data frame handling I0131 01:12:46.300365 7 log.go:181] (0xc002146be0) (1) Data frame sent I0131 01:12:46.300383 7 log.go:181] (0xc00705a6e0) (0xc002146be0) Stream removed, broadcasting: 1 I0131 01:12:46.300507 7 log.go:181] (0xc00705a6e0) (0xc002146be0) Stream removed, broadcasting: 1 I0131 01:12:46.300537 7 log.go:181] (0xc00705a6e0) (0xc004b0a960) Stream removed, broadcasting: 3 I0131 01:12:46.300737 7 log.go:181] (0xc00705a6e0) Go away received I0131 01:12:46.300792 7 log.go:181] (0xc00705a6e0) (0xc000f89680) Stream removed, broadcasting: 5 Jan 31 01:12:46.301: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:12:46.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-683" for this suite. • [SLOW TEST:31.565 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:27 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:30 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":176,"skipped":3422,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:12:46.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test override arguments Jan 31 01:12:46.441: INFO: Waiting up to 5m0s for pod "client-containers-232b87f4-9d2a-4678-a276-6778a702c8e2" in namespace "containers-7484" to be "Succeeded or Failed" Jan 31 01:12:46.444: INFO: Pod "client-containers-232b87f4-9d2a-4678-a276-6778a702c8e2": 
Phase="Pending", Reason="", readiness=false. Elapsed: 2.550534ms Jan 31 01:12:48.448: INFO: Pod "client-containers-232b87f4-9d2a-4678-a276-6778a702c8e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006758153s Jan 31 01:12:50.493: INFO: Pod "client-containers-232b87f4-9d2a-4678-a276-6778a702c8e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051400375s STEP: Saw pod success Jan 31 01:12:50.493: INFO: Pod "client-containers-232b87f4-9d2a-4678-a276-6778a702c8e2" satisfied condition "Succeeded or Failed" Jan 31 01:12:50.496: INFO: Trying to get logs from node latest-worker2 pod client-containers-232b87f4-9d2a-4678-a276-6778a702c8e2 container agnhost-container: STEP: delete the pod Jan 31 01:12:50.528: INFO: Waiting for pod client-containers-232b87f4-9d2a-4678-a276-6778a702c8e2 to disappear Jan 31 01:12:50.531: INFO: Pod client-containers-232b87f4-9d2a-4678-a276-6778a702c8e2 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:12:50.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7484" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":311,"completed":177,"skipped":3470,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:12:50.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Jan 31 01:12:50.980: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Jan 31 01:12:51.012: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:12:51.014: INFO: Number of nodes with available pods: 0 Jan 31 01:12:51.014: INFO: Node latest-worker is running more than one daemon pod Jan 31 01:12:52.020: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:12:52.023: INFO: Number of nodes with available pods: 0 Jan 31 01:12:52.023: INFO: Node latest-worker is running more than one daemon pod Jan 31 01:12:53.313: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:12:53.643: INFO: Number of nodes with available pods: 0 Jan 31 01:12:53.643: INFO: Node latest-worker is running more than one daemon pod Jan 31 01:12:54.149: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:12:54.153: INFO: Number of nodes with available pods: 0 Jan 31 01:12:54.153: INFO: Node latest-worker is running more than one daemon pod Jan 31 01:12:55.021: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:12:55.024: INFO: Number of nodes with available pods: 0 Jan 31 01:12:55.024: INFO: Node latest-worker is running more than one daemon pod Jan 31 01:12:56.020: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:12:56.024: INFO: Number of nodes with available pods: 1 Jan 31 01:12:56.024: INFO: Node latest-worker2 is running more than one daemon pod Jan 31 01:12:57.020: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:12:57.024: INFO: Number of nodes with available pods: 2 Jan 31 01:12:57.024: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Jan 31 01:12:57.053: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:12:57.053: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:12:57.056: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:12:58.062: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:12:58.062: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. 
Jan 31 01:12:58.067: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:12:59.062: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:12:59.062: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:12:59.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:00.062: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:00.062: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:00.067: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:01.062: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:01.062: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:01.062: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:01.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:02.062: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:02.062: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:02.062: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:02.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:03.062: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:03.062: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:03.062: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:03.065: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:04.062: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:04.062: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. 
Jan 31 01:13:04.062: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:04.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:05.062: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:05.062: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:05.062: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:05.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:06.062: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:06.062: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:06.062: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:06.067: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:07.061: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:07.061: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:07.062: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:07.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:08.062: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:08.062: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:08.062: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:08.067: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:09.061: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:09.061: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:09.061: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:09.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:10.062: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:10.062: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. 
Jan 31 01:13:10.062: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:10.067: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:11.062: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:11.062: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:11.062: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:11.067: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:12.062: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:12.062: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:12.062: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:12.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:13.061: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:13.061: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:13.061: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:13.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:14.061: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:14.061: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:14.061: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:14.064: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:15.061: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:15.061: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:15.061: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:15.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:16.063: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:16.063: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. 
Jan 31 01:13:16.063: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:16.067: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:17.074: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:17.075: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:17.075: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:17.079: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:18.062: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:18.062: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:18.062: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:18.065: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:19.061: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:19.061: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:19.061: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:19.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:20.062: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:20.062: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:20.062: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:20.065: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:21.060: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:21.060: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:21.060: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:21.063: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:22.061: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:22.061: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. 
Jan 31 01:13:22.061: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:22.065: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:23.080: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:23.080: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:23.080: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:23.085: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:24.063: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:24.063: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:24.063: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:24.068: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:25.061: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:25.061: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:25.061: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:25.079: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:26.062: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:26.062: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:26.062: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:26.067: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:27.062: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:27.062: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:27.062: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:27.067: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:28.062: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:28.062: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. 
Jan 31 01:13:28.062: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:28.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:29.061: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:29.062: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:29.062: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:29.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:30.061: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:30.061: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:30.061: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:30.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:31.061: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:31.061: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:31.061: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:31.065: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:32.064: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:32.064: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:32.064: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:32.069: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:33.061: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:33.061: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:33.061: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:33.064: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:34.061: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:34.061: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. 
Jan 31 01:13:34.061: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:34.065: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:35.062: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:35.062: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:35.062: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:35.067: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:36.061: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:36.061: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:36.061: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:36.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:37.062: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:37.062: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:37.062: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:37.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:38.062: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:38.062: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:38.062: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:38.067: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:39.062: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:39.062: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:39.062: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:39.067: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:40.062: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:40.062: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. 
Jan 31 01:13:40.062: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:40.067: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:41.061: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:41.061: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:41.061: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:41.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:42.063: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:42.063: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:42.063: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:42.067: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:43.061: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:43.061: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:43.061: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:43.065: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:44.061: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:44.061: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:44.061: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:44.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:45.062: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:45.062: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:45.062: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:45.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:46.062: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:46.062: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. 
Jan 31 01:13:46.062: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:46.068: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:47.061: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:47.061: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:47.061: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:47.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:48.062: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:48.062: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:48.062: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:48.067: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:49.061: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:49.061: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:49.061: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:49.065: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:50.062: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:50.062: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:50.062: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:50.070: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:51.061: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:51.061: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:51.061: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:51.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:52.061: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:52.061: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. 
Jan 31 01:13:52.061: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:52.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:53.061: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:53.061: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:53.061: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:53.064: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:54.062: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:54.062: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:54.062: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:54.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:55.061: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:55.061: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:55.061: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:55.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:56.069: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:56.069: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:56.069: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:56.074: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:57.061: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:57.061: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:57.061: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:57.064: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:58.061: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:58.061: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. 
Jan 31 01:13:58.061: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:58.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:13:59.060: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:59.060: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:13:59.060: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:13:59.064: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:14:00.061: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:14:00.061: INFO: Wrong image for pod: daemon-set-trxv9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:14:00.061: INFO: Pod daemon-set-trxv9 is not available Jan 31 01:14:00.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:14:01.061: INFO: Pod daemon-set-hjgs6 is not available Jan 31 01:14:01.061: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:14:01.065: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:14:02.061: INFO: Pod daemon-set-hjgs6 is not available Jan 31 01:14:02.061: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:14:02.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:14:03.061: INFO: Pod daemon-set-hjgs6 is not available Jan 31 01:14:03.061: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:14:03.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:14:04.061: INFO: Pod daemon-set-hjgs6 is not available Jan 31 01:14:04.061: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:14:04.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:14:05.060: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:14:05.064: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:14:06.060: INFO: Wrong image for pod: daemon-set-m44jj. 
Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:14:06.060: INFO: Pod daemon-set-m44jj is not available Jan 31 01:14:06.063: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:14:07.061: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:14:07.061: INFO: Pod daemon-set-m44jj is not available Jan 31 01:14:07.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:14:08.061: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:14:08.061: INFO: Pod daemon-set-m44jj is not available Jan 31 01:14:08.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:14:09.062: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:14:09.062: INFO: Pod daemon-set-m44jj is not available Jan 31 01:14:09.067: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:14:10.074: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:14:10.075: INFO: Pod daemon-set-m44jj is not available Jan 31 01:14:10.078: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:14:11.060: INFO: Wrong image for pod: daemon-set-m44jj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 31 01:14:11.060: INFO: Pod daemon-set-m44jj is not available Jan 31 01:14:11.064: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:14:12.061: INFO: Pod daemon-set-s79lc is not available Jan 31 01:14:12.067: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Jan 31 01:14:12.071: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:14:12.074: INFO: Number of nodes with available pods: 1 Jan 31 01:14:12.074: INFO: Node latest-worker is running more than one daemon pod Jan 31 01:14:13.079: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:14:13.082: INFO: Number of nodes with available pods: 1 Jan 31 01:14:13.082: INFO: Node latest-worker is running more than one daemon pod Jan 31 01:14:14.078: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:14:14.084: INFO: Number of nodes with available pods: 1 Jan 31 01:14:14.084: INFO: Node latest-worker is running more than one daemon pod Jan 31 01:14:15.080: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 31 01:14:15.083: INFO: Number of nodes with available pods: 2 Jan 31 01:14:15.083: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8554, will wait for the garbage collector to delete the pods Jan 31 01:14:15.155: INFO: Deleting DaemonSet.extensions daemon-set took: 6.794754ms Jan 31 01:14:15.755: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.251271ms Jan 31 01:14:21.158: INFO: Number of nodes with available pods: 0 Jan 31 01:14:21.158: INFO: Number of running nodes: 0, number of available pods: 0 Jan 31 01:14:21.161: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"1126473"},"items":null} Jan 31 01:14:21.164: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1126473"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:14:21.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8554" for this suite. 
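Before the timing summary below, it is worth decoding the minute-long stream of "Wrong image for pod" entries above: the framework polls once a second while the DaemonSet controller performs the RollingUpdate, and with the strategy's default maxUnavailable of 1 the pods are replaced one at a time, so the old docker.io/library/httpd:2.4.38-alpine image keeps being reported until each replacement pod (daemon-set-hjgs6, then daemon-set-s79lc) becomes available. A minimal client-go sketch of the kind of image update that kicks off this churn — hypothetical code, not the e2e framework's own, assuming a kubeconfig at the default location and reusing the namespace and name from the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a reachable cluster via ~/.kube/config, as in the log's kubeConfig line.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	ns, name := "daemonsets-8554", "daemon-set" // values taken from the log above

	// Fetch, mutate the pod template image, and update: this is what makes the
	// controller roll daemon pods and the test poll until no pod reports the
	// old image anymore.
	ds, err := cs.AppsV1().DaemonSets(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ds.Spec.Template.Spec.Containers[0].Image = "k8s.gcr.io/e2e-test-images/agnhost:2.21"
	if _, err := cs.AppsV1().DaemonSets(ns).Update(ctx, ds, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("daemonset image updated; RollingUpdate begins")
}

The control-plane node is skipped throughout because the DaemonSet's pods carry no toleration for the node-role.kubernetes.io/master NoSchedule taint, which is exactly what the repeated "can't tolerate" lines record.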
• [SLOW TEST:90.640 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":311,"completed":178,"skipped":3490,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:14:21.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test downward API volume plugin Jan 31 01:14:21.315: INFO: Waiting up to 5m0s for pod "downwardapi-volume-be0ff176-c23c-46b2-b9a6-1e1dd12b8fc4" in namespace "projected-8132" to be "Succeeded or Failed" Jan 31 01:14:21.318: INFO: Pod "downwardapi-volume-be0ff176-c23c-46b2-b9a6-1e1dd12b8fc4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.184454ms Jan 31 01:14:23.323: INFO: Pod "downwardapi-volume-be0ff176-c23c-46b2-b9a6-1e1dd12b8fc4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007734283s Jan 31 01:14:25.328: INFO: Pod "downwardapi-volume-be0ff176-c23c-46b2-b9a6-1e1dd12b8fc4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012759621s STEP: Saw pod success Jan 31 01:14:25.328: INFO: Pod "downwardapi-volume-be0ff176-c23c-46b2-b9a6-1e1dd12b8fc4" satisfied condition "Succeeded or Failed" Jan 31 01:14:25.331: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-be0ff176-c23c-46b2-b9a6-1e1dd12b8fc4 container client-container: STEP: delete the pod Jan 31 01:14:25.498: INFO: Waiting for pod downwardapi-volume-be0ff176-c23c-46b2-b9a6-1e1dd12b8fc4 to disappear Jan 31 01:14:25.511: INFO: Pod downwardapi-volume-be0ff176-c23c-46b2-b9a6-1e1dd12b8fc4 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:14:25.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8132" for this suite. 
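The spec above mounts downward API data through a projected volume and then checks that the per-item mode was applied to the file the kubelet wrote. A sketch of the volume shape being exercised, recorded as PASSED just below — illustrative only: the pod name, image, path, and the 0400 mode are assumptions, not values printed by the test:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // explicit per-item mode; the test asserts the mounted file's permissions

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"}, // hypothetical name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.21", // illustrative image
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "podname",
									FieldRef: &corev1.ObjectFieldSelector{
										FieldPath: "metadata.name",
									},
									Mode: &mode, // the mode-on-item-file behavior under test
								}},
							},
						}},
					},
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.Volumes[0])
}

The [LinuxOnly] tag reflects that per-file modes on projected volumes are a POSIX-permissions concept with no Windows equivalent.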
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":179,"skipped":3511,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-node] PodTemplates should delete a collection of pod templates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:14:25.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of pod templates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Create set of pod templates Jan 31 01:14:25.709: INFO: created test-podtemplate-1 Jan 31 01:14:25.721: INFO: created test-podtemplate-2 Jan 31 01:14:25.745: INFO: created test-podtemplate-3 STEP: get a list of pod templates with a label in the current namespace STEP: delete collection of pod templates Jan 31 01:14:25.787: INFO: requesting DeleteCollection of pod templates STEP: check that the list of pod templates matches the requested quantity Jan 31 01:14:25.874: INFO: requesting list of pod templates to confirm quantity [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:14:25.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-3369" for this suite. 
•{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":311,"completed":180,"skipped":3527,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:14:25.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:14:26.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4702" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":311,"completed":181,"skipped":3533,"failed":0} SSSSSS ------------------------------ [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:14:26.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:14:26.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1960" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":311,"completed":182,"skipped":3539,"failed":0} SSSSS ------------------------------ [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:14:26.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 [It] should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating a Deployment STEP: waiting for Deployment to be created STEP: waiting for all Replicas to be Ready Jan 31 01:14:26.392: INFO: observed Deployment test-deployment in namespace deployment-8500 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 31 01:14:26.392: INFO: observed Deployment test-deployment in namespace deployment-8500 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 31 01:14:26.416: INFO: observed Deployment test-deployment in namespace deployment-8500 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 31 01:14:26.416: INFO: observed Deployment test-deployment in namespace deployment-8500 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 31 01:14:26.496: INFO: observed Deployment test-deployment in namespace deployment-8500 with ReadyReplicas 0 and labels 
map[test-deployment-static:true] Jan 31 01:14:26.496: INFO: observed Deployment test-deployment in namespace deployment-8500 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 31 01:14:26.534: INFO: observed Deployment test-deployment in namespace deployment-8500 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 31 01:14:26.534: INFO: observed Deployment test-deployment in namespace deployment-8500 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 31 01:14:30.052: INFO: observed Deployment test-deployment in namespace deployment-8500 with ReadyReplicas 1 and labels map[test-deployment-static:true] Jan 31 01:14:30.052: INFO: observed Deployment test-deployment in namespace deployment-8500 with ReadyReplicas 1 and labels map[test-deployment-static:true] Jan 31 01:14:30.137: INFO: observed Deployment test-deployment in namespace deployment-8500 with ReadyReplicas 2 and labels map[test-deployment-static:true] STEP: patching the Deployment Jan 31 01:14:30.154: INFO: observed event type ADDED STEP: waiting for Replicas to scale Jan 31 01:14:30.156: INFO: observed Deployment test-deployment in namespace deployment-8500 with ReadyReplicas 0 Jan 31 01:14:30.156: INFO: observed Deployment test-deployment in namespace deployment-8500 with ReadyReplicas 0 Jan 31 01:14:30.156: INFO: observed Deployment test-deployment in namespace deployment-8500 with ReadyReplicas 0 Jan 31 01:14:30.156: INFO: observed Deployment test-deployment in namespace deployment-8500 with ReadyReplicas 0 Jan 31 01:14:30.156: INFO: observed Deployment test-deployment in namespace deployment-8500 with ReadyReplicas 0 Jan 31 01:14:30.156: INFO: observed Deployment test-deployment in namespace deployment-8500 with ReadyReplicas 0 Jan 31 01:14:30.156: INFO: observed Deployment test-deployment in namespace deployment-8500 with ReadyReplicas 0 Jan 31 01:14:30.156: INFO: observed Deployment test-deployment in namespace deployment-8500 with ReadyReplicas 0 Jan 31 01:14:30.156: INFO: observed Deployment test-deployment in namespace deployment-8500 with ReadyReplicas 1 Jan 31 01:14:30.156: INFO: observed Deployment test-deployment in namespace deployment-8500 with ReadyReplicas 1 Jan 31 01:14:30.156: INFO: observed Deployment test-deployment in namespace deployment-8500 with ReadyReplicas 2 Jan 31 01:14:30.156: INFO: observed Deployment test-deployment in namespace deployment-8500 with ReadyReplicas 2 Jan 31 01:14:30.156: INFO: observed Deployment test-deployment in namespace deployment-8500 with ReadyReplicas 2 Jan 31 01:14:30.156: INFO: observed Deployment test-deployment in namespace deployment-8500 with ReadyReplicas 2 Jan 31 01:14:30.188: INFO: observed Deployment test-deployment in namespace deployment-8500 with ReadyReplicas 2 Jan 31 01:14:30.188: INFO: observed Deployment test-deployment in namespace deployment-8500 with ReadyReplicas 2 Jan 31 01:14:30.250: INFO: observed Deployment test-deployment in namespace deployment-8500 with ReadyReplicas 2 Jan 31 01:14:30.250: INFO: observed Deployment test-deployment in namespace deployment-8500 with ReadyReplicas 2 Jan 31 01:14:30.566: INFO: observed Deployment test-deployment in namespace deployment-8500 with ReadyReplicas 1 STEP: listing Deployments Jan 31 01:14:30.771: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true] STEP: updating the Deployment Jan 31 01:14:30.783: INFO: observed Deployment test-deployment in namespace deployment-8500 with ReadyReplicas 1 STEP: fetching the 
DeploymentStatus Jan 31 01:14:30.845: INFO: observed Deployment test-deployment in namespace deployment-8500 with ReadyReplicas 1 and labels map[test-deployment:patched test-deployment-static:true] Jan 31 01:14:30.845: INFO: observed Deployment test-deployment in namespace deployment-8500 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 31 01:14:30.862: INFO: observed Deployment test-deployment in namespace deployment-8500 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 31 01:14:30.929: INFO: observed Deployment test-deployment in namespace deployment-8500 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 31 01:14:31.560: INFO: observed Deployment test-deployment in namespace deployment-8500 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 31 01:14:32.730: INFO: observed Deployment test-deployment in namespace deployment-8500 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 31 01:14:32.767: INFO: observed Deployment test-deployment in namespace deployment-8500 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] STEP: patching the DeploymentStatus STEP: fetching the DeploymentStatus Jan 31 01:14:36.960: INFO: observed Deployment test-deployment in namespace deployment-8500 with ReadyReplicas 1 Jan 31 01:14:36.960: INFO: observed Deployment test-deployment in namespace deployment-8500 with ReadyReplicas 1 Jan 31 01:14:36.961: INFO: observed Deployment test-deployment in namespace deployment-8500 with ReadyReplicas 1 Jan 31 01:14:36.961: INFO: observed Deployment test-deployment in namespace deployment-8500 with ReadyReplicas 1 Jan 31 01:14:36.961: INFO: observed Deployment test-deployment in namespace deployment-8500 with ReadyReplicas 1 Jan 31 01:14:36.961: INFO: observed Deployment test-deployment in namespace deployment-8500 with ReadyReplicas 1 Jan 31 01:14:36.961: INFO: observed Deployment test-deployment in namespace deployment-8500 with ReadyReplicas 1 STEP: deleting the Deployment Jan 31 01:14:37.517: INFO: observed event type MODIFIED Jan 31 01:14:37.517: INFO: observed event type MODIFIED Jan 31 01:14:37.517: INFO: observed event type MODIFIED Jan 31 01:14:37.517: INFO: observed event type MODIFIED Jan 31 01:14:37.517: INFO: observed event type MODIFIED Jan 31 01:14:37.518: INFO: observed event type MODIFIED Jan 31 01:14:37.518: INFO: observed event type MODIFIED Jan 31 01:14:37.518: INFO: observed event type MODIFIED Jan 31 01:14:37.518: INFO: observed event type MODIFIED Jan 31 01:14:37.518: INFO: observed event type MODIFIED Jan 31 01:14:37.518: INFO: observed event type MODIFIED Jan 31 01:14:37.518: INFO: observed event type MODIFIED [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 Jan 31 01:14:37.526: INFO: Log out all the ReplicaSets if there is no deployment created Jan 31 01:14:37.532: INFO: ReplicaSet "test-deployment-768947d6f5": &ReplicaSet{ObjectMeta:{test-deployment-768947d6f5 deployment-8500 ef706d1e-00af-48d2-87a7-cc36622514ef 1126733 3 2021-01-31 01:14:30 +0000 UTC map[pod-template-hash:768947d6f5 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:3] [{apps/v1 Deployment test-deployment f8d8cff9-b227-498e-8650-c80ee2a4ae05 
0xc006b29467 0xc006b29468}] [] [{kube-controller-manager Update apps/v1 2021-01-31 01:14:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8d8cff9-b227-498e-8650-c80ee2a4ae05\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 768947d6f5,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:768947d6f5 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc006b294d0 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:3,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 31 01:14:37.535: INFO: pod: "test-deployment-768947d6f5-4fjlk": &Pod{ObjectMeta:{test-deployment-768947d6f5-4fjlk test-deployment-768947d6f5- deployment-8500 10fc0e62-fc6a-4ce5-8f74-49fa76e0ded2 1126737 0 2021-01-31 01:14:36 +0000 UTC map[pod-template-hash:768947d6f5 test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-768947d6f5 ef706d1e-00af-48d2-87a7-cc36622514ef 0xc0009f4557 0xc0009f4558}] [] [{kube-controller-manager Update v1 2021-01-31 01:14:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ef706d1e-00af-48d2-87a7-cc36622514ef\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-31 01:14:37 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-484hg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-484hg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-484hg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 01:14:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 01:14:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[test-deployment],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 01:14:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [test-deployment],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 01:14:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2021-01-31 01:14:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 31 01:14:37.535: INFO: pod: "test-deployment-768947d6f5-mrg99": &Pod{ObjectMeta:{test-deployment-768947d6f5-mrg99 test-deployment-768947d6f5- deployment-8500 3965556c-cdf3-4259-84d9-3ec2fa50b487 1126715 0 2021-01-31 01:14:32 +0000 UTC map[pod-template-hash:768947d6f5 test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-768947d6f5 ef706d1e-00af-48d2-87a7-cc36622514ef 0xc0009f4e47 0xc0009f4e48}] [] [{kube-controller-manager Update v1 2021-01-31 01:14:32 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ef706d1e-00af-48d2-87a7-cc36622514ef\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-31 01:14:36 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.192\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-484hg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-484hg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-484hg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 01:14:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 01:14:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2021-01-31 01:14:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 01:14:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.1.192,StartTime:2021-01-31 01:14:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-31 01:14:35 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7b03fa561420ad37f49ffa3ef3c57bfaf5230fcddeeb9fe3dfe6277d7c042f7a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.192,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 31 01:14:37.535: INFO: ReplicaSet "test-deployment-7c65d4bcf9": &ReplicaSet{ObjectMeta:{test-deployment-7c65d4bcf9 deployment-8500 5d015f41-1d6d-4965-94dc-e750396ea0e6 1126734 4 2021-01-31 01:14:30 +0000 UTC map[pod-template-hash:7c65d4bcf9 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-deployment f8d8cff9-b227-498e-8650-c80ee2a4ae05 0xc006b29537 0xc006b29538}] [] [{kube-controller-manager Update apps/v1 2021-01-31 01:14:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8d8cff9-b227-498e-8650-c80ee2a4ae05\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 7c65d4bcf9,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:7c65d4bcf9 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/pause:3.2 [/bin/sleep 100000] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc006b295b8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:4,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 31 01:14:37.538: INFO: ReplicaSet "test-deployment-8b6954bfb": &ReplicaSet{ObjectMeta:{test-deployment-8b6954bfb deployment-8500 682f4cce-c690-4aa9-b2ba-473f4fd11a2f 1126637 2 2021-01-31 01:14:26 +0000 UTC map[pod-template-hash:8b6954bfb test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment f8d8cff9-b227-498e-8650-c80ee2a4ae05 0xc006b29617 0xc006b29618}] [] [{kube-controller-manager Update apps/v1 2021-01-31 01:14:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8d8cff9-b227-498e-8650-c80ee2a4ae05\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 8b6954bfb,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:8b6954bfb test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc006b29680 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 31 01:14:37.540: INFO: pod: "test-deployment-8b6954bfb-r8dw5": &Pod{ObjectMeta:{test-deployment-8b6954bfb-r8dw5 test-deployment-8b6954bfb- deployment-8500 aca00833-7381-480c-9b28-f57de79c920c 1126602 0 2021-01-31 01:14:26 +0000 UTC map[pod-template-hash:8b6954bfb test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-8b6954bfb 682f4cce-c690-4aa9-b2ba-473f4fd11a2f 0xc00703b287 0xc00703b288}] [] [{kube-controller-manager Update v1 2021-01-31 01:14:26 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"682f4cce-c690-4aa9-b2ba-473f4fd11a2f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-31 01:14:29 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.224\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-484hg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-484hg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-484hg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,Readi
nessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 01:14:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 01:14:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 01:14:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 01:14:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.2.224,StartTime:2021-01-31 01:14:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-31 01:14:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a,ContainerID:containerd://1edf391635dc1c90dcf45385c5e8cc93f27b47cec61b31b619aed0a2ac3bdb33,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.224,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:14:37.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8500" for this suite. 
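------------------------------
The pod and ReplicaSet dumps above come from the Deployment lifecycle test: it creates a Deployment, updates the pod template, and checks that the controller rolls pods over to a new ReplicaSet revision (visible above as the revision-2 ReplicaSet and the pod-template-hash labels). Below is a minimal client-go sketch of that create-then-update flow, not the e2e framework's own code; the namespace, object names, and images are taken from the log, everything else is an illustrative assumption.

package main

import (
        "context"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
        // Same kubeconfig path the e2e run itself uses.
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
                panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
                panic(err)
        }

        labels := map[string]string{"test-deployment-static": "true"}
        d := &appsv1.Deployment{
                ObjectMeta: metav1.ObjectMeta{Name: "test-deployment"},
                Spec: appsv1.DeploymentSpec{
                        Replicas: int32Ptr(2),
                        Selector: &metav1.LabelSelector{MatchLabels: labels},
                        Template: corev1.PodTemplateSpec{
                                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                                Spec: corev1.PodSpec{
                                        Containers: []corev1.Container{{
                                                Name:  "test-deployment",
                                                Image: "k8s.gcr.io/e2e-test-images/agnhost:2.21",
                                        }},
                                },
                        },
                },
        }

        ctx := context.TODO()
        created, err := cs.AppsV1().Deployments("deployment-8500").Create(ctx, d, metav1.CreateOptions{})
        if err != nil {
                panic(err)
        }

        // Changing the pod template image creates a new ReplicaSet revision;
        // the deployment controller then scales the old one down to 0 while
        // keeping it around for rollback, which is what the dumps above show.
        created.Spec.Template.Spec.Containers[0].Image = "docker.io/library/httpd:2.4.38-alpine"
        if _, err := cs.AppsV1().Deployments("deployment-8500").Update(ctx, created, metav1.UpdateOptions{}); err != nil {
                panic(err)
        }
}
------------------------------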
• [SLOW TEST:11.223 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":311,"completed":183,"skipped":3544,"failed":0} [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:14:37.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 31 01:14:43.174: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:14:43.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9151" for this suite. 
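------------------------------
The terminated-container test above relies on TerminationMessagePolicy: FallbackToLogsOnError, where the kubelet uses the tail of the container log as the termination message when the container fails and the termination-message file is empty. A hedged sketch of a pod spec that reproduces this behavior (the image, command, and names are illustrative assumptions; the log only confirms the expected message "DONE"):

package main

import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
)

func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
                panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
                panic(err)
        }

        pod := &corev1.Pod{
                ObjectMeta: metav1.ObjectMeta{Name: "termination-message-from-logs"},
                Spec: corev1.PodSpec{
                        RestartPolicy: corev1.RestartPolicyNever,
                        Containers: []corev1.Container{{
                                Name:  "main",
                                Image: "docker.io/library/busybox:1.29",
                                // The container fails and writes nothing to the message
                                // file, so the kubelet falls back to the log tail ("DONE")
                                // for Status.ContainerStatuses[0].State.Terminated.Message.
                                Command:                  []string{"/bin/sh", "-c", "echo -n DONE; exit 1"},
                                TerminationMessagePath:   "/dev/termination-log",
                                TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
                        }},
                },
        }
        if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
                panic(err)
        }
}
------------------------------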
• [SLOW TEST:5.693 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":311,"completed":184,"skipped":3544,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:14:43.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 31 01:14:44.189: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 31 01:14:46.359: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747652484, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747652484, loc:(*time.Location)(0x79bd420)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747652484, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747652484, loc:(*time.Location)(0x79bd420)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 31 01:14:49.387: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Listing all of the created validation 
webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:14:50.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5594" for this suite. STEP: Destroying namespace "webhook-5594-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:7.034 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":311,"completed":185,"skipped":3560,"failed":0} SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:14:50.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Jan 31 01:14:50.350: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 31 01:14:50.367: INFO: Waiting for terminating namespaces to be deleted... 
Jan 31 01:14:50.369: INFO: Logging pods the apiserver thinks are on node latest-worker before test Jan 31 01:14:50.374: INFO: chaos-controller-manager-69c479c674-tdrls from default started at 2021-01-26 11:59:56 +0000 UTC (1 container status recorded) Jan 31 01:14:50.374: INFO: Container chaos-mesh ready: true, restart count 0 Jan 31 01:14:50.374: INFO: chaos-daemon-vkxzr from default started at 2021-01-26 11:59:56 +0000 UTC (1 container status recorded) Jan 31 01:14:50.374: INFO: Container chaos-daemon ready: true, restart count 0 Jan 31 01:14:50.374: INFO: kindnet-5bf5g from kube-system started at 2021-01-26 08:08:41 +0000 UTC (1 container status recorded) Jan 31 01:14:50.374: INFO: Container kindnet-cni ready: true, restart count 0 Jan 31 01:14:50.374: INFO: kube-proxy-f59c8 from kube-system started at 2021-01-26 08:08:41 +0000 UTC (1 container status recorded) Jan 31 01:14:50.374: INFO: Container kube-proxy ready: true, restart count 0 Jan 31 01:14:50.374: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test Jan 31 01:14:50.406: INFO: chaos-daemon-g67vf from default started at 2021-01-26 11:59:56 +0000 UTC (1 container status recorded) Jan 31 01:14:50.406: INFO: Container chaos-daemon ready: true, restart count 0 Jan 31 01:14:50.406: INFO: coredns-74ff55c5b-ngxdm from kube-system started at 2021-01-27 12:43:36 +0000 UTC (1 container status recorded) Jan 31 01:14:50.406: INFO: Container coredns ready: true, restart count 0 Jan 31 01:14:50.406: INFO: coredns-74ff55c5b-ntztq from kube-system started at 2021-01-27 12:43:35 +0000 UTC (1 container status recorded) Jan 31 01:14:50.406: INFO: Container coredns ready: true, restart count 0 Jan 31 01:14:50.406: INFO: kindnet-98jtw from kube-system started at 2021-01-26 08:08:41 +0000 UTC (1 container status recorded) Jan 31 01:14:50.406: INFO: Container kindnet-cni ready: true, restart count 0 Jan 31 01:14:50.406: INFO: kube-proxy-skm7x from kube-system started at 2021-01-26 08:08:41 +0000 UTC (1 container status recorded) Jan 31 01:14:50.406: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-fc9bb09c-6a6b-41a2-89d8-7e3e9e6b0b91 95 STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (empty string here) and expect it to be scheduled STEP: Trying to create another pod (pod5) with hostPort 54322 but hostIP 172.18.0.14 on the node where pod4 resides and expect it not to be scheduled STEP: removing the label kubernetes.io/e2e-fc9bb09c-6a6b-41a2-89d8-7e3e9e6b0b91 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-fc9bb09c-6a6b-41a2-89d8-7e3e9e6b0b91 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:19:58.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6490" for this suite.
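------------------------------
The scheduler-predicates test above verifies that a hostPort bound on the wildcard address 0.0.0.0 conflicts with the same hostPort/protocol requested on a concrete node IP, leaving the second pod Pending (hence the roughly five-minute wait before teardown). The sketch below shows the two conflicting port claims, assuming client-go and a hostname node selector in place of the test's random label; the pod names and namespace are illustrative:

package main

import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
)

func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
                panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
                panic(err)
        }

        // pod4 claims hostPort 54322 on the wildcard address 0.0.0.0.
        pod4 := &corev1.Pod{
                ObjectMeta: metav1.ObjectMeta{Name: "pod4"},
                Spec: corev1.PodSpec{
                        // Pin both pods to one node, as the test does with a random label.
                        NodeSelector: map[string]string{"kubernetes.io/hostname": "latest-worker"},
                        Containers: []corev1.Container{{
                                Name:  "agnhost",
                                Image: "k8s.gcr.io/e2e-test-images/agnhost:2.21",
                                Ports: []corev1.ContainerPort{{
                                        ContainerPort: 54322,
                                        HostPort:      54322,
                                        HostIP:        "0.0.0.0",
                                        Protocol:      corev1.ProtocolTCP,
                                }},
                        }},
                },
        }

        // pod5 wants the same port/protocol on one concrete node address; the
        // wildcard claim above already covers it, so pod5 stays Pending.
        pod5 := pod4.DeepCopy()
        pod5.ObjectMeta = metav1.ObjectMeta{Name: "pod5"}
        pod5.Spec.Containers[0].Ports[0].HostIP = "172.18.0.14"

        ctx := context.TODO()
        for _, p := range []*corev1.Pod{pod4, pod5} {
                if _, err := cs.CoreV1().Pods("default").Create(ctx, p, metav1.CreateOptions{}); err != nil {
                        panic(err)
                }
        }
}
------------------------------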
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 • [SLOW TEST:308.471 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":311,"completed":186,"skipped":3565,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:19:58.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1464.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1464.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1464.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1464.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 31 01:20:04.909: INFO: DNS probes using dns-test-52007eef-a195-419f-b62b-5d35e9aebe66 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1464.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1464.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1464.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1464.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 31 01:20:13.402: INFO: File wheezy_udp@dns-test-service-3.dns-1464.svc.cluster.local from pod dns-1464/dns-test-feb66933-5288-45e2-b11d-01a40279d59b contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 31 01:20:13.406: INFO: File jessie_udp@dns-test-service-3.dns-1464.svc.cluster.local from pod dns-1464/dns-test-feb66933-5288-45e2-b11d-01a40279d59b contains 'foo.example.com. ' instead of 'bar.example.com.' 
Jan 31 01:20:13.406: INFO: Lookups using dns-1464/dns-test-feb66933-5288-45e2-b11d-01a40279d59b failed for: [wheezy_udp@dns-test-service-3.dns-1464.svc.cluster.local jessie_udp@dns-test-service-3.dns-1464.svc.cluster.local] Jan 31 01:20:18.412: INFO: File wheezy_udp@dns-test-service-3.dns-1464.svc.cluster.local from pod dns-1464/dns-test-feb66933-5288-45e2-b11d-01a40279d59b contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 31 01:20:18.416: INFO: File jessie_udp@dns-test-service-3.dns-1464.svc.cluster.local from pod dns-1464/dns-test-feb66933-5288-45e2-b11d-01a40279d59b contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 31 01:20:18.416: INFO: Lookups using dns-1464/dns-test-feb66933-5288-45e2-b11d-01a40279d59b failed for: [wheezy_udp@dns-test-service-3.dns-1464.svc.cluster.local jessie_udp@dns-test-service-3.dns-1464.svc.cluster.local] Jan 31 01:20:23.411: INFO: File wheezy_udp@dns-test-service-3.dns-1464.svc.cluster.local from pod dns-1464/dns-test-feb66933-5288-45e2-b11d-01a40279d59b contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 31 01:20:23.414: INFO: File jessie_udp@dns-test-service-3.dns-1464.svc.cluster.local from pod dns-1464/dns-test-feb66933-5288-45e2-b11d-01a40279d59b contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 31 01:20:23.414: INFO: Lookups using dns-1464/dns-test-feb66933-5288-45e2-b11d-01a40279d59b failed for: [wheezy_udp@dns-test-service-3.dns-1464.svc.cluster.local jessie_udp@dns-test-service-3.dns-1464.svc.cluster.local] Jan 31 01:20:28.410: INFO: File wheezy_udp@dns-test-service-3.dns-1464.svc.cluster.local from pod dns-1464/dns-test-feb66933-5288-45e2-b11d-01a40279d59b contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 31 01:20:28.414: INFO: File jessie_udp@dns-test-service-3.dns-1464.svc.cluster.local from pod dns-1464/dns-test-feb66933-5288-45e2-b11d-01a40279d59b contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 31 01:20:28.414: INFO: Lookups using dns-1464/dns-test-feb66933-5288-45e2-b11d-01a40279d59b failed for: [wheezy_udp@dns-test-service-3.dns-1464.svc.cluster.local jessie_udp@dns-test-service-3.dns-1464.svc.cluster.local] Jan 31 01:20:33.412: INFO: File wheezy_udp@dns-test-service-3.dns-1464.svc.cluster.local from pod dns-1464/dns-test-feb66933-5288-45e2-b11d-01a40279d59b contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 31 01:20:33.416: INFO: File jessie_udp@dns-test-service-3.dns-1464.svc.cluster.local from pod dns-1464/dns-test-feb66933-5288-45e2-b11d-01a40279d59b contains 'foo.example.com. ' instead of 'bar.example.com.' 
Jan 31 01:20:33.416: INFO: Lookups using dns-1464/dns-test-feb66933-5288-45e2-b11d-01a40279d59b failed for: [wheezy_udp@dns-test-service-3.dns-1464.svc.cluster.local jessie_udp@dns-test-service-3.dns-1464.svc.cluster.local] Jan 31 01:20:38.415: INFO: DNS probes using dns-test-feb66933-5288-45e2-b11d-01a40279d59b succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1464.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-1464.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1464.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-1464.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 31 01:20:47.315: INFO: DNS probes using dns-test-923dbe65-79c2-4822-ae69-00fca9a42beb succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:20:47.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1464" for this suite. • [SLOW TEST:48.872 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":311,"completed":187,"skipped":3589,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:20:47.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 31 01:20:48.477: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 31 01:20:50.489: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747652848, loc:(*time.Location)(0x79bd420)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747652848, loc:(*time.Location)(0x79bd420)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747652848, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747652848, loc:(*time.Location)(0x79bd420)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 31 01:20:53.519: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:20:53.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-541" for this suite. STEP: Destroying namespace "webhook-541-markers" for this suite. 
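------------------------------
The webhook test above registers admission webhooks that intercept ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, then checks that dummy configurations can still be created and deleted; in other words, webhooks must not be able to mutate or block management of webhook configurations themselves. A hedged client-go sketch of the create-then-delete step for the validating side (the configuration name, service reference, and path are illustrative assumptions):

package main

import (
        "context"

        admissionv1 "k8s.io/api/admissionregistration/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
)

func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
                panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
                panic(err)
        }

        sideEffects := admissionv1.SideEffectClassNone
        failurePolicy := admissionv1.Ignore
        path := "/validate"
        cfg := &admissionv1.ValidatingWebhookConfiguration{
                ObjectMeta: metav1.ObjectMeta{Name: "dummy-validating-webhook"},
                Webhooks: []admissionv1.ValidatingWebhook{{
                        Name:                    "dummy.example.com",
                        SideEffects:             &sideEffects,
                        FailurePolicy:           &failurePolicy,
                        AdmissionReviewVersions: []string{"v1"},
                        ClientConfig: admissionv1.WebhookClientConfig{
                                Service: &admissionv1.ServiceReference{
                                        Namespace: "default",
                                        Name:      "e2e-test-webhook",
                                        Path:      &path,
                                },
                        },
                }},
        }

        ctx := context.TODO()
        api := cs.AdmissionregistrationV1().ValidatingWebhookConfigurations()
        created, err := api.Create(ctx, cfg, metav1.CreateOptions{})
        if err != nil {
                panic(err)
        }
        // The test's point: even with webhooks watching these objects,
        // deleting a webhook configuration must still succeed.
        if err := api.Delete(ctx, created.Name, metav1.DeleteOptions{}); err != nil {
                panic(err)
        }
}
------------------------------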
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:6.259 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":311,"completed":188,"skipped":3601,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:20:53.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:20:54.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3765" for this suite. STEP: Destroying namespace "nspatchtest-1178a5ec-c7f7-4492-9d0b-6058a2a96900-8976" for this suite. 
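------------------------------
The Namespaces test above patches a namespace and then reads it back to confirm the label landed. The same step can be expressed as a single strategic-merge patch via client-go; this sketch assumes an illustrative namespace name and label key/value, since the log does not show the patch body:

package main

import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
)

func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
                panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
                panic(err)
        }

        // Strategic-merge patch that adds one label to the namespace.
        patch := []byte(`{"metadata":{"labels":{"testLabel":"testValue"}}}`)
        ns, err := cs.CoreV1().Namespaces().Patch(context.TODO(),
                "nspatchtest-example", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
        if err != nil {
                panic(err)
        }
        fmt.Println("labels after patch:", ns.Labels)
}
------------------------------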
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":311,"completed":189,"skipped":3617,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:20:54.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-4616 [It] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a new StatefulSet Jan 31 01:20:54.260: INFO: Found 0 stateful pods, waiting for 3 Jan 31 01:21:04.265: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 31 01:21:04.265: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 31 01:21:04.265: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 31 01:21:14.267: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 31 01:21:14.267: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 31 01:21:14.267: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jan 31 01:21:14.277: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4616 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 31 01:21:17.739: INFO: stderr: "I0131 01:21:17.639005 1874 log.go:181] (0xc000140370) (0xc000b181e0) Create stream\nI0131 01:21:17.639069 1874 log.go:181] (0xc000140370) (0xc000b181e0) Stream added, broadcasting: 1\nI0131 01:21:17.641368 1874 log.go:181] (0xc000140370) Reply frame received for 1\nI0131 01:21:17.641417 1874 log.go:181] (0xc000140370) (0xc000e60320) Create stream\nI0131 01:21:17.641429 1874 log.go:181] (0xc000140370) (0xc000e60320) Stream added, broadcasting: 3\nI0131 01:21:17.642285 1874 log.go:181] (0xc000140370) Reply frame received for 3\nI0131 01:21:17.642322 1874 log.go:181] (0xc000140370) (0xc000b18280) Create stream\nI0131 01:21:17.642332 1874 log.go:181] (0xc000140370) (0xc000b18280) Stream added, broadcasting: 5\nI0131 01:21:17.643088 1874 log.go:181] (0xc000140370) Reply frame received for 5\nI0131 01:21:17.705185 1874 log.go:181] (0xc000140370) Data frame received for 5\nI0131 01:21:17.705225 1874 log.go:181] (0xc000b18280) (5) Data frame handling\nI0131 01:21:17.705262 1874 log.go:181] (0xc000b18280) (5) Data frame sent\n+ mv -v 
/usr/local/apache2/htdocs/index.html /tmp/\nI0131 01:21:17.731653 1874 log.go:181] (0xc000140370) Data frame received for 3\nI0131 01:21:17.731671 1874 log.go:181] (0xc000e60320) (3) Data frame handling\nI0131 01:21:17.731678 1874 log.go:181] (0xc000e60320) (3) Data frame sent\nI0131 01:21:17.731682 1874 log.go:181] (0xc000140370) Data frame received for 3\nI0131 01:21:17.731687 1874 log.go:181] (0xc000e60320) (3) Data frame handling\nI0131 01:21:17.732274 1874 log.go:181] (0xc000140370) Data frame received for 5\nI0131 01:21:17.732290 1874 log.go:181] (0xc000b18280) (5) Data frame handling\nI0131 01:21:17.733865 1874 log.go:181] (0xc000140370) Data frame received for 1\nI0131 01:21:17.733892 1874 log.go:181] (0xc000b181e0) (1) Data frame handling\nI0131 01:21:17.733901 1874 log.go:181] (0xc000b181e0) (1) Data frame sent\nI0131 01:21:17.733909 1874 log.go:181] (0xc000140370) (0xc000b181e0) Stream removed, broadcasting: 1\nI0131 01:21:17.733950 1874 log.go:181] (0xc000140370) Go away received\nI0131 01:21:17.734177 1874 log.go:181] (0xc000140370) (0xc000b181e0) Stream removed, broadcasting: 1\nI0131 01:21:17.734199 1874 log.go:181] (0xc000140370) (0xc000e60320) Stream removed, broadcasting: 3\nI0131 01:21:17.734208 1874 log.go:181] (0xc000140370) (0xc000b18280) Stream removed, broadcasting: 5\n" Jan 31 01:21:17.739: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 31 01:21:17.739: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Jan 31 01:21:27.814: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jan 31 01:21:37.834: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4616 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 01:21:38.022: INFO: stderr: "I0131 01:21:37.966476 1893 log.go:181] (0xc0000ce2c0) (0xc0007121e0) Create stream\nI0131 01:21:37.966528 1893 log.go:181] (0xc0000ce2c0) (0xc0007121e0) Stream added, broadcasting: 1\nI0131 01:21:37.970677 1893 log.go:181] (0xc0000ce2c0) Reply frame received for 1\nI0131 01:21:37.970730 1893 log.go:181] (0xc0000ce2c0) (0xc000a921e0) Create stream\nI0131 01:21:37.970745 1893 log.go:181] (0xc0000ce2c0) (0xc000a921e0) Stream added, broadcasting: 3\nI0131 01:21:37.973552 1893 log.go:181] (0xc0000ce2c0) Reply frame received for 3\nI0131 01:21:37.973607 1893 log.go:181] (0xc0000ce2c0) (0xc000712320) Create stream\nI0131 01:21:37.973623 1893 log.go:181] (0xc0000ce2c0) (0xc000712320) Stream added, broadcasting: 5\nI0131 01:21:37.974400 1893 log.go:181] (0xc0000ce2c0) Reply frame received for 5\nI0131 01:21:38.014796 1893 log.go:181] (0xc0000ce2c0) Data frame received for 5\nI0131 01:21:38.014838 1893 log.go:181] (0xc000712320) (5) Data frame handling\nI0131 01:21:38.014861 1893 log.go:181] (0xc000712320) (5) Data frame sent\nI0131 01:21:38.014873 1893 log.go:181] (0xc0000ce2c0) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0131 01:21:38.014884 1893 log.go:181] (0xc000712320) (5) Data frame handling\nI0131 01:21:38.014900 1893 log.go:181] (0xc0000ce2c0) Data frame received for 3\nI0131 01:21:38.014914 1893 log.go:181] (0xc000a921e0) (3) Data frame handling\nI0131 01:21:38.014940 1893 
log.go:181] (0xc000a921e0) (3) Data frame sent\nI0131 01:21:38.014952 1893 log.go:181] (0xc0000ce2c0) Data frame received for 3\nI0131 01:21:38.014959 1893 log.go:181] (0xc000a921e0) (3) Data frame handling\nI0131 01:21:38.016506 1893 log.go:181] (0xc0000ce2c0) Data frame received for 1\nI0131 01:21:38.016527 1893 log.go:181] (0xc0007121e0) (1) Data frame handling\nI0131 01:21:38.016549 1893 log.go:181] (0xc0007121e0) (1) Data frame sent\nI0131 01:21:38.016560 1893 log.go:181] (0xc0000ce2c0) (0xc0007121e0) Stream removed, broadcasting: 1\nI0131 01:21:38.016572 1893 log.go:181] (0xc0000ce2c0) Go away received\nI0131 01:21:38.017103 1893 log.go:181] (0xc0000ce2c0) (0xc0007121e0) Stream removed, broadcasting: 1\nI0131 01:21:38.017122 1893 log.go:181] (0xc0000ce2c0) (0xc000a921e0) Stream removed, broadcasting: 3\nI0131 01:21:38.017133 1893 log.go:181] (0xc0000ce2c0) (0xc000712320) Stream removed, broadcasting: 5\n" Jan 31 01:21:38.022: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 31 01:21:38.022: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 31 01:21:48.051: INFO: Waiting for StatefulSet statefulset-4616/ss2 to complete update Jan 31 01:21:48.051: INFO: Waiting for Pod statefulset-4616/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 31 01:21:48.051: INFO: Waiting for Pod statefulset-4616/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 31 01:21:48.051: INFO: Waiting for Pod statefulset-4616/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 31 01:21:58.060: INFO: Waiting for StatefulSet statefulset-4616/ss2 to complete update Jan 31 01:21:58.061: INFO: Waiting for Pod statefulset-4616/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 31 01:21:58.061: INFO: Waiting for Pod statefulset-4616/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 31 01:21:58.061: INFO: Waiting for Pod statefulset-4616/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 31 01:22:08.059: INFO: Waiting for StatefulSet statefulset-4616/ss2 to complete update Jan 31 01:22:08.059: INFO: Waiting for Pod statefulset-4616/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 31 01:22:08.059: INFO: Waiting for Pod statefulset-4616/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 31 01:22:08.059: INFO: Waiting for Pod statefulset-4616/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 31 01:22:18.060: INFO: Waiting for StatefulSet statefulset-4616/ss2 to complete update Jan 31 01:22:18.060: INFO: Waiting for Pod statefulset-4616/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 31 01:22:18.060: INFO: Waiting for Pod statefulset-4616/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 31 01:22:18.060: INFO: Waiting for Pod statefulset-4616/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 31 01:22:28.059: INFO: Waiting for StatefulSet statefulset-4616/ss2 to complete update Jan 31 01:22:28.059: INFO: Waiting for Pod statefulset-4616/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 31 01:22:28.059: INFO: Waiting for Pod statefulset-4616/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 31 01:22:38.059: INFO: Waiting for StatefulSet statefulset-4616/ss2 to complete update Jan 31 
01:22:38.059: INFO: Waiting for Pod statefulset-4616/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 31 01:22:38.059: INFO: Waiting for Pod statefulset-4616/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 31 01:22:48.060: INFO: Waiting for StatefulSet statefulset-4616/ss2 to complete update Jan 31 01:22:48.060: INFO: Waiting for Pod statefulset-4616/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 31 01:22:48.060: INFO: Waiting for Pod statefulset-4616/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 31 01:22:58.060: INFO: Waiting for StatefulSet statefulset-4616/ss2 to complete update Jan 31 01:22:58.060: INFO: Waiting for Pod statefulset-4616/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 31 01:22:58.060: INFO: Waiting for Pod statefulset-4616/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 31 01:23:08.058: INFO: Waiting for StatefulSet statefulset-4616/ss2 to complete update Jan 31 01:23:08.058: INFO: Waiting for Pod statefulset-4616/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 31 01:23:08.058: INFO: Waiting for Pod statefulset-4616/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 31 01:23:18.060: INFO: Waiting for StatefulSet statefulset-4616/ss2 to complete update Jan 31 01:23:18.060: INFO: Waiting for Pod statefulset-4616/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 31 01:23:18.060: INFO: Waiting for Pod statefulset-4616/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 31 01:23:28.060: INFO: Waiting for StatefulSet statefulset-4616/ss2 to complete update Jan 31 01:23:28.060: INFO: Waiting for Pod statefulset-4616/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Jan 31 01:23:38.066: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4616 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 31 01:23:38.309: INFO: stderr: "I0131 01:23:38.196256 1911 log.go:181] (0xc000218000) (0xc000160000) Create stream\nI0131 01:23:38.196318 1911 log.go:181] (0xc000218000) (0xc000160000) Stream added, broadcasting: 1\nI0131 01:23:38.198224 1911 log.go:181] (0xc000218000) Reply frame received for 1\nI0131 01:23:38.198275 1911 log.go:181] (0xc000218000) (0xc000a7a000) Create stream\nI0131 01:23:38.198296 1911 log.go:181] (0xc000218000) (0xc000a7a000) Stream added, broadcasting: 3\nI0131 01:23:38.199403 1911 log.go:181] (0xc000218000) Reply frame received for 3\nI0131 01:23:38.199435 1911 log.go:181] (0xc000218000) (0xc000a7a0a0) Create stream\nI0131 01:23:38.199442 1911 log.go:181] (0xc000218000) (0xc000a7a0a0) Stream added, broadcasting: 5\nI0131 01:23:38.200297 1911 log.go:181] (0xc000218000) Reply frame received for 5\nI0131 01:23:38.261198 1911 log.go:181] (0xc000218000) Data frame received for 5\nI0131 01:23:38.261226 1911 log.go:181] (0xc000a7a0a0) (5) Data frame handling\nI0131 01:23:38.261244 1911 log.go:181] (0xc000a7a0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0131 01:23:38.299635 1911 log.go:181] (0xc000218000) Data frame received for 5\nI0131 01:23:38.299687 1911 log.go:181] (0xc000a7a0a0) (5) Data frame handling\nI0131 01:23:38.299735 1911 log.go:181] (0xc000218000) Data frame received for 3\nI0131 01:23:38.299764 1911 log.go:181] 
(0xc000a7a000) (3) Data frame handling\nI0131 01:23:38.299788 1911 log.go:181] (0xc000a7a000) (3) Data frame sent\nI0131 01:23:38.299937 1911 log.go:181] (0xc000218000) Data frame received for 3\nI0131 01:23:38.299967 1911 log.go:181] (0xc000a7a000) (3) Data frame handling\nI0131 01:23:38.301795 1911 log.go:181] (0xc000218000) Data frame received for 1\nI0131 01:23:38.301828 1911 log.go:181] (0xc000160000) (1) Data frame handling\nI0131 01:23:38.301848 1911 log.go:181] (0xc000160000) (1) Data frame sent\nI0131 01:23:38.301913 1911 log.go:181] (0xc000218000) (0xc000160000) Stream removed, broadcasting: 1\nI0131 01:23:38.301964 1911 log.go:181] (0xc000218000) Go away received\nI0131 01:23:38.302584 1911 log.go:181] (0xc000218000) (0xc000160000) Stream removed, broadcasting: 1\nI0131 01:23:38.302604 1911 log.go:181] (0xc000218000) (0xc000a7a000) Stream removed, broadcasting: 3\nI0131 01:23:38.302615 1911 log.go:181] (0xc000218000) (0xc000a7a0a0) Stream removed, broadcasting: 5\n" Jan 31 01:23:38.309: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 31 01:23:38.309: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 31 01:23:48.346: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jan 31 01:23:58.401: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-4616 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 31 01:23:58.621: INFO: stderr: "I0131 01:23:58.554419 1929 log.go:181] (0xc0001b1080) (0xc00063c500) Create stream\nI0131 01:23:58.554501 1929 log.go:181] (0xc0001b1080) (0xc00063c500) Stream added, broadcasting: 1\nI0131 01:23:58.556521 1929 log.go:181] (0xc0001b1080) Reply frame received for 1\nI0131 01:23:58.556570 1929 log.go:181] (0xc0001b1080) (0xc000737cc0) Create stream\nI0131 01:23:58.556591 1929 log.go:181] (0xc0001b1080) (0xc000737cc0) Stream added, broadcasting: 3\nI0131 01:23:58.557840 1929 log.go:181] (0xc0001b1080) Reply frame received for 3\nI0131 01:23:58.557880 1929 log.go:181] (0xc0001b1080) (0xc0004310e0) Create stream\nI0131 01:23:58.557889 1929 log.go:181] (0xc0001b1080) (0xc0004310e0) Stream added, broadcasting: 5\nI0131 01:23:58.558929 1929 log.go:181] (0xc0001b1080) Reply frame received for 5\nI0131 01:23:58.612811 1929 log.go:181] (0xc0001b1080) Data frame received for 5\nI0131 01:23:58.612977 1929 log.go:181] (0xc0004310e0) (5) Data frame handling\nI0131 01:23:58.612996 1929 log.go:181] (0xc0004310e0) (5) Data frame sent\nI0131 01:23:58.613016 1929 log.go:181] (0xc0001b1080) Data frame received for 5\nI0131 01:23:58.613027 1929 log.go:181] (0xc0004310e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0131 01:23:58.613072 1929 log.go:181] (0xc0001b1080) Data frame received for 3\nI0131 01:23:58.613106 1929 log.go:181] (0xc000737cc0) (3) Data frame handling\nI0131 01:23:58.613160 1929 log.go:181] (0xc000737cc0) (3) Data frame sent\nI0131 01:23:58.613175 1929 log.go:181] (0xc0001b1080) Data frame received for 3\nI0131 01:23:58.613185 1929 log.go:181] (0xc000737cc0) (3) Data frame handling\nI0131 01:23:58.614357 1929 log.go:181] (0xc0001b1080) Data frame received for 1\nI0131 01:23:58.614387 1929 log.go:181] (0xc00063c500) (1) Data frame handling\nI0131 01:23:58.614403 1929 log.go:181] (0xc00063c500) (1) Data frame sent\nI0131 01:23:58.614418 1929 log.go:181] 
(0xc0001b1080) (0xc00063c500) Stream removed, broadcasting: 1\nI0131 01:23:58.614437 1929 log.go:181] (0xc0001b1080) Go away received\nI0131 01:23:58.614778 1929 log.go:181] (0xc0001b1080) (0xc00063c500) Stream removed, broadcasting: 1\nI0131 01:23:58.614799 1929 log.go:181] (0xc0001b1080) (0xc000737cc0) Stream removed, broadcasting: 3\nI0131 01:23:58.614810 1929 log.go:181] (0xc0001b1080) (0xc0004310e0) Stream removed, broadcasting: 5\n" Jan 31 01:23:58.621: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 31 01:23:58.621: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 31 01:24:08.642: INFO: Waiting for StatefulSet statefulset-4616/ss2 to complete update Jan 31 01:24:08.642: INFO: Waiting for Pod statefulset-4616/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 31 01:24:08.642: INFO: Waiting for Pod statefulset-4616/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 31 01:24:08.642: INFO: Waiting for Pod statefulset-4616/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 31 01:24:18.651: INFO: Waiting for StatefulSet statefulset-4616/ss2 to complete update Jan 31 01:24:18.652: INFO: Waiting for Pod statefulset-4616/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 31 01:24:18.652: INFO: Waiting for Pod statefulset-4616/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 31 01:24:28.650: INFO: Waiting for StatefulSet statefulset-4616/ss2 to complete update Jan 31 01:24:28.650: INFO: Waiting for Pod statefulset-4616/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 31 01:24:38.651: INFO: Waiting for StatefulSet statefulset-4616/ss2 to complete update Jan 31 01:24:38.651: INFO: Waiting for Pod statefulset-4616/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 31 01:24:48.652: INFO: Waiting for StatefulSet statefulset-4616/ss2 to complete update Jan 31 01:24:48.653: INFO: Waiting for Pod statefulset-4616/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 31 01:24:58.650: INFO: Waiting for StatefulSet statefulset-4616/ss2 to complete update Jan 31 01:24:58.650: INFO: Waiting for Pod statefulset-4616/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 31 01:25:08.652: INFO: Waiting for StatefulSet statefulset-4616/ss2 to complete update Jan 31 01:25:08.652: INFO: Waiting for Pod statefulset-4616/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 31 01:25:18.651: INFO: Waiting for StatefulSet statefulset-4616/ss2 to complete update Jan 31 01:25:18.651: INFO: Waiting for Pod statefulset-4616/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jan 31 01:25:28.652: INFO: Deleting all statefulset in ns statefulset-4616 Jan 31 01:25:28.654: INFO: Scaling statefulset ss2 to 0 Jan 31 01:26:48.677: INFO: Waiting for statefulset status.replicas updated to 0 Jan 31 01:26:48.679: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:26:48.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"statefulset-4616" for this suite. • [SLOW TEST:354.659 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":311,"completed":190,"skipped":3630,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:26:48.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:26:55.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6118" for this suite. • [SLOW TEST:7.141 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":311,"completed":191,"skipped":3717,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:26:55.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating server pod server in namespace prestop-3996 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-3996 STEP: Deleting pre-stop pod Jan 31 01:27:09.184: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:27:09.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-3996" for this suite. • [SLOW TEST:13.325 seconds] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":311,"completed":192,"skipped":3749,"failed":0} SS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:27:09.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Jan 31 01:27:09.615: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Jan 31 01:27:09.666: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Jan 31 01:27:09.666: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Jan 31 01:27:09.698: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Jan 31 01:27:09.698: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Jan 31 01:27:09.739: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Jan 31 01:27:09.740: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Jan 31 01:27:16.831: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:27:16.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-8418" for this suite. 
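The defaulting behaviour verified above can be reproduced by hand. A minimal sketch, assuming kubectl access to a disposable namespace; the object name is illustrative, and the values are the ones the test logged (209715200 bytes = 200Mi, 214748364800 bytes = 200Gi, and so on):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: LimitRange
metadata:
  name: example-limitrange   # illustrative name, not from the run
spec:
  limits:
  - type: Container
    defaultRequest:          # injected into pods that declare no requests
      cpu: 100m
      memory: 200Mi
      ephemeral-storage: 200Gi
    default:                 # injected into pods that declare no limits
      cpu: 500m
      memory: 500Mi
      ephemeral-storage: 500Gi
EOF

A pod created in the same namespace with an empty resources stanza should then come back with these values filled in:

kubectl get pod <pod-name> -o jsonpath='{.spec.containers[0].resources}'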
• [SLOW TEST:7.603 seconds] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":311,"completed":193,"skipped":3751,"failed":0} SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:27:16.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating configMap with name projected-configmap-test-volume-8ded4b3f-8f12-4f7a-b7c1-50dab469262a STEP: Creating a pod to test consume configMaps Jan 31 01:27:16.993: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ab96da1d-a3c1-4768-84d5-8119b49a80be" in namespace "projected-9495" to be "Succeeded or Failed" Jan 31 01:27:16.996: INFO: Pod "pod-projected-configmaps-ab96da1d-a3c1-4768-84d5-8119b49a80be": Phase="Pending", Reason="", readiness=false. Elapsed: 3.180624ms Jan 31 01:27:19.000: INFO: Pod "pod-projected-configmaps-ab96da1d-a3c1-4768-84d5-8119b49a80be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007073388s Jan 31 01:27:21.005: INFO: Pod "pod-projected-configmaps-ab96da1d-a3c1-4768-84d5-8119b49a80be": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011903184s Jan 31 01:27:23.063: INFO: Pod "pod-projected-configmaps-ab96da1d-a3c1-4768-84d5-8119b49a80be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.069898199s STEP: Saw pod success Jan 31 01:27:23.063: INFO: Pod "pod-projected-configmaps-ab96da1d-a3c1-4768-84d5-8119b49a80be" satisfied condition "Succeeded or Failed" Jan 31 01:27:23.074: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-ab96da1d-a3c1-4768-84d5-8119b49a80be container agnhost-container: STEP: delete the pod Jan 31 01:27:23.614: INFO: Waiting for pod pod-projected-configmaps-ab96da1d-a3c1-4768-84d5-8119b49a80be to disappear Jan 31 01:27:23.617: INFO: Pod pod-projected-configmaps-ab96da1d-a3c1-4768-84d5-8119b49a80be no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:27:23.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9495" for this suite. 
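The consumption path the test exercises can be sketched outside the framework. Assuming kubectl access; all names and the busybox image are illustrative (the suite itself runs an agnhost container):

kubectl create configmap example-config --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: example-projected-pod
spec:
  securityContext:
    runAsUser: 1000          # run as non-root, as the test name requires
  containers:
  - name: test
    image: busybox
    command: ["cat", "/etc/projected/data-1"]
    volumeMounts:
    - name: config
      mountPath: /etc/projected
  volumes:
  - name: config
    projected:               # projected volume wrapping the configMap source
      sources:
      - configMap:
          name: example-config
  restartPolicy: Never
EOF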
• [SLOW TEST:6.772 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":311,"completed":194,"skipped":3756,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:27:23.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check if Kubernetes control plane services is included in cluster-info [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: validating cluster-info Jan 31 01:27:24.168: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-4214 cluster-info' Jan 31 01:27:24.257: INFO: stderr: "" Jan 31 01:27:24.257: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:36371\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:27:24.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4214" for this suite. 
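The validation above boils down to two stock kubectl commands, with no assumptions beyond a reachable cluster:

kubectl cluster-info
# the output's own suggestion, for a full diagnostic dump:
kubectl cluster-info dump --output-directory=/tmp/cluster-state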
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":311,"completed":195,"skipped":3773,"failed":0} SS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:27:24.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:27:30.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2156" for this suite. • [SLOW TEST:5.955 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":311,"completed":196,"skipped":3775,"failed":0} [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:27:30.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating configMap with name projected-configmap-test-volume-map-a8aa0289-8264-4e95-a830-530f4b3bc2b0 STEP: Creating a pod to test consume configMaps Jan 31 01:27:30.363: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cfb9cf85-ed6d-4424-a6cd-6d99a956907b" in namespace "projected-3843" to be "Succeeded or Failed" Jan 31 01:27:30.383: INFO: Pod "pod-projected-configmaps-cfb9cf85-ed6d-4424-a6cd-6d99a956907b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 19.078683ms Jan 31 01:27:32.388: INFO: Pod "pod-projected-configmaps-cfb9cf85-ed6d-4424-a6cd-6d99a956907b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02432877s Jan 31 01:27:34.392: INFO: Pod "pod-projected-configmaps-cfb9cf85-ed6d-4424-a6cd-6d99a956907b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028960422s STEP: Saw pod success Jan 31 01:27:34.393: INFO: Pod "pod-projected-configmaps-cfb9cf85-ed6d-4424-a6cd-6d99a956907b" satisfied condition "Succeeded or Failed" Jan 31 01:27:34.396: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-cfb9cf85-ed6d-4424-a6cd-6d99a956907b container agnhost-container: STEP: delete the pod Jan 31 01:27:34.505: INFO: Waiting for pod pod-projected-configmaps-cfb9cf85-ed6d-4424-a6cd-6d99a956907b to disappear Jan 31 01:27:34.518: INFO: Pod pod-projected-configmaps-cfb9cf85-ed6d-4424-a6cd-6d99a956907b no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:27:34.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3843" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":197,"skipped":3775,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:27:34.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating secret with name secret-test-783b11ec-052a-46e5-9bba-3cda5474f0f1 STEP: Creating a pod to test consume secrets Jan 31 01:27:34.652: INFO: Waiting up to 5m0s for pod "pod-secrets-c76c476d-577f-4c49-afcb-27cd00bba718" in namespace "secrets-1865" to be "Succeeded or Failed" Jan 31 01:27:34.678: INFO: Pod "pod-secrets-c76c476d-577f-4c49-afcb-27cd00bba718": Phase="Pending", Reason="", readiness=false. Elapsed: 26.62223ms Jan 31 01:27:36.682: INFO: Pod "pod-secrets-c76c476d-577f-4c49-afcb-27cd00bba718": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029906757s Jan 31 01:27:38.686: INFO: Pod "pod-secrets-c76c476d-577f-4c49-afcb-27cd00bba718": Phase="Running", Reason="", readiness=true. Elapsed: 4.034662695s Jan 31 01:27:40.690: INFO: Pod "pod-secrets-c76c476d-577f-4c49-afcb-27cd00bba718": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.038880436s STEP: Saw pod success Jan 31 01:27:40.691: INFO: Pod "pod-secrets-c76c476d-577f-4c49-afcb-27cd00bba718" satisfied condition "Succeeded or Failed" Jan 31 01:27:40.693: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-c76c476d-577f-4c49-afcb-27cd00bba718 container secret-volume-test: STEP: delete the pod Jan 31 01:27:40.743: INFO: Waiting for pod pod-secrets-c76c476d-577f-4c49-afcb-27cd00bba718 to disappear Jan 31 01:27:40.757: INFO: Pod pod-secrets-c76c476d-577f-4c49-afcb-27cd00bba718 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:27:40.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1865" for this suite. • [SLOW TEST:6.241 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":198,"skipped":3786,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:27:40.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:27:40.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1740" for this suite. 
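QOS class assignment follows directly from the resources stanza: when every container's requests equal its limits, the pod is classed Guaranteed. A minimal sketch (name, image, and values illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: example-guaranteed
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:                # requests == limits => Guaranteed QOS class
        cpu: 100m
        memory: 100Mi
EOF

kubectl get pod example-guaranteed -o jsonpath='{.status.qosClass}'   # expect: Guaranteed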
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":311,"completed":199,"skipped":3797,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:27:40.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Jan 31 01:27:45.067: INFO: Deleting pod "var-expansion-3e401872-9c24-4564-9438-8438ea90d3e1" in namespace "var-expansion-2016" Jan 31 01:27:45.073: INFO: Wait up to 5m0s for pod "var-expansion-3e401872-9c24-4564-9438-8438ea90d3e1" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:28:01.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2016" for this suite. • [SLOW TEST:20.207 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":311,"completed":200,"skipped":3832,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:28:01.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test downward api env vars Jan 31 01:28:01.227: INFO: Waiting up to 5m0s for pod "downward-api-772f7f25-bcdd-43f8-b634-a5528ba57eef" in namespace "downward-api-1576" to be "Succeeded or Failed" Jan 31 01:28:01.243: INFO: Pod "downward-api-772f7f25-bcdd-43f8-b634-a5528ba57eef": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.896375ms Jan 31 01:28:03.314: INFO: Pod "downward-api-772f7f25-bcdd-43f8-b634-a5528ba57eef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087517229s Jan 31 01:28:05.318: INFO: Pod "downward-api-772f7f25-bcdd-43f8-b634-a5528ba57eef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.091585946s STEP: Saw pod success Jan 31 01:28:05.318: INFO: Pod "downward-api-772f7f25-bcdd-43f8-b634-a5528ba57eef" satisfied condition "Succeeded or Failed" Jan 31 01:28:05.321: INFO: Trying to get logs from node latest-worker2 pod downward-api-772f7f25-bcdd-43f8-b634-a5528ba57eef container dapi-container: STEP: delete the pod Jan 31 01:28:05.352: INFO: Waiting for pod downward-api-772f7f25-bcdd-43f8-b634-a5528ba57eef to disappear Jan 31 01:28:05.362: INFO: Pod downward-api-772f7f25-bcdd-43f8-b634-a5528ba57eef no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:28:05.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1576" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":311,"completed":201,"skipped":3848,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:28:05.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 31 01:28:06.218: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 31 01:28:08.229: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747653286, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747653286, loc:(*time.Location)(0x79bd420)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747653286, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747653286, loc:(*time.Location)(0x79bd420)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying 
the service has paired with the endpoint Jan 31 01:28:11.262: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:28:11.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4278" for this suite. STEP: Destroying namespace "webhook-4278-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:6.298 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":311,"completed":202,"skipped":3875,"failed":0} SSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:28:11.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test downward api env vars Jan 31 01:28:11.797: INFO: Waiting up to 5m0s for pod "downward-api-15040af4-1338-4a4d-a48f-a07e7537fc62" in namespace "downward-api-452" to be "Succeeded or Failed" Jan 31 01:28:11.800: INFO: Pod "downward-api-15040af4-1338-4a4d-a48f-a07e7537fc62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.867897ms Jan 31 01:28:13.871: INFO: Pod "downward-api-15040af4-1338-4a4d-a48f-a07e7537fc62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07383902s Jan 31 01:28:15.876: INFO: Pod "downward-api-15040af4-1338-4a4d-a48f-a07e7537fc62": Phase="Running", Reason="", readiness=true. Elapsed: 4.078846811s Jan 31 01:28:17.880: INFO: Pod "downward-api-15040af4-1338-4a4d-a48f-a07e7537fc62": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.082694006s STEP: Saw pod success Jan 31 01:28:17.880: INFO: Pod "downward-api-15040af4-1338-4a4d-a48f-a07e7537fc62" satisfied condition "Succeeded or Failed" Jan 31 01:28:17.883: INFO: Trying to get logs from node latest-worker2 pod downward-api-15040af4-1338-4a4d-a48f-a07e7537fc62 container dapi-container: STEP: delete the pod Jan 31 01:28:17.903: INFO: Waiting for pod downward-api-15040af4-1338-4a4d-a48f-a07e7537fc62 to disappear Jan 31 01:28:17.908: INFO: Pod downward-api-15040af4-1338-4a4d-a48f-a07e7537fc62 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:28:17.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-452" for this suite. • [SLOW TEST:6.245 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":311,"completed":203,"skipped":3881,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:28:17.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should get a host IP [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating pod Jan 31 01:28:22.074: INFO: Pod pod-hostip-a2688658-bf55-40a0-998f-63de164837cc has hostIP: 172.18.0.16 [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:28:22.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8337" for this suite. 
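The hostIP the test reads is plain pod status; it can be fetched the same way with kubectl (pod name illustrative), or exposed to the container through the downward API:

kubectl get pod <pod-name> -o jsonpath='{.status.hostIP}'
# downward API equivalent inside a pod spec:
#   env:
#   - name: HOST_IP
#     valueFrom:
#       fieldRef:
#         fieldPath: status.hostIP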
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":311,"completed":204,"skipped":3903,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:28:22.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating the pod Jan 31 01:28:26.736: INFO: Successfully updated pod "labelsupdate31af32c7-2c72-4024-923c-7ce7ba816781" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:28:28.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5526" for this suite. • [SLOW TEST:6.692 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":311,"completed":205,"skipped":3911,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:28:28.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating projection with secret that has name projected-secret-test-map-fd9761c2-62ba-4abf-a00b-c08a137bd453 STEP: Creating a pod to test consume secrets Jan 31 01:28:28.904: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9a7e00cb-3293-4ba6-ba8d-7e2a957d4937" in namespace "projected-6118" to be "Succeeded or Failed" Jan 31 01:28:28.930: INFO: Pod "pod-projected-secrets-9a7e00cb-3293-4ba6-ba8d-7e2a957d4937": Phase="Pending", Reason="", readiness=false. 
Elapsed: 25.694338ms Jan 31 01:28:31.170: INFO: Pod "pod-projected-secrets-9a7e00cb-3293-4ba6-ba8d-7e2a957d4937": Phase="Pending", Reason="", readiness=false. Elapsed: 2.26609951s Jan 31 01:28:33.174: INFO: Pod "pod-projected-secrets-9a7e00cb-3293-4ba6-ba8d-7e2a957d4937": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.269753272s STEP: Saw pod success Jan 31 01:28:33.174: INFO: Pod "pod-projected-secrets-9a7e00cb-3293-4ba6-ba8d-7e2a957d4937" satisfied condition "Succeeded or Failed" Jan 31 01:28:33.176: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-9a7e00cb-3293-4ba6-ba8d-7e2a957d4937 container projected-secret-volume-test: STEP: delete the pod Jan 31 01:28:33.208: INFO: Waiting for pod pod-projected-secrets-9a7e00cb-3293-4ba6-ba8d-7e2a957d4937 to disappear Jan 31 01:28:33.214: INFO: Pod pod-projected-secrets-9a7e00cb-3293-4ba6-ba8d-7e2a957d4937 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:28:33.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6118" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":311,"completed":206,"skipped":3929,"failed":0} ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:28:33.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 31 01:28:33.825: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 31 01:28:35.982: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747653313, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747653313, loc:(*time.Location)(0x79bd420)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747653314, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747653313, loc:(*time.Location)(0x79bd420)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 31 
01:28:37.986: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747653313, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747653313, loc:(*time.Location)(0x79bd420)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747653314, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747653313, loc:(*time.Location)(0x79bd420)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 31 01:28:41.020: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:28:41.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5338" for this suite. STEP: Destroying namespace "webhook-5338-markers" for this suite. 
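The patch step exercised above has a direct kubectl equivalent. A sketch, assuming an existing configuration; the object name is illustrative, and the JSON-patch path presumes the layout the test uses (first webhook, first rule):

kubectl get validatingwebhookconfigurations
kubectl patch validatingwebhookconfiguration example-webhook-config \
  --type=json \
  -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["UPDATE"]}]'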
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:8.152 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":311,"completed":207,"skipped":3929,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:28:41.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jan 31 01:28:45.975: INFO: Successfully updated pod "pod-update-2c20c177-8326-4038-b7c1-9b7b4df4de3b" STEP: verifying the updated pod is in kubernetes Jan 31 01:28:45.993: INFO: Pod update OK [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:28:45.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-890" for this suite. 
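The log does not show which field the update touched; a label change is the simplest case and can be reproduced either way below (pod name illustrative):

kubectl label pod example-pod time=morning --overwrite
# equivalent strategic-merge patch:
kubectl patch pod example-pod -p '{"metadata":{"labels":{"time":"evening"}}}'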
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":311,"completed":208,"skipped":3952,"failed":0} SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:28:46.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating secret with name secret-test-98f286e2-f33f-4991-8b92-65d6194fad60 STEP: Creating a pod to test consume secrets Jan 31 01:28:46.104: INFO: Waiting up to 5m0s for pod "pod-secrets-c658a84e-d185-47fa-8dd8-6e0bcd2d0ca2" in namespace "secrets-7993" to be "Succeeded or Failed" Jan 31 01:28:46.142: INFO: Pod "pod-secrets-c658a84e-d185-47fa-8dd8-6e0bcd2d0ca2": Phase="Pending", Reason="", readiness=false. Elapsed: 38.873012ms Jan 31 01:28:48.147: INFO: Pod "pod-secrets-c658a84e-d185-47fa-8dd8-6e0bcd2d0ca2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04324904s Jan 31 01:28:50.151: INFO: Pod "pod-secrets-c658a84e-d185-47fa-8dd8-6e0bcd2d0ca2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047679516s STEP: Saw pod success Jan 31 01:28:50.151: INFO: Pod "pod-secrets-c658a84e-d185-47fa-8dd8-6e0bcd2d0ca2" satisfied condition "Succeeded or Failed" Jan 31 01:28:50.155: INFO: Trying to get logs from node latest-worker pod pod-secrets-c658a84e-d185-47fa-8dd8-6e0bcd2d0ca2 container secret-volume-test: STEP: delete the pod Jan 31 01:28:50.246: INFO: Waiting for pod pod-secrets-c658a84e-d185-47fa-8dd8-6e0bcd2d0ca2 to disappear Jan 31 01:28:50.269: INFO: Pod pod-secrets-c658a84e-d185-47fa-8dd8-6e0bcd2d0ca2 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:28:50.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7993" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":311,"completed":209,"skipped":3954,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:28:50.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating service in namespace services-4404 STEP: creating service affinity-nodeport-transition in namespace services-4404 STEP: creating replication controller affinity-nodeport-transition in namespace services-4404 I0131 01:28:50.512275 7 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-4404, replica count: 3 I0131 01:28:53.562719 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0131 01:28:56.562959 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 31 01:28:56.570: INFO: Creating new exec pod Jan 31 01:29:01.614: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-4404 exec execpod-affinityflx4v -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' Jan 31 01:29:01.866: INFO: stderr: "I0131 01:29:01.758494 1964 log.go:181] (0xc00003af20) (0xc000283400) Create stream\nI0131 01:29:01.758593 1964 log.go:181] (0xc00003af20) (0xc000283400) Stream added, broadcasting: 1\nI0131 01:29:01.762873 1964 log.go:181] (0xc00003af20) Reply frame received for 1\nI0131 01:29:01.762920 1964 log.go:181] (0xc00003af20) (0xc00030c280) Create stream\nI0131 01:29:01.762934 1964 log.go:181] (0xc00003af20) (0xc00030c280) Stream added, broadcasting: 3\nI0131 01:29:01.763927 1964 log.go:181] (0xc00003af20) Reply frame received for 3\nI0131 01:29:01.763960 1964 log.go:181] (0xc00003af20) (0xc0003a4e60) Create stream\nI0131 01:29:01.763973 1964 log.go:181] (0xc00003af20) (0xc0003a4e60) Stream added, broadcasting: 5\nI0131 01:29:01.765036 1964 log.go:181] (0xc00003af20) Reply frame received for 5\nI0131 01:29:01.859082 1964 log.go:181] (0xc00003af20) Data frame received for 3\nI0131 01:29:01.859140 1964 log.go:181] (0xc00030c280) (3) Data frame handling\nI0131 01:29:01.859217 1964 log.go:181] (0xc00003af20) Data frame received for 5\nI0131 01:29:01.859236 1964 log.go:181] (0xc0003a4e60) (5) Data frame handling\nI0131 01:29:01.859249 1964 log.go:181] (0xc0003a4e60) (5) Data frame sent\nI0131 01:29:01.859255 1964 log.go:181] (0xc00003af20) Data frame received for 5\nI0131 01:29:01.859260 1964 
log.go:181] (0xc0003a4e60) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\nI0131 01:29:01.860689 1964 log.go:181] (0xc00003af20) Data frame received for 1\nI0131 01:29:01.860714 1964 log.go:181] (0xc000283400) (1) Data frame handling\nI0131 01:29:01.860727 1964 log.go:181] (0xc000283400) (1) Data frame sent\nI0131 01:29:01.860814 1964 log.go:181] (0xc00003af20) (0xc000283400) Stream removed, broadcasting: 1\nI0131 01:29:01.860971 1964 log.go:181] (0xc00003af20) Go away received\nI0131 01:29:01.861219 1964 log.go:181] (0xc00003af20) (0xc000283400) Stream removed, broadcasting: 1\nI0131 01:29:01.861234 1964 log.go:181] (0xc00003af20) (0xc00030c280) Stream removed, broadcasting: 3\nI0131 01:29:01.861241 1964 log.go:181] (0xc00003af20) (0xc0003a4e60) Stream removed, broadcasting: 5\n" Jan 31 01:29:01.866: INFO: stdout: "" Jan 31 01:29:01.867: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-4404 exec execpod-affinityflx4v -- /bin/sh -x -c nc -zv -t -w 2 10.96.128.96 80' Jan 31 01:29:02.075: INFO: stderr: "I0131 01:29:02.001323 1980 log.go:181] (0xc0008a2000) (0xc00091c0a0) Create stream\nI0131 01:29:02.001400 1980 log.go:181] (0xc0008a2000) (0xc00091c0a0) Stream added, broadcasting: 1\nI0131 01:29:02.003721 1980 log.go:181] (0xc0008a2000) Reply frame received for 1\nI0131 01:29:02.003777 1980 log.go:181] (0xc0008a2000) (0xc00091c500) Create stream\nI0131 01:29:02.003786 1980 log.go:181] (0xc0008a2000) (0xc00091c500) Stream added, broadcasting: 3\nI0131 01:29:02.005538 1980 log.go:181] (0xc0008a2000) Reply frame received for 3\nI0131 01:29:02.005582 1980 log.go:181] (0xc0008a2000) (0xc000a240a0) Create stream\nI0131 01:29:02.005600 1980 log.go:181] (0xc0008a2000) (0xc000a240a0) Stream added, broadcasting: 5\nI0131 01:29:02.006804 1980 log.go:181] (0xc0008a2000) Reply frame received for 5\nI0131 01:29:02.068134 1980 log.go:181] (0xc0008a2000) Data frame received for 5\nI0131 01:29:02.068172 1980 log.go:181] (0xc000a240a0) (5) Data frame handling\nI0131 01:29:02.068181 1980 log.go:181] (0xc000a240a0) (5) Data frame sent\nI0131 01:29:02.068188 1980 log.go:181] (0xc0008a2000) Data frame received for 5\nI0131 01:29:02.068194 1980 log.go:181] (0xc000a240a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.128.96 80\nConnection to 10.96.128.96 80 port [tcp/http] succeeded!\nI0131 01:29:02.068205 1980 log.go:181] (0xc0008a2000) Data frame received for 3\nI0131 01:29:02.068284 1980 log.go:181] (0xc00091c500) (3) Data frame handling\nI0131 01:29:02.069647 1980 log.go:181] (0xc0008a2000) Data frame received for 1\nI0131 01:29:02.069667 1980 log.go:181] (0xc00091c0a0) (1) Data frame handling\nI0131 01:29:02.069677 1980 log.go:181] (0xc00091c0a0) (1) Data frame sent\nI0131 01:29:02.069689 1980 log.go:181] (0xc0008a2000) (0xc00091c0a0) Stream removed, broadcasting: 1\nI0131 01:29:02.069966 1980 log.go:181] (0xc0008a2000) Go away received\nI0131 01:29:02.070066 1980 log.go:181] (0xc0008a2000) (0xc00091c0a0) Stream removed, broadcasting: 1\nI0131 01:29:02.070080 1980 log.go:181] (0xc0008a2000) (0xc00091c500) Stream removed, broadcasting: 3\nI0131 01:29:02.070087 1980 log.go:181] (0xc0008a2000) (0xc000a240a0) Stream removed, broadcasting: 5\n" Jan 31 01:29:02.075: INFO: stdout: "" Jan 31 01:29:02.075: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-4404 exec 
execpod-affinityflx4v -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 30030' Jan 31 01:29:02.282: INFO: stderr: "I0131 01:29:02.207940 1998 log.go:181] (0xc00003a0b0) (0xc0001d7900) Create stream\nI0131 01:29:02.208010 1998 log.go:181] (0xc00003a0b0) (0xc0001d7900) Stream added, broadcasting: 1\nI0131 01:29:02.210967 1998 log.go:181] (0xc00003a0b0) Reply frame received for 1\nI0131 01:29:02.211007 1998 log.go:181] (0xc00003a0b0) (0xc000b02460) Create stream\nI0131 01:29:02.211017 1998 log.go:181] (0xc00003a0b0) (0xc000b02460) Stream added, broadcasting: 3\nI0131 01:29:02.211785 1998 log.go:181] (0xc00003a0b0) Reply frame received for 3\nI0131 01:29:02.211818 1998 log.go:181] (0xc00003a0b0) (0xc0007703c0) Create stream\nI0131 01:29:02.211827 1998 log.go:181] (0xc00003a0b0) (0xc0007703c0) Stream added, broadcasting: 5\nI0131 01:29:02.212455 1998 log.go:181] (0xc00003a0b0) Reply frame received for 5\nI0131 01:29:02.273528 1998 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0131 01:29:02.273608 1998 log.go:181] (0xc000b02460) (3) Data frame handling\nI0131 01:29:02.273675 1998 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0131 01:29:02.273787 1998 log.go:181] (0xc0007703c0) (5) Data frame handling\nI0131 01:29:02.273825 1998 log.go:181] (0xc0007703c0) (5) Data frame sent\nI0131 01:29:02.273844 1998 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0131 01:29:02.273854 1998 log.go:181] (0xc0007703c0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.14 30030\nConnection to 172.18.0.14 30030 port [tcp/30030] succeeded!\nI0131 01:29:02.275361 1998 log.go:181] (0xc00003a0b0) Data frame received for 1\nI0131 01:29:02.275396 1998 log.go:181] (0xc0001d7900) (1) Data frame handling\nI0131 01:29:02.275417 1998 log.go:181] (0xc0001d7900) (1) Data frame sent\nI0131 01:29:02.275438 1998 log.go:181] (0xc00003a0b0) (0xc0001d7900) Stream removed, broadcasting: 1\nI0131 01:29:02.275469 1998 log.go:181] (0xc00003a0b0) Go away received\nI0131 01:29:02.275961 1998 log.go:181] (0xc00003a0b0) (0xc0001d7900) Stream removed, broadcasting: 1\nI0131 01:29:02.276002 1998 log.go:181] (0xc00003a0b0) (0xc000b02460) Stream removed, broadcasting: 3\nI0131 01:29:02.276029 1998 log.go:181] (0xc00003a0b0) (0xc0007703c0) Stream removed, broadcasting: 5\n" Jan 31 01:29:02.282: INFO: stdout: "" Jan 31 01:29:02.283: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-4404 exec execpod-affinityflx4v -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.16 30030' Jan 31 01:29:02.495: INFO: stderr: "I0131 01:29:02.420236 2016 log.go:181] (0xc000e94000) (0xc000166140) Create stream\nI0131 01:29:02.420313 2016 log.go:181] (0xc000e94000) (0xc000166140) Stream added, broadcasting: 1\nI0131 01:29:02.423649 2016 log.go:181] (0xc000e94000) Reply frame received for 1\nI0131 01:29:02.423680 2016 log.go:181] (0xc000e94000) (0xc0001663c0) Create stream\nI0131 01:29:02.423687 2016 log.go:181] (0xc000e94000) (0xc0001663c0) Stream added, broadcasting: 3\nI0131 01:29:02.424587 2016 log.go:181] (0xc000e94000) Reply frame received for 3\nI0131 01:29:02.424627 2016 log.go:181] (0xc000e94000) (0xc000baa500) Create stream\nI0131 01:29:02.424638 2016 log.go:181] (0xc000e94000) (0xc000baa500) Stream added, broadcasting: 5\nI0131 01:29:02.425375 2016 log.go:181] (0xc000e94000) Reply frame received for 5\nI0131 01:29:02.488792 2016 log.go:181] (0xc000e94000) Data frame received for 3\nI0131 01:29:02.488829 2016 log.go:181] (0xc0001663c0) (3) Data frame handling\nI0131 
01:29:02.488983 2016 log.go:181] (0xc000e94000) Data frame received for 5\nI0131 01:29:02.489026 2016 log.go:181] (0xc000baa500) (5) Data frame handling\nI0131 01:29:02.489059 2016 log.go:181] (0xc000baa500) (5) Data frame sent\nI0131 01:29:02.489083 2016 log.go:181] (0xc000e94000) Data frame received for 5\nI0131 01:29:02.489104 2016 log.go:181] (0xc000baa500) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.16 30030\nConnection to 172.18.0.16 30030 port [tcp/30030] succeeded!\nI0131 01:29:02.490336 2016 log.go:181] (0xc000e94000) Data frame received for 1\nI0131 01:29:02.490375 2016 log.go:181] (0xc000166140) (1) Data frame handling\nI0131 01:29:02.490399 2016 log.go:181] (0xc000166140) (1) Data frame sent\nI0131 01:29:02.490427 2016 log.go:181] (0xc000e94000) (0xc000166140) Stream removed, broadcasting: 1\nI0131 01:29:02.490460 2016 log.go:181] (0xc000e94000) Go away received\nI0131 01:29:02.490834 2016 log.go:181] (0xc000e94000) (0xc000166140) Stream removed, broadcasting: 1\nI0131 01:29:02.490856 2016 log.go:181] (0xc000e94000) (0xc0001663c0) Stream removed, broadcasting: 3\nI0131 01:29:02.490870 2016 log.go:181] (0xc000e94000) (0xc000baa500) Stream removed, broadcasting: 5\n" Jan 31 01:29:02.495: INFO: stdout: "" Jan 31 01:29:02.536: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-4404 exec execpod-affinityflx4v -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.14:30030/ ; done' Jan 31 01:29:02.871: INFO: stderr: "I0131 01:29:02.692641 2034 log.go:181] (0xc0001522c0) (0xc000c08000) Create stream\nI0131 01:29:02.692713 2034 log.go:181] (0xc0001522c0) (0xc000c08000) Stream added, broadcasting: 1\nI0131 01:29:02.694790 2034 log.go:181] (0xc0001522c0) Reply frame received for 1\nI0131 01:29:02.694843 2034 log.go:181] (0xc0001522c0) (0xc000c080a0) Create stream\nI0131 01:29:02.694866 2034 log.go:181] (0xc0001522c0) (0xc000c080a0) Stream added, broadcasting: 3\nI0131 01:29:02.695760 2034 log.go:181] (0xc0001522c0) Reply frame received for 3\nI0131 01:29:02.695788 2034 log.go:181] (0xc0001522c0) (0xc000b5a000) Create stream\nI0131 01:29:02.695796 2034 log.go:181] (0xc0001522c0) (0xc000b5a000) Stream added, broadcasting: 5\nI0131 01:29:02.696553 2034 log.go:181] (0xc0001522c0) Reply frame received for 5\nI0131 01:29:02.764642 2034 log.go:181] (0xc0001522c0) Data frame received for 5\nI0131 01:29:02.764677 2034 log.go:181] (0xc000b5a000) (5) Data frame handling\nI0131 01:29:02.764698 2034 log.go:181] (0xc000b5a000) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30030/\nI0131 01:29:02.765548 2034 log.go:181] (0xc0001522c0) Data frame received for 3\nI0131 01:29:02.765577 2034 log.go:181] (0xc000c080a0) (3) Data frame handling\nI0131 01:29:02.765609 2034 log.go:181] (0xc000c080a0) (3) Data frame sent\nI0131 01:29:02.769755 2034 log.go:181] (0xc0001522c0) Data frame received for 3\nI0131 01:29:02.769776 2034 log.go:181] (0xc000c080a0) (3) Data frame handling\nI0131 01:29:02.769796 2034 log.go:181] (0xc000c080a0) (3) Data frame sent\nI0131 01:29:02.770307 2034 log.go:181] (0xc0001522c0) Data frame received for 3\nI0131 01:29:02.770323 2034 log.go:181] (0xc000c080a0) (3) Data frame handling\nI0131 01:29:02.770341 2034 log.go:181] (0xc0001522c0) Data frame received for 5\nI0131 01:29:02.770368 2034 log.go:181] (0xc000b5a000) (5) Data frame handling\nI0131 01:29:02.770380 2034 log.go:181] (0xc000b5a000) (5) Data frame sent\n+ 
echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30030/\nI0131 01:29:02.770398 2034 log.go:181] (0xc000c080a0) (3) Data frame sent\nI0131 01:29:02.774821 2034 log.go:181] (0xc0001522c0) Data frame received for 3\nI0131 01:29:02.774844 2034 log.go:181] (0xc000c080a0) (3) Data frame handling\nI0131 01:29:02.774863 2034 log.go:181] (0xc000c080a0) (3) Data frame sent\nI0131 01:29:02.775566 2034 log.go:181] (0xc0001522c0) Data frame received for 3\nI0131 01:29:02.775576 2034 log.go:181] (0xc000c080a0) (3) Data frame handling\nI0131 01:29:02.775582 2034 log.go:181] (0xc000c080a0) (3) Data frame sent\nI0131 01:29:02.775604 2034 log.go:181] (0xc0001522c0) Data frame received for 5\nI0131 01:29:02.775622 2034 log.go:181] (0xc000b5a000) (5) Data frame handling\nI0131 01:29:02.775638 2034 log.go:181] (0xc000b5a000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30030/\nI0131 01:29:02.782979 2034 log.go:181] (0xc0001522c0) Data frame received for 3\nI0131 01:29:02.783001 2034 log.go:181] (0xc000c080a0) (3) Data frame handling\nI0131 01:29:02.783022 2034 log.go:181] (0xc000c080a0) (3) Data frame sent\nI0131 01:29:02.783956 2034 log.go:181] (0xc0001522c0) Data frame received for 5\nI0131 01:29:02.784081 2034 log.go:181] (0xc000b5a000) (5) Data frame handling\nI0131 01:29:02.784113 2034 log.go:181] (0xc000b5a000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30030/\nI0131 01:29:02.784263 2034 log.go:181] (0xc0001522c0) Data frame received for 3\nI0131 01:29:02.784298 2034 log.go:181] (0xc000c080a0) (3) Data frame handling\nI0131 01:29:02.784339 2034 log.go:181] (0xc000c080a0) (3) Data frame sent\nI0131 01:29:02.793782 2034 log.go:181] (0xc0001522c0) Data frame received for 5\nI0131 01:29:02.793806 2034 log.go:181] (0xc000b5a000) (5) Data frame handling\nI0131 01:29:02.793814 2034 log.go:181] (0xc000b5a000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30030/\nI0131 01:29:02.793825 2034 log.go:181] (0xc0001522c0) Data frame received for 3\nI0131 01:29:02.793830 2034 log.go:181] (0xc000c080a0) (3) Data frame handling\nI0131 01:29:02.793835 2034 log.go:181] (0xc000c080a0) (3) Data frame sent\nI0131 01:29:02.793840 2034 log.go:181] (0xc0001522c0) Data frame received for 3\nI0131 01:29:02.793844 2034 log.go:181] (0xc000c080a0) (3) Data frame handling\nI0131 01:29:02.793857 2034 log.go:181] (0xc000c080a0) (3) Data frame sent\nI0131 01:29:02.796531 2034 log.go:181] (0xc0001522c0) Data frame received for 3\nI0131 01:29:02.796550 2034 log.go:181] (0xc000c080a0) (3) Data frame handling\nI0131 01:29:02.796574 2034 log.go:181] (0xc000c080a0) (3) Data frame sent\nI0131 01:29:02.797125 2034 log.go:181] (0xc0001522c0) Data frame received for 3\nI0131 01:29:02.797156 2034 log.go:181] (0xc000c080a0) (3) Data frame handling\nI0131 01:29:02.797171 2034 log.go:181] (0xc000c080a0) (3) Data frame sent\nI0131 01:29:02.797193 2034 log.go:181] (0xc0001522c0) Data frame received for 5\nI0131 01:29:02.797205 2034 log.go:181] (0xc000b5a000) (5) Data frame handling\nI0131 01:29:02.797227 2034 log.go:181] (0xc000b5a000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30030/\nI0131 01:29:02.805139 2034 log.go:181] (0xc0001522c0) Data frame received for 3\nI0131 01:29:02.805158 2034 log.go:181] (0xc000c080a0) (3) Data frame handling\nI0131 01:29:02.805174 2034 log.go:181] (0xc000c080a0) (3) Data frame sent\nI0131 01:29:02.805722 2034 log.go:181] (0xc0001522c0) Data frame received for 3\nI0131 
01:29:02.805743 2034 log.go:181] (0xc000c080a0) (3) Data frame handling\nI0131 01:29:02.805759 2034 log.go:181] (0xc000c080a0) (3) Data frame sent\nI0131 01:29:02.805770 2034 log.go:181] (0xc0001522c0) Data frame received for 5\nI0131 01:29:02.805785 2034 log.go:181] (0xc000b5a000) (5) Data frame handling\nI0131 01:29:02.805794 2034 log.go:181] (0xc000b5a000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30030/\nI0131 01:29:02.809512 2034 log.go:181] (0xc0001522c0) Data frame received for 3\nI0131 01:29:02.809527 2034 log.go:181] (0xc000c080a0) (3) Data frame handling\nI0131 01:29:02.809533 2034 log.go:181] (0xc000c080a0) (3) Data frame sent\nI0131 01:29:02.810086 2034 log.go:181] (0xc0001522c0) Data frame received for 5\nI0131 01:29:02.810113 2034 log.go:181] (0xc000b5a000) (5) Data frame handling\nI0131 01:29:02.810125 2034 log.go:181] (0xc000b5a000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30030/\nI0131 01:29:02.810147 2034 log.go:181] (0xc0001522c0) Data frame received for 3\nI0131 01:29:02.810157 2034 log.go:181] (0xc000c080a0) (3) Data frame handling\nI0131 01:29:02.810167 2034 log.go:181] (0xc000c080a0) (3) Data frame sent\nI0131 01:29:02.813652 2034 log.go:181] (0xc0001522c0) Data frame received for 3\nI0131 01:29:02.813694 2034 log.go:181] (0xc000c080a0) (3) Data frame handling\nI0131 01:29:02.813728 2034 log.go:181] (0xc000c080a0) (3) Data frame sent\nI0131 01:29:02.814019 2034 log.go:181] (0xc0001522c0) Data frame received for 3\nI0131 01:29:02.814032 2034 log.go:181] (0xc000c080a0) (3) Data frame handling\nI0131 01:29:02.814038 2034 log.go:181] (0xc000c080a0) (3) Data frame sent\nI0131 01:29:02.814055 2034 log.go:181] (0xc0001522c0) Data frame received for 5\nI0131 01:29:02.814061 2034 log.go:181] (0xc000b5a000) (5) Data frame handling\nI0131 01:29:02.814065 2034 log.go:181] (0xc000b5a000) (5) Data frame sent\nI0131 01:29:02.814070 2034 log.go:181] (0xc0001522c0) Data frame received for 5\nI0131 01:29:02.814073 2034 log.go:181] (0xc000b5a000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30030/\nI0131 01:29:02.814116 2034 log.go:181] (0xc000b5a000) (5) Data frame sent\nI0131 01:29:02.819972 2034 log.go:181] (0xc0001522c0) Data frame received for 3\nI0131 01:29:02.819996 2034 log.go:181] (0xc000c080a0) (3) Data frame handling\nI0131 01:29:02.820056 2034 log.go:181] (0xc000c080a0) (3) Data frame sent\nI0131 01:29:02.820633 2034 log.go:181] (0xc0001522c0) Data frame received for 5\nI0131 01:29:02.820659 2034 log.go:181] (0xc0001522c0) Data frame received for 3\nI0131 01:29:02.820688 2034 log.go:181] (0xc000c080a0) (3) Data frame handling\nI0131 01:29:02.820707 2034 log.go:181] (0xc000c080a0) (3) Data frame sent\nI0131 01:29:02.820731 2034 log.go:181] (0xc000b5a000) (5) Data frame handling\nI0131 01:29:02.820746 2034 log.go:181] (0xc000b5a000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30030/\nI0131 01:29:02.824572 2034 log.go:181] (0xc0001522c0) Data frame received for 3\nI0131 01:29:02.824588 2034 log.go:181] (0xc000c080a0) (3) Data frame handling\nI0131 01:29:02.824603 2034 log.go:181] (0xc000c080a0) (3) Data frame sent\nI0131 01:29:02.825123 2034 log.go:181] (0xc0001522c0) Data frame received for 5\nI0131 01:29:02.825159 2034 log.go:181] (0xc000b5a000) (5) Data frame handling\nI0131 01:29:02.825180 2034 log.go:181] (0xc000b5a000) (5) Data frame sent\nI0131 01:29:02.825190 2034 log.go:181] (0xc0001522c0) Data frame received for 
5\nI0131 01:29:02.825199 2034 log.go:181] (0xc000b5a000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30030/\nI0131 01:29:02.825233 2034 log.go:181] (0xc0001522c0) Data frame received for 3\nI0131 01:29:02.825268 2034 log.go:181] (0xc000c080a0) (3) Data frame handling\nI0131 01:29:02.825289 2034 log.go:181] (0xc000c080a0) (3) Data frame sent\nI0131 01:29:02.825317 2034 log.go:181] (0xc000b5a000) (5) Data frame sent\nI0131 01:29:02.829804 2034 log.go:181] (0xc0001522c0) Data frame received for 3\nI0131 01:29:02.829839 2034 log.go:181] (0xc000c080a0) (3) Data frame handling\nI0131 01:29:02.829861 2034 log.go:181] (0xc000c080a0) (3) Data frame sent\nI0131 01:29:02.830228 2034 log.go:181] (0xc0001522c0) Data frame received for 3\nI0131 01:29:02.830256 2034 log.go:181] (0xc000c080a0) (3) Data frame handling\nI0131 01:29:02.830265 2034 log.go:181] (0xc000c080a0) (3) Data frame sent\nI0131 01:29:02.830277 2034 log.go:181] (0xc0001522c0) Data frame received for 5\nI0131 01:29:02.830285 2034 log.go:181] (0xc000b5a000) (5) Data frame handling\nI0131 01:29:02.830292 2034 log.go:181] (0xc000b5a000) (5) Data frame sent\nI0131 01:29:02.830300 2034 log.go:181] (0xc0001522c0) Data frame received for 5\nI0131 01:29:02.830307 2034 log.go:181] (0xc000b5a000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30030/\nI0131 01:29:02.830327 2034 log.go:181] (0xc000b5a000) (5) Data frame sent\nI0131 01:29:02.835768 2034 log.go:181] (0xc0001522c0) Data frame received for 3\nI0131 01:29:02.835789 2034 log.go:181] (0xc000c080a0) (3) Data frame handling\nI0131 01:29:02.835811 2034 log.go:181] (0xc000c080a0) (3) Data frame sent\nI0131 01:29:02.836382 2034 log.go:181] (0xc0001522c0) Data frame received for 3\nI0131 01:29:02.836414 2034 log.go:181] (0xc000c080a0) (3) Data frame handling\nI0131 01:29:02.836429 2034 log.go:181] (0xc000c080a0) (3) Data frame sent\nI0131 01:29:02.836448 2034 log.go:181] (0xc0001522c0) Data frame received for 5\nI0131 01:29:02.836458 2034 log.go:181] (0xc000b5a000) (5) Data frame handling\nI0131 01:29:02.836469 2034 log.go:181] (0xc000b5a000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30030/\nI0131 01:29:02.843604 2034 log.go:181] (0xc0001522c0) Data frame received for 3\nI0131 01:29:02.843631 2034 log.go:181] (0xc000c080a0) (3) Data frame handling\nI0131 01:29:02.843652 2034 log.go:181] (0xc000c080a0) (3) Data frame sent\nI0131 01:29:02.844366 2034 log.go:181] (0xc0001522c0) Data frame received for 5\nI0131 01:29:02.844386 2034 log.go:181] (0xc000b5a000) (5) Data frame handling\nI0131 01:29:02.844396 2034 log.go:181] (0xc000b5a000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30030/\nI0131 01:29:02.844404 2034 log.go:181] (0xc0001522c0) Data frame received for 3\nI0131 01:29:02.844424 2034 log.go:181] (0xc000c080a0) (3) Data frame handling\nI0131 01:29:02.844438 2034 log.go:181] (0xc000c080a0) (3) Data frame sent\nI0131 01:29:02.849758 2034 log.go:181] (0xc0001522c0) Data frame received for 3\nI0131 01:29:02.849778 2034 log.go:181] (0xc000c080a0) (3) Data frame handling\nI0131 01:29:02.849793 2034 log.go:181] (0xc000c080a0) (3) Data frame sent\nI0131 01:29:02.850674 2034 log.go:181] (0xc0001522c0) Data frame received for 5\nI0131 01:29:02.850688 2034 log.go:181] (0xc000b5a000) (5) Data frame handling\nI0131 01:29:02.850701 2034 log.go:181] (0xc0001522c0) Data frame received for 3\nI0131 01:29:02.850721 2034 log.go:181] (0xc000c080a0) (3) Data 
frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30030/\nI0131 01:29:02.850739 2034 log.go:181] (0xc000b5a000) (5) Data frame sent\nI0131 01:29:02.850762 2034 log.go:181] (0xc000c080a0) (3) Data frame sent\nI0131 01:29:02.856922 2034 log.go:181] (0xc0001522c0) Data frame received for 3\nI0131 01:29:02.856942 2034 log.go:181] (0xc000c080a0) (3) Data frame handling\nI0131 01:29:02.856952 2034 log.go:181] (0xc000c080a0) (3) Data frame sent\nI0131 01:29:02.857816 2034 log.go:181] (0xc0001522c0) Data frame received for 5\nI0131 01:29:02.857838 2034 log.go:181] (0xc000b5a000) (5) Data frame handling\nI0131 01:29:02.857850 2034 log.go:181] (0xc000b5a000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30030/\nI0131 01:29:02.857861 2034 log.go:181] (0xc0001522c0) Data frame received for 3\nI0131 01:29:02.857885 2034 log.go:181] (0xc000c080a0) (3) Data frame handling\nI0131 01:29:02.857898 2034 log.go:181] (0xc000c080a0) (3) Data frame sent\nI0131 01:29:02.862436 2034 log.go:181] (0xc0001522c0) Data frame received for 3\nI0131 01:29:02.862477 2034 log.go:181] (0xc000c080a0) (3) Data frame handling\nI0131 01:29:02.862503 2034 log.go:181] (0xc000c080a0) (3) Data frame sent\nI0131 01:29:02.863189 2034 log.go:181] (0xc0001522c0) Data frame received for 3\nI0131 01:29:02.863206 2034 log.go:181] (0xc000c080a0) (3) Data frame handling\nI0131 01:29:02.863307 2034 log.go:181] (0xc0001522c0) Data frame received for 5\nI0131 01:29:02.863335 2034 log.go:181] (0xc000b5a000) (5) Data frame handling\nI0131 01:29:02.864990 2034 log.go:181] (0xc0001522c0) Data frame received for 1\nI0131 01:29:02.865016 2034 log.go:181] (0xc000c08000) (1) Data frame handling\nI0131 01:29:02.865029 2034 log.go:181] (0xc000c08000) (1) Data frame sent\nI0131 01:29:02.865043 2034 log.go:181] (0xc0001522c0) (0xc000c08000) Stream removed, broadcasting: 1\nI0131 01:29:02.865061 2034 log.go:181] (0xc0001522c0) Go away received\nI0131 01:29:02.865430 2034 log.go:181] (0xc0001522c0) (0xc000c08000) Stream removed, broadcasting: 1\nI0131 01:29:02.865456 2034 log.go:181] (0xc0001522c0) (0xc000c080a0) Stream removed, broadcasting: 3\nI0131 01:29:02.865465 2034 log.go:181] (0xc0001522c0) (0xc000b5a000) Stream removed, broadcasting: 5\n" Jan 31 01:29:02.871: INFO: stdout: "\naffinity-nodeport-transition-zpfgw\naffinity-nodeport-transition-p226g\naffinity-nodeport-transition-rqkzv\naffinity-nodeport-transition-zpfgw\naffinity-nodeport-transition-rqkzv\naffinity-nodeport-transition-zpfgw\naffinity-nodeport-transition-p226g\naffinity-nodeport-transition-rqkzv\naffinity-nodeport-transition-p226g\naffinity-nodeport-transition-zpfgw\naffinity-nodeport-transition-p226g\naffinity-nodeport-transition-zpfgw\naffinity-nodeport-transition-rqkzv\naffinity-nodeport-transition-rqkzv\naffinity-nodeport-transition-p226g\naffinity-nodeport-transition-rqkzv" Jan 31 01:29:02.871: INFO: Received response from host: affinity-nodeport-transition-zpfgw Jan 31 01:29:02.871: INFO: Received response from host: affinity-nodeport-transition-p226g Jan 31 01:29:02.871: INFO: Received response from host: affinity-nodeport-transition-rqkzv Jan 31 01:29:02.871: INFO: Received response from host: affinity-nodeport-transition-zpfgw Jan 31 01:29:02.871: INFO: Received response from host: affinity-nodeport-transition-rqkzv Jan 31 01:29:02.871: INFO: Received response from host: affinity-nodeport-transition-zpfgw Jan 31 01:29:02.871: INFO: Received response from host: affinity-nodeport-transition-p226g Jan 31 01:29:02.871: 
INFO: Received response from host: affinity-nodeport-transition-rqkzv Jan 31 01:29:02.871: INFO: Received response from host: affinity-nodeport-transition-p226g Jan 31 01:29:02.871: INFO: Received response from host: affinity-nodeport-transition-zpfgw Jan 31 01:29:02.871: INFO: Received response from host: affinity-nodeport-transition-p226g Jan 31 01:29:02.871: INFO: Received response from host: affinity-nodeport-transition-zpfgw Jan 31 01:29:02.871: INFO: Received response from host: affinity-nodeport-transition-rqkzv Jan 31 01:29:02.871: INFO: Received response from host: affinity-nodeport-transition-rqkzv Jan 31 01:29:02.871: INFO: Received response from host: affinity-nodeport-transition-p226g Jan 31 01:29:02.871: INFO: Received response from host: affinity-nodeport-transition-rqkzv Jan 31 01:29:02.880: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-4404 exec execpod-affinityflx4v -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.14:30030/ ; done' Jan 31 01:29:03.180: INFO: stderr: "I0131 01:29:03.017210 2052 log.go:181] (0xc00001f4a0) (0xc0006a2780) Create stream\nI0131 01:29:03.017299 2052 log.go:181] (0xc00001f4a0) (0xc0006a2780) Stream added, broadcasting: 1\nI0131 01:29:03.019540 2052 log.go:181] (0xc00001f4a0) Reply frame received for 1\nI0131 01:29:03.019604 2052 log.go:181] (0xc00001f4a0) (0xc00040a320) Create stream\nI0131 01:29:03.019641 2052 log.go:181] (0xc00001f4a0) (0xc00040a320) Stream added, broadcasting: 3\nI0131 01:29:03.020647 2052 log.go:181] (0xc00001f4a0) Reply frame received for 3\nI0131 01:29:03.020684 2052 log.go:181] (0xc00001f4a0) (0xc0003b8c80) Create stream\nI0131 01:29:03.020700 2052 log.go:181] (0xc00001f4a0) (0xc0003b8c80) Stream added, broadcasting: 5\nI0131 01:29:03.021994 2052 log.go:181] (0xc00001f4a0) Reply frame received for 5\nI0131 01:29:03.081217 2052 log.go:181] (0xc00001f4a0) Data frame received for 3\nI0131 01:29:03.081251 2052 log.go:181] (0xc00040a320) (3) Data frame handling\nI0131 01:29:03.081262 2052 log.go:181] (0xc00040a320) (3) Data frame sent\nI0131 01:29:03.081269 2052 log.go:181] (0xc00001f4a0) Data frame received for 5\nI0131 01:29:03.081273 2052 log.go:181] (0xc0003b8c80) (5) Data frame handling\nI0131 01:29:03.081278 2052 log.go:181] (0xc0003b8c80) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30030/\nI0131 01:29:03.087106 2052 log.go:181] (0xc00001f4a0) Data frame received for 3\nI0131 01:29:03.087126 2052 log.go:181] (0xc00040a320) (3) Data frame handling\nI0131 01:29:03.087143 2052 log.go:181] (0xc00040a320) (3) Data frame sent\nI0131 01:29:03.087425 2052 log.go:181] (0xc00001f4a0) Data frame received for 5\nI0131 01:29:03.087443 2052 log.go:181] (0xc0003b8c80) (5) Data frame handling\nI0131 01:29:03.087455 2052 log.go:181] (0xc0003b8c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30030/\nI0131 01:29:03.087490 2052 log.go:181] (0xc00001f4a0) Data frame received for 3\nI0131 01:29:03.087518 2052 log.go:181] (0xc00040a320) (3) Data frame handling\nI0131 01:29:03.087542 2052 log.go:181] (0xc00040a320) (3) Data frame sent\nI0131 01:29:03.094698 2052 log.go:181] (0xc00001f4a0) Data frame received for 3\nI0131 01:29:03.094714 2052 log.go:181] (0xc00040a320) (3) Data frame handling\nI0131 01:29:03.094722 2052 log.go:181] (0xc00040a320) (3) Data frame sent\nI0131 01:29:03.095453 2052 log.go:181] (0xc00001f4a0) Data frame 
received for 3\nI0131 01:29:03.095485 2052 log.go:181] (0xc00040a320) (3) Data frame handling\nI0131 01:29:03.095499 2052 log.go:181] (0xc00040a320) (3) Data frame sent\nI0131 01:29:03.095513 2052 log.go:181] (0xc00001f4a0) Data frame received for 5\nI0131 01:29:03.095522 2052 log.go:181] (0xc0003b8c80) (5) Data frame handling\nI0131 01:29:03.095530 2052 log.go:181] (0xc0003b8c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30030/\nI0131 01:29:03.099574 2052 log.go:181] (0xc00001f4a0) Data frame received for 3\nI0131 01:29:03.099588 2052 log.go:181] (0xc00040a320) (3) Data frame handling\nI0131 01:29:03.099599 2052 log.go:181] (0xc00040a320) (3) Data frame sent\nI0131 01:29:03.100387 2052 log.go:181] (0xc00001f4a0) Data frame received for 3\nI0131 01:29:03.100421 2052 log.go:181] (0xc00040a320) (3) Data frame handling\nI0131 01:29:03.100434 2052 log.go:181] (0xc00040a320) (3) Data frame sent\nI0131 01:29:03.100454 2052 log.go:181] (0xc00001f4a0) Data frame received for 5\nI0131 01:29:03.100462 2052 log.go:181] (0xc0003b8c80) (5) Data frame handling\nI0131 01:29:03.100473 2052 log.go:181] (0xc0003b8c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30030/\nI0131 01:29:03.107588 2052 log.go:181] (0xc00001f4a0) Data frame received for 3\nI0131 01:29:03.107616 2052 log.go:181] (0xc00040a320) (3) Data frame handling\nI0131 01:29:03.107635 2052 log.go:181] (0xc00040a320) (3) Data frame sent\nI0131 01:29:03.108156 2052 log.go:181] (0xc00001f4a0) Data frame received for 3\nI0131 01:29:03.108195 2052 log.go:181] (0xc00040a320) (3) Data frame handling\nI0131 01:29:03.108217 2052 log.go:181] (0xc00040a320) (3) Data frame sent\nI0131 01:29:03.108239 2052 log.go:181] (0xc00001f4a0) Data frame received for 5\nI0131 01:29:03.108255 2052 log.go:181] (0xc0003b8c80) (5) Data frame handling\nI0131 01:29:03.108269 2052 log.go:181] (0xc0003b8c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30030/\nI0131 01:29:03.117362 2052 log.go:181] (0xc00001f4a0) Data frame received for 3\nI0131 01:29:03.117395 2052 log.go:181] (0xc00040a320) (3) Data frame handling\nI0131 01:29:03.117405 2052 log.go:181] (0xc00040a320) (3) Data frame sent\nI0131 01:29:03.117422 2052 log.go:181] (0xc00001f4a0) Data frame received for 5\nI0131 01:29:03.117429 2052 log.go:181] (0xc0003b8c80) (5) Data frame handling\nI0131 01:29:03.117438 2052 log.go:181] (0xc0003b8c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30030/\nI0131 01:29:03.119250 2052 log.go:181] (0xc00001f4a0) Data frame received for 3\nI0131 01:29:03.119274 2052 log.go:181] (0xc00040a320) (3) Data frame handling\nI0131 01:29:03.119297 2052 log.go:181] (0xc00040a320) (3) Data frame sent\nI0131 01:29:03.119676 2052 log.go:181] (0xc00001f4a0) Data frame received for 3\nI0131 01:29:03.119699 2052 log.go:181] (0xc00040a320) (3) Data frame handling\nI0131 01:29:03.119712 2052 log.go:181] (0xc00040a320) (3) Data frame sent\nI0131 01:29:03.119730 2052 log.go:181] (0xc00001f4a0) Data frame received for 5\nI0131 01:29:03.119741 2052 log.go:181] (0xc0003b8c80) (5) Data frame handling\nI0131 01:29:03.119753 2052 log.go:181] (0xc0003b8c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30030/\nI0131 01:29:03.123512 2052 log.go:181] (0xc00001f4a0) Data frame received for 3\nI0131 01:29:03.123525 2052 log.go:181] (0xc00040a320) (3) Data frame handling\nI0131 01:29:03.123532 2052 log.go:181] (0xc00040a320) (3) Data 
frame sent\nI0131 01:29:03.124097 2052 log.go:181] (0xc00001f4a0) Data frame received for 5\nI0131 01:29:03.124117 2052 log.go:181] (0xc0003b8c80) (5) Data frame handling\nI0131 01:29:03.124135 2052 log.go:181] (0xc0003b8c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30030/\nI0131 01:29:03.124348 2052 log.go:181] (0xc00001f4a0) Data frame received for 3\nI0131 01:29:03.124359 2052 log.go:181] (0xc00040a320) (3) Data frame handling\nI0131 01:29:03.124366 2052 log.go:181] (0xc00040a320) (3) Data frame sent\nI0131 01:29:03.127263 2052 log.go:181] (0xc00001f4a0) Data frame received for 3\nI0131 01:29:03.127279 2052 log.go:181] (0xc00040a320) (3) Data frame handling\nI0131 01:29:03.127291 2052 log.go:181] (0xc00040a320) (3) Data frame sent\nI0131 01:29:03.127840 2052 log.go:181] (0xc00001f4a0) Data frame received for 5\nI0131 01:29:03.127856 2052 log.go:181] (0xc00001f4a0) Data frame received for 3\nI0131 01:29:03.127879 2052 log.go:181] (0xc00040a320) (3) Data frame handling\nI0131 01:29:03.127889 2052 log.go:181] (0xc00040a320) (3) Data frame sent\nI0131 01:29:03.127899 2052 log.go:181] (0xc0003b8c80) (5) Data frame handling\nI0131 01:29:03.127905 2052 log.go:181] (0xc0003b8c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30030/\nI0131 01:29:03.130813 2052 log.go:181] (0xc00001f4a0) Data frame received for 3\nI0131 01:29:03.130824 2052 log.go:181] (0xc00040a320) (3) Data frame handling\nI0131 01:29:03.130830 2052 log.go:181] (0xc00040a320) (3) Data frame sent\nI0131 01:29:03.131560 2052 log.go:181] (0xc00001f4a0) Data frame received for 3\nI0131 01:29:03.131581 2052 log.go:181] (0xc00001f4a0) Data frame received for 5\nI0131 01:29:03.131594 2052 log.go:181] (0xc0003b8c80) (5) Data frame handling\nI0131 01:29:03.131602 2052 log.go:181] (0xc0003b8c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30030/\nI0131 01:29:03.131613 2052 log.go:181] (0xc00040a320) (3) Data frame handling\nI0131 01:29:03.131634 2052 log.go:181] (0xc00040a320) (3) Data frame sent\nI0131 01:29:03.137543 2052 log.go:181] (0xc00001f4a0) Data frame received for 3\nI0131 01:29:03.137562 2052 log.go:181] (0xc00040a320) (3) Data frame handling\nI0131 01:29:03.137571 2052 log.go:181] (0xc00040a320) (3) Data frame sent\nI0131 01:29:03.138032 2052 log.go:181] (0xc00001f4a0) Data frame received for 3\nI0131 01:29:03.138077 2052 log.go:181] (0xc00040a320) (3) Data frame handling\nI0131 01:29:03.138093 2052 log.go:181] (0xc00001f4a0) Data frame received for 5\nI0131 01:29:03.138116 2052 log.go:181] (0xc0003b8c80) (5) Data frame handling\nI0131 01:29:03.138127 2052 log.go:181] (0xc0003b8c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30030/\nI0131 01:29:03.138140 2052 log.go:181] (0xc00040a320) (3) Data frame sent\nI0131 01:29:03.142197 2052 log.go:181] (0xc00001f4a0) Data frame received for 3\nI0131 01:29:03.142214 2052 log.go:181] (0xc00040a320) (3) Data frame handling\nI0131 01:29:03.142227 2052 log.go:181] (0xc00040a320) (3) Data frame sent\nI0131 01:29:03.143136 2052 log.go:181] (0xc00001f4a0) Data frame received for 3\nI0131 01:29:03.143161 2052 log.go:181] (0xc00040a320) (3) Data frame handling\nI0131 01:29:03.143172 2052 log.go:181] (0xc00040a320) (3) Data frame sent\nI0131 01:29:03.143187 2052 log.go:181] (0xc00001f4a0) Data frame received for 5\nI0131 01:29:03.143195 2052 log.go:181] (0xc0003b8c80) (5) Data frame handling\nI0131 01:29:03.143204 2052 log.go:181] (0xc0003b8c80) 
(5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30030/\nI0131 01:29:03.147607 2052 log.go:181] (0xc00001f4a0) Data frame received for 3\nI0131 01:29:03.147632 2052 log.go:181] (0xc00040a320) (3) Data frame handling\nI0131 01:29:03.147656 2052 log.go:181] (0xc00040a320) (3) Data frame sent\nI0131 01:29:03.148575 2052 log.go:181] (0xc00001f4a0) Data frame received for 3\nI0131 01:29:03.148618 2052 log.go:181] (0xc00040a320) (3) Data frame handling\nI0131 01:29:03.148641 2052 log.go:181] (0xc00040a320) (3) Data frame sent\nI0131 01:29:03.148667 2052 log.go:181] (0xc00001f4a0) Data frame received for 5\nI0131 01:29:03.148681 2052 log.go:181] (0xc0003b8c80) (5) Data frame handling\nI0131 01:29:03.148703 2052 log.go:181] (0xc0003b8c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30030/\nI0131 01:29:03.152812 2052 log.go:181] (0xc00001f4a0) Data frame received for 3\nI0131 01:29:03.152932 2052 log.go:181] (0xc00040a320) (3) Data frame handling\nI0131 01:29:03.152965 2052 log.go:181] (0xc00040a320) (3) Data frame sent\nI0131 01:29:03.153757 2052 log.go:181] (0xc00001f4a0) Data frame received for 3\nI0131 01:29:03.153783 2052 log.go:181] (0xc00040a320) (3) Data frame handling\nI0131 01:29:03.153795 2052 log.go:181] (0xc00040a320) (3) Data frame sent\nI0131 01:29:03.153819 2052 log.go:181] (0xc00001f4a0) Data frame received for 5\nI0131 01:29:03.153831 2052 log.go:181] (0xc0003b8c80) (5) Data frame handling\nI0131 01:29:03.153839 2052 log.go:181] (0xc0003b8c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30030/\nI0131 01:29:03.157692 2052 log.go:181] (0xc00001f4a0) Data frame received for 3\nI0131 01:29:03.157721 2052 log.go:181] (0xc00040a320) (3) Data frame handling\nI0131 01:29:03.157737 2052 log.go:181] (0xc00040a320) (3) Data frame sent\nI0131 01:29:03.158639 2052 log.go:181] (0xc00001f4a0) Data frame received for 3\nI0131 01:29:03.158664 2052 log.go:181] (0xc00040a320) (3) Data frame handling\nI0131 01:29:03.158676 2052 log.go:181] (0xc00040a320) (3) Data frame sent\nI0131 01:29:03.158711 2052 log.go:181] (0xc00001f4a0) Data frame received for 5\nI0131 01:29:03.158745 2052 log.go:181] (0xc0003b8c80) (5) Data frame handling\nI0131 01:29:03.158772 2052 log.go:181] (0xc0003b8c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30030/\nI0131 01:29:03.164784 2052 log.go:181] (0xc00001f4a0) Data frame received for 3\nI0131 01:29:03.164814 2052 log.go:181] (0xc00040a320) (3) Data frame handling\nI0131 01:29:03.164956 2052 log.go:181] (0xc00040a320) (3) Data frame sent\nI0131 01:29:03.165753 2052 log.go:181] (0xc00001f4a0) Data frame received for 5\nI0131 01:29:03.165804 2052 log.go:181] (0xc0003b8c80) (5) Data frame handling\nI0131 01:29:03.165817 2052 log.go:181] (0xc0003b8c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30030/\nI0131 01:29:03.165836 2052 log.go:181] (0xc00001f4a0) Data frame received for 3\nI0131 01:29:03.165846 2052 log.go:181] (0xc00040a320) (3) Data frame handling\nI0131 01:29:03.165863 2052 log.go:181] (0xc00040a320) (3) Data frame sent\nI0131 01:29:03.169636 2052 log.go:181] (0xc00001f4a0) Data frame received for 3\nI0131 01:29:03.169664 2052 log.go:181] (0xc00040a320) (3) Data frame handling\nI0131 01:29:03.169685 2052 log.go:181] (0xc00040a320) (3) Data frame sent\nI0131 01:29:03.170361 2052 log.go:181] (0xc00001f4a0) Data frame received for 5\nI0131 01:29:03.170408 2052 log.go:181] (0xc0003b8c80) (5) 
Data frame handling\nI0131 01:29:03.170513 2052 log.go:181] (0xc00001f4a0) Data frame received for 3\nI0131 01:29:03.170555 2052 log.go:181] (0xc00040a320) (3) Data frame handling\nI0131 01:29:03.172720 2052 log.go:181] (0xc00001f4a0) Data frame received for 1\nI0131 01:29:03.172821 2052 log.go:181] (0xc0006a2780) (1) Data frame handling\nI0131 01:29:03.173063 2052 log.go:181] (0xc0006a2780) (1) Data frame sent\nI0131 01:29:03.173091 2052 log.go:181] (0xc00001f4a0) (0xc0006a2780) Stream removed, broadcasting: 1\nI0131 01:29:03.173118 2052 log.go:181] (0xc00001f4a0) Go away received\nI0131 01:29:03.175281 2052 log.go:181] (0xc00001f4a0) (0xc0006a2780) Stream removed, broadcasting: 1\nI0131 01:29:03.175314 2052 log.go:181] (0xc00001f4a0) (0xc00040a320) Stream removed, broadcasting: 3\nI0131 01:29:03.175331 2052 log.go:181] (0xc00001f4a0) (0xc0003b8c80) Stream removed, broadcasting: 5\n" Jan 31 01:29:03.180: INFO: stdout: "\naffinity-nodeport-transition-zpfgw\naffinity-nodeport-transition-zpfgw\naffinity-nodeport-transition-zpfgw\naffinity-nodeport-transition-zpfgw\naffinity-nodeport-transition-zpfgw\naffinity-nodeport-transition-zpfgw\naffinity-nodeport-transition-zpfgw\naffinity-nodeport-transition-zpfgw\naffinity-nodeport-transition-zpfgw\naffinity-nodeport-transition-zpfgw\naffinity-nodeport-transition-zpfgw\naffinity-nodeport-transition-zpfgw\naffinity-nodeport-transition-zpfgw\naffinity-nodeport-transition-zpfgw\naffinity-nodeport-transition-zpfgw\naffinity-nodeport-transition-zpfgw" Jan 31 01:29:03.180: INFO: Received response from host: affinity-nodeport-transition-zpfgw Jan 31 01:29:03.180: INFO: Received response from host: affinity-nodeport-transition-zpfgw Jan 31 01:29:03.180: INFO: Received response from host: affinity-nodeport-transition-zpfgw Jan 31 01:29:03.180: INFO: Received response from host: affinity-nodeport-transition-zpfgw Jan 31 01:29:03.180: INFO: Received response from host: affinity-nodeport-transition-zpfgw Jan 31 01:29:03.180: INFO: Received response from host: affinity-nodeport-transition-zpfgw Jan 31 01:29:03.180: INFO: Received response from host: affinity-nodeport-transition-zpfgw Jan 31 01:29:03.180: INFO: Received response from host: affinity-nodeport-transition-zpfgw Jan 31 01:29:03.180: INFO: Received response from host: affinity-nodeport-transition-zpfgw Jan 31 01:29:03.180: INFO: Received response from host: affinity-nodeport-transition-zpfgw Jan 31 01:29:03.180: INFO: Received response from host: affinity-nodeport-transition-zpfgw Jan 31 01:29:03.180: INFO: Received response from host: affinity-nodeport-transition-zpfgw Jan 31 01:29:03.180: INFO: Received response from host: affinity-nodeport-transition-zpfgw Jan 31 01:29:03.180: INFO: Received response from host: affinity-nodeport-transition-zpfgw Jan 31 01:29:03.180: INFO: Received response from host: affinity-nodeport-transition-zpfgw Jan 31 01:29:03.180: INFO: Received response from host: affinity-nodeport-transition-zpfgw Jan 31 01:29:03.180: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-4404, will wait for the garbage collector to delete the pods Jan 31 01:29:03.378: INFO: Deleting ReplicationController affinity-nodeport-transition took: 100.366979ms Jan 31 01:29:03.978: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 600.232213ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 
01:30:10.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4404" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
• [SLOW TEST:80.564 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":311,"completed":210,"skipped":3972,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
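The spec above verifies session affinity switching on a NodePort service. It first checks raw reachability with nc -zv against the service name, the cluster IP (10.96.128.96:80), and nodePort 30030 on both nodes (172.18.0.14, 172.18.0.16), then issues 16 curl requests: with affinity off the responses rotate across all three backends (zpfgw, p226g, rqkzv), and after the suite flips the service to ClientIP affinity all 16 responses come from the single backend zpfgw. A minimal manual reproduction, assuming the service and namespace names from this run and a scratch pod with curl available; the patch command is an illustration of the affinity switch, which the suite performs through the API rather than via kubectl:

    # switch the service to ClientIP session affinity (illustrative)
    kubectl -n services-4404 patch svc affinity-nodeport-transition \
      -p '{"spec":{"sessionAffinity":"ClientIP"}}'
    # with affinity on, every request should print the same backend hostname
    NODE_IP=172.18.0.14; NODE_PORT=30030
    for i in $(seq 0 15); do
      curl -q -s --connect-timeout 2 "http://$NODE_IP:$NODE_PORT/"; echo
    done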
[Conformance]","total":311,"completed":211,"skipped":3993,"failed":0} SSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:30:11.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:30:11.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-1351" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":311,"completed":212,"skipped":3997,"failed":0} ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:30:11.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:30:39.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7898" for this suite. • [SLOW TEST:28.145 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. 
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 31 01:30:11.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 31 01:30:39.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7898" for this suite.
• [SLOW TEST:28.145 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":311,"completed":213,"skipped":3997,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 31 01:30:39.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan 31 01:30:39.397: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3417 457579fb-12ac-464d-a4ab-8af2a6c16b2a 1130400 0 2021-01-31 01:30:39 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-01-31 01:30:39 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 31 01:30:39.397: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3417 457579fb-12ac-464d-a4ab-8af2a6c16b2a 1130400 0 2021-01-31 01:30:39 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-01-31 01:30:39 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan 31 01:30:49.410: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3417 457579fb-12ac-464d-a4ab-8af2a6c16b2a 1130427 0 2021-01-31 01:30:39 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-01-31 01:30:49 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 31 01:30:49.411: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3417 457579fb-12ac-464d-a4ab-8af2a6c16b2a 1130427 0 2021-01-31 01:30:39 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-01-31 01:30:49 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan 31 01:30:59.424: INFO: Got : 
MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3417 457579fb-12ac-464d-a4ab-8af2a6c16b2a 1130447 0 2021-01-31 01:30:39 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-01-31 01:30:49 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 31 01:30:59.424: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3417 457579fb-12ac-464d-a4ab-8af2a6c16b2a 1130447 0 2021-01-31 01:30:39 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-01-31 01:30:49 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Jan 31 01:31:09.434: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3417 457579fb-12ac-464d-a4ab-8af2a6c16b2a 1130467 0 2021-01-31 01:30:39 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-01-31 01:30:49 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 31 01:31:09.435: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3417 457579fb-12ac-464d-a4ab-8af2a6c16b2a 1130467 0 2021-01-31 01:30:39 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-01-31 01:30:49 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Jan 31 01:31:19.446: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3417 04621eb4-91bf-40d0-951a-71fcb82494ff 1130487 0 2021-01-31 01:31:19 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-01-31 01:31:19 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jan 31 01:31:19.446: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3417 04621eb4-91bf-40d0-951a-71fcb82494ff 1130487 0 2021-01-31 01:31:19 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-01-31 01:31:19 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jan 31 01:31:29.453: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3417 04621eb4-91bf-40d0-951a-71fcb82494ff 1130507 0 2021-01-31 01:31:19 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-01-31 01:31:19 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jan 31 01:31:29.453: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3417 04621eb4-91bf-40d0-951a-71fcb82494ff 1130507 0 2021-01-31 01:31:19 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-01-31 01:31:19 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:31:39.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3417" for this suite. • [SLOW TEST:60.158 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":311,"completed":214,"skipped":4048,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:31:39.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Jan 31 01:31:43.627: INFO: &Pod{ObjectMeta:{send-events-24b997d3-5efc-49c8-a076-b2bf6dc80266 events-8964 a8785fcd-7515-46c0-b9ba-4577c54ace80 1130547 0 2021-01-31 01:31:39 +0000 UTC map[name:foo time:593985985] map[] [] [] [{e2e.test Update v1 2021-01-31 01:31:39 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-31 01:31:43 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.7\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s9fz9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s9fz9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s9fz9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 01:31:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 01:31:43 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 01:31:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 01:31:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.2.7,StartTime:2021-01-31 01:31:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-31 01:31:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a,ContainerID:containerd://19e6d4bd7900e420dc7189b3cdfe7f00649cc2b57d1eb543835d709fc3771dc4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.7,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Jan 31 01:31:45.633: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Jan 31 01:31:47.638: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:31:47.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-8964" for this suite. • [SLOW TEST:8.209 seconds] [k8s.io] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":311,"completed":215,"skipped":4069,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:31:47.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating configMap with name projected-configmap-test-volume-8cbf3fb3-c8f0-40af-9538-68f298a44744 STEP: Creating a pod to test consume configMaps Jan 31 01:31:47.827: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1922ee7c-6125-4d39-936a-356df5697e5e" in namespace "projected-2968" to be "Succeeded or Failed" Jan 31 01:31:47.835: INFO: Pod 
"pod-projected-configmaps-1922ee7c-6125-4d39-936a-356df5697e5e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.271077ms Jan 31 01:31:49.839: INFO: Pod "pod-projected-configmaps-1922ee7c-6125-4d39-936a-356df5697e5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012318366s Jan 31 01:31:51.844: INFO: Pod "pod-projected-configmaps-1922ee7c-6125-4d39-936a-356df5697e5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016687049s STEP: Saw pod success Jan 31 01:31:51.844: INFO: Pod "pod-projected-configmaps-1922ee7c-6125-4d39-936a-356df5697e5e" satisfied condition "Succeeded or Failed" Jan 31 01:31:51.847: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-1922ee7c-6125-4d39-936a-356df5697e5e container agnhost-container: STEP: delete the pod Jan 31 01:31:51.906: INFO: Waiting for pod pod-projected-configmaps-1922ee7c-6125-4d39-936a-356df5697e5e to disappear Jan 31 01:31:51.932: INFO: Pod pod-projected-configmaps-1922ee7c-6125-4d39-936a-356df5697e5e no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:31:51.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2968" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":311,"completed":216,"skipped":4119,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:31:51.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1392 STEP: creating an pod Jan 31 01:31:52.004: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-7124 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.21 --restart=Never -- logs-generator --log-lines-total 100 --run-duration 20s' Jan 31 01:31:55.426: INFO: stderr: "" Jan 31 01:31:55.426: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Waiting for log generator to start. Jan 31 01:31:55.426: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Jan 31 01:31:55.426: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-7124" to be "running and ready, or succeeded" Jan 31 01:31:55.474: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 47.742085ms Jan 31 01:31:57.490: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.063228909s Jan 31 01:31:59.495: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.068206055s Jan 31 01:31:59.495: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Jan 31 01:31:59.495: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for matching strings Jan 31 01:31:59.495: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-7124 logs logs-generator logs-generator' Jan 31 01:31:59.599: INFO: stderr: "" Jan 31 01:31:59.599: INFO: stdout: "I0131 01:31:58.278163 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/8qw 572\nI0131 01:31:58.478371 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/pmw 590\nI0131 01:31:58.678318 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/b78h 203\nI0131 01:31:58.878323 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/p29 297\nI0131 01:31:59.078358 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/fmz8 510\nI0131 01:31:59.278365 1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/zlw6 311\nI0131 01:31:59.478302 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/mwd 253\n" STEP: limiting log lines Jan 31 01:31:59.599: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-7124 logs logs-generator logs-generator --tail=1' Jan 31 01:31:59.708: INFO: stderr: "" Jan 31 01:31:59.708: INFO: stdout: "I0131 01:31:59.678304 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/kcfl 363\n" Jan 31 01:31:59.708: INFO: got output "I0131 01:31:59.678304 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/kcfl 363\n" STEP: limiting log bytes Jan 31 01:31:59.708: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-7124 logs logs-generator logs-generator --limit-bytes=1' Jan 31 01:31:59.824: INFO: stderr: "" Jan 31 01:31:59.824: INFO: stdout: "I" Jan 31 01:31:59.824: INFO: got output "I" STEP: exposing timestamps Jan 31 01:31:59.824: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-7124 logs logs-generator logs-generator --tail=1 --timestamps' Jan 31 01:31:59.943: INFO: stderr: "" Jan 31 01:31:59.943: INFO: stdout: "2021-01-31T01:31:59.878491754Z I0131 01:31:59.878291 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/2jsj 537\n" Jan 31 01:31:59.943: INFO: got output "2021-01-31T01:31:59.878491754Z I0131 01:31:59.878291 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/2jsj 537\n" STEP: restricting to a time range Jan 31 01:32:02.443: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-7124 logs logs-generator logs-generator --since=1s' Jan 31 01:32:02.564: INFO: stderr: "" Jan 31 01:32:02.564: INFO: stdout: "I0131 01:32:01.678320 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/ns/pods/xgx 248\nI0131 01:32:01.878341 1 logs_generator.go:76] 18 GET /api/v1/namespaces/ns/pods/v9n 556\nI0131 01:32:02.078261 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/txdq 388\nI0131 01:32:02.278359 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/w54 316\nI0131 01:32:02.478352 1 logs_generator.go:76] 21 GET
/api/v1/namespaces/default/pods/7pm 372\n" Jan 31 01:32:02.564: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-7124 logs logs-generator logs-generator --since=24h' Jan 31 01:32:02.682: INFO: stderr: "" Jan 31 01:32:02.682: INFO: stdout: "I0131 01:31:58.278163 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/8qw 572\nI0131 01:31:58.478371 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/pmw 590\nI0131 01:31:58.678318 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/b78h 203\nI0131 01:31:58.878323 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/p29 297\nI0131 01:31:59.078358 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/fmz8 510\nI0131 01:31:59.278365 1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/zlw6 311\nI0131 01:31:59.478302 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/mwd 253\nI0131 01:31:59.678304 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/kcfl 363\nI0131 01:31:59.878291 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/2jsj 537\nI0131 01:32:00.078324 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/c9tw 555\nI0131 01:32:00.278276 1 logs_generator.go:76] 10 GET /api/v1/namespaces/ns/pods/5tbm 354\nI0131 01:32:00.478345 1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/l9g 533\nI0131 01:32:00.678301 1 logs_generator.go:76] 12 GET /api/v1/namespaces/ns/pods/46dk 390\nI0131 01:32:00.878311 1 logs_generator.go:76] 13 POST /api/v1/namespaces/default/pods/hkm 405\nI0131 01:32:01.078306 1 logs_generator.go:76] 14 GET /api/v1/namespaces/default/pods/vp9 471\nI0131 01:32:01.278305 1 logs_generator.go:76] 15 POST /api/v1/namespaces/kube-system/pods/wrkp 431\nI0131 01:32:01.478380 1 logs_generator.go:76] 16 GET /api/v1/namespaces/ns/pods/lk4d 452\nI0131 01:32:01.678320 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/ns/pods/xgx 248\nI0131 01:32:01.878341 1 logs_generator.go:76] 18 GET /api/v1/namespaces/ns/pods/v9n 556\nI0131 01:32:02.078261 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/txdq 388\nI0131 01:32:02.278359 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/w54 316\nI0131 01:32:02.478352 1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/7pm 372\n" [AfterEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1397 Jan 31 01:32:02.682: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-7124 delete pod logs-generator' Jan 31 01:32:21.097: INFO: stderr: "" Jan 31 01:32:21.097: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:32:21.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7124" for this suite. 
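For reference, the filtering steps exercised above (--tail, --limit-bytes, --timestamps, --since) map one-to-one onto fields of PodLogOptions in client-go. A minimal sketch, assuming a configured clientset and the logs-generator pod from this run; the helper name readPodLogs is illustrative, and the test applies each option separately rather than all at once:

package logutil

import (
	"context"
	"fmt"
	"io"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// readPodLogs mirrors the kubectl flags used in the test: --tail=1,
// --limit-bytes=1, --since=1s and --timestamps (combined here only for
// brevity).
func readPodLogs(ctx context.Context, cs *kubernetes.Clientset, ns, pod string) error {
	tail, limit, since := int64(1), int64(1), int64(1)
	opts := &corev1.PodLogOptions{
		Container:    "logs-generator",
		TailLines:    &tail,  // --tail=1
		LimitBytes:   &limit, // --limit-bytes=1
		SinceSeconds: &since, // --since=1s
		Timestamps:   true,   // --timestamps
	}
	stream, err := cs.CoreV1().Pods(ns).GetLogs(pod, opts).Stream(ctx)
	if err != nil {
		return err
	}
	defer stream.Close()
	data, err := io.ReadAll(stream)
	if err != nil {
		return err
	}
	fmt.Printf("%s", data)
	return nil
}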
• [SLOW TEST:29.183 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1389 should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":311,"completed":217,"skipped":4130,"failed":0} SSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:32:21.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:32:25.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2698" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":311,"completed":218,"skipped":4137,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:32:25.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: set up a multi version CRD Jan 31 01:32:25.366: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:32:44.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6174" for this suite. • [SLOW TEST:19.702 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":311,"completed":219,"skipped":4148,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:32:44.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Jan 31 01:32:45.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Jan 31 01:32:45.649: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu 
metadata:map[creationTimestamp:2021-01-31T01:32:45Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-01-31T01:32:45Z]] name:name1 resourceVersion:1130798 uid:2e5b0ba2-829d-4c65-a74d-4d5521316e96] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Jan 31 01:32:55.658: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-01-31T01:32:55Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-01-31T01:32:55Z]] name:name2 resourceVersion:1130824 uid:07432888-6bf2-41b9-9a11-60a40572bec2] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Jan 31 01:33:05.667: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-01-31T01:32:45Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-01-31T01:33:05Z]] name:name1 resourceVersion:1130845 uid:2e5b0ba2-829d-4c65-a74d-4d5521316e96] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Jan 31 01:33:15.676: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-01-31T01:32:55Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-01-31T01:33:15Z]] name:name2 resourceVersion:1130866 uid:07432888-6bf2-41b9-9a11-60a40572bec2] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Jan 31 01:33:25.686: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-01-31T01:32:45Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-01-31T01:33:05Z]] name:name1 resourceVersion:1130890 uid:2e5b0ba2-829d-4c65-a74d-4d5521316e96] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Jan 31 01:33:35.694: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-01-31T01:32:55Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-01-31T01:33:15Z]] name:name2 resourceVersion:1130912 uid:07432888-6bf2-41b9-9a11-60a40572bec2] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:33:46.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-1141" for this suite. • [SLOW TEST:61.233 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":311,"completed":220,"skipped":4159,"failed":0} SS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:33:46.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating service nodeport-test with type=NodePort in namespace services-3927 STEP: creating replication controller nodeport-test in namespace services-3927 I0131 01:33:46.371128 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-3927, replica count: 2 I0131 01:33:49.421525 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0131 01:33:52.421785 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 31 01:33:52.421: INFO: Creating new exec pod Jan 31 01:33:57.444: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-3927 exec execpodmfhnr -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Jan 31 01:33:57.632: INFO: stderr: "I0131 01:33:57.578540 2214 log.go:181] (0xc00003afd0) (0xc0008063c0) Create stream\nI0131 01:33:57.578611 2214 log.go:181] (0xc00003afd0) (0xc0008063c0) Stream added, broadcasting: 1\nI0131 01:33:57.581110 2214 log.go:181] (0xc00003afd0) Reply frame received for 1\nI0131 01:33:57.581143 2214 log.go:181] (0xc00003afd0) (0xc0004a8be0) Create stream\nI0131 01:33:57.581150 2214 log.go:181] (0xc00003afd0) (0xc0004a8be0) Stream added, broadcasting: 3\nI0131 01:33:57.582083 2214 log.go:181] (0xc00003afd0) Reply frame received for 3\nI0131 01:33:57.582116 2214 log.go:181] (0xc00003afd0) 
(0xc0004a8c80) Create stream\nI0131 01:33:57.582124 2214 log.go:181] (0xc00003afd0) (0xc0004a8c80) Stream added, broadcasting: 5\nI0131 01:33:57.583028 2214 log.go:181] (0xc00003afd0) Reply frame received for 5\nI0131 01:33:57.624954 2214 log.go:181] (0xc00003afd0) Data frame received for 5\nI0131 01:33:57.624989 2214 log.go:181] (0xc0004a8c80) (5) Data frame handling\nI0131 01:33:57.624999 2214 log.go:181] (0xc0004a8c80) (5) Data frame sent\nI0131 01:33:57.625007 2214 log.go:181] (0xc00003afd0) Data frame received for 5\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0131 01:33:57.625013 2214 log.go:181] (0xc0004a8c80) (5) Data frame handling\nI0131 01:33:57.625096 2214 log.go:181] (0xc00003afd0) Data frame received for 3\nI0131 01:33:57.625107 2214 log.go:181] (0xc0004a8be0) (3) Data frame handling\nI0131 01:33:57.626933 2214 log.go:181] (0xc00003afd0) Data frame received for 1\nI0131 01:33:57.626949 2214 log.go:181] (0xc0008063c0) (1) Data frame handling\nI0131 01:33:57.626957 2214 log.go:181] (0xc0008063c0) (1) Data frame sent\nI0131 01:33:57.626965 2214 log.go:181] (0xc00003afd0) (0xc0008063c0) Stream removed, broadcasting: 1\nI0131 01:33:57.627003 2214 log.go:181] (0xc00003afd0) Go away received\nI0131 01:33:57.627229 2214 log.go:181] (0xc00003afd0) (0xc0008063c0) Stream removed, broadcasting: 1\nI0131 01:33:57.627242 2214 log.go:181] (0xc00003afd0) (0xc0004a8be0) Stream removed, broadcasting: 3\nI0131 01:33:57.627247 2214 log.go:181] (0xc00003afd0) (0xc0004a8c80) Stream removed, broadcasting: 5\n" Jan 31 01:33:57.632: INFO: stdout: "" Jan 31 01:33:57.633: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-3927 exec execpodmfhnr -- /bin/sh -x -c nc -zv -t -w 2 10.96.62.192 80' Jan 31 01:33:57.831: INFO: stderr: "I0131 01:33:57.755656 2232 log.go:181] (0xc00003a0b0) (0xc0005a0000) Create stream\nI0131 01:33:57.755741 2232 log.go:181] (0xc00003a0b0) (0xc0005a0000) Stream added, broadcasting: 1\nI0131 01:33:57.757974 2232 log.go:181] (0xc00003a0b0) Reply frame received for 1\nI0131 01:33:57.758017 2232 log.go:181] (0xc00003a0b0) (0xc0005a00a0) Create stream\nI0131 01:33:57.758030 2232 log.go:181] (0xc00003a0b0) (0xc0005a00a0) Stream added, broadcasting: 3\nI0131 01:33:57.758926 2232 log.go:181] (0xc00003a0b0) Reply frame received for 3\nI0131 01:33:57.758958 2232 log.go:181] (0xc00003a0b0) (0xc000a16140) Create stream\nI0131 01:33:57.758967 2232 log.go:181] (0xc00003a0b0) (0xc000a16140) Stream added, broadcasting: 5\nI0131 01:33:57.759730 2232 log.go:181] (0xc00003a0b0) Reply frame received for 5\nI0131 01:33:57.825161 2232 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0131 01:33:57.825187 2232 log.go:181] (0xc0005a00a0) (3) Data frame handling\nI0131 01:33:57.825232 2232 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0131 01:33:57.825239 2232 log.go:181] (0xc000a16140) (5) Data frame handling\nI0131 01:33:57.825244 2232 log.go:181] (0xc000a16140) (5) Data frame sent\nI0131 01:33:57.825249 2232 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0131 01:33:57.825253 2232 log.go:181] (0xc000a16140) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.62.192 80\nConnection to 10.96.62.192 80 port [tcp/http] succeeded!\nI0131 01:33:57.826477 2232 log.go:181] (0xc00003a0b0) Data frame received for 1\nI0131 01:33:57.826507 2232 log.go:181] (0xc0005a0000) (1) Data frame handling\nI0131 01:33:57.826528 2232 log.go:181] (0xc0005a0000) (1) Data frame 
sent\nI0131 01:33:57.826550 2232 log.go:181] (0xc00003a0b0) (0xc0005a0000) Stream removed, broadcasting: 1\nI0131 01:33:57.826607 2232 log.go:181] (0xc00003a0b0) Go away received\nI0131 01:33:57.826954 2232 log.go:181] (0xc00003a0b0) (0xc0005a0000) Stream removed, broadcasting: 1\nI0131 01:33:57.826969 2232 log.go:181] (0xc00003a0b0) (0xc0005a00a0) Stream removed, broadcasting: 3\nI0131 01:33:57.826978 2232 log.go:181] (0xc00003a0b0) (0xc000a16140) Stream removed, broadcasting: 5\n" Jan 31 01:33:57.831: INFO: stdout: "" Jan 31 01:33:57.831: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-3927 exec execpodmfhnr -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 30617' Jan 31 01:33:58.044: INFO: stderr: "I0131 01:33:57.965718 2250 log.go:181] (0xc00003a420) (0xc0007a4d20) Create stream\nI0131 01:33:57.965780 2250 log.go:181] (0xc00003a420) (0xc0007a4d20) Stream added, broadcasting: 1\nI0131 01:33:57.968013 2250 log.go:181] (0xc00003a420) Reply frame received for 1\nI0131 01:33:57.968068 2250 log.go:181] (0xc00003a420) (0xc0005341e0) Create stream\nI0131 01:33:57.968089 2250 log.go:181] (0xc00003a420) (0xc0005341e0) Stream added, broadcasting: 3\nI0131 01:33:57.969206 2250 log.go:181] (0xc00003a420) Reply frame received for 3\nI0131 01:33:57.969234 2250 log.go:181] (0xc00003a420) (0xc000534dc0) Create stream\nI0131 01:33:57.969241 2250 log.go:181] (0xc00003a420) (0xc000534dc0) Stream added, broadcasting: 5\nI0131 01:33:57.970163 2250 log.go:181] (0xc00003a420) Reply frame received for 5\nI0131 01:33:58.036985 2250 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 01:33:58.037022 2250 log.go:181] (0xc0005341e0) (3) Data frame handling\nI0131 01:33:58.037188 2250 log.go:181] (0xc00003a420) Data frame received for 5\nI0131 01:33:58.037226 2250 log.go:181] (0xc000534dc0) (5) Data frame handling\nI0131 01:33:58.037253 2250 log.go:181] (0xc000534dc0) (5) Data frame sent\nI0131 01:33:58.037266 2250 log.go:181] (0xc00003a420) Data frame received for 5\nI0131 01:33:58.037280 2250 log.go:181] (0xc000534dc0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.14 30617\nConnection to 172.18.0.14 30617 port [tcp/30617] succeeded!\nI0131 01:33:58.038729 2250 log.go:181] (0xc00003a420) Data frame received for 1\nI0131 01:33:58.038751 2250 log.go:181] (0xc0007a4d20) (1) Data frame handling\nI0131 01:33:58.038766 2250 log.go:181] (0xc0007a4d20) (1) Data frame sent\nI0131 01:33:58.038781 2250 log.go:181] (0xc00003a420) (0xc0007a4d20) Stream removed, broadcasting: 1\nI0131 01:33:58.038796 2250 log.go:181] (0xc00003a420) Go away received\nI0131 01:33:58.039301 2250 log.go:181] (0xc00003a420) (0xc0007a4d20) Stream removed, broadcasting: 1\nI0131 01:33:58.039326 2250 log.go:181] (0xc00003a420) (0xc0005341e0) Stream removed, broadcasting: 3\nI0131 01:33:58.039338 2250 log.go:181] (0xc00003a420) (0xc000534dc0) Stream removed, broadcasting: 5\n" Jan 31 01:33:58.044: INFO: stdout: "" Jan 31 01:33:58.045: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-3927 exec execpodmfhnr -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.16 30617' Jan 31 01:33:58.272: INFO: stderr: "I0131 01:33:58.186620 2269 log.go:181] (0xc000141080) (0xc00072a3c0) Create stream\nI0131 01:33:58.186677 2269 log.go:181] (0xc000141080) (0xc00072a3c0) Stream added, broadcasting: 1\nI0131 01:33:58.192156 2269 log.go:181] (0xc000141080) Reply frame received for 1\nI0131 01:33:58.192207 2269 log.go:181] 
(0xc000141080) (0xc0009e6000) Create stream\nI0131 01:33:58.192228 2269 log.go:181] (0xc000141080) (0xc0009e6000) Stream added, broadcasting: 3\nI0131 01:33:58.193559 2269 log.go:181] (0xc000141080) Reply frame received for 3\nI0131 01:33:58.193607 2269 log.go:181] (0xc000141080) (0xc000afa000) Create stream\nI0131 01:33:58.193627 2269 log.go:181] (0xc000141080) (0xc000afa000) Stream added, broadcasting: 5\nI0131 01:33:58.195815 2269 log.go:181] (0xc000141080) Reply frame received for 5\nI0131 01:33:58.263296 2269 log.go:181] (0xc000141080) Data frame received for 5\nI0131 01:33:58.263324 2269 log.go:181] (0xc000afa000) (5) Data frame handling\nI0131 01:33:58.263341 2269 log.go:181] (0xc000afa000) (5) Data frame sent\nI0131 01:33:58.263354 2269 log.go:181] (0xc000141080) Data frame received for 5\nI0131 01:33:58.263366 2269 log.go:181] (0xc000afa000) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.16 30617\nConnection to 172.18.0.16 30617 port [tcp/30617] succeeded!\nI0131 01:33:58.263401 2269 log.go:181] (0xc000afa000) (5) Data frame sent\nI0131 01:33:58.263560 2269 log.go:181] (0xc000141080) Data frame received for 5\nI0131 01:33:58.263592 2269 log.go:181] (0xc000afa000) (5) Data frame handling\nI0131 01:33:58.263634 2269 log.go:181] (0xc000141080) Data frame received for 3\nI0131 01:33:58.263662 2269 log.go:181] (0xc0009e6000) (3) Data frame handling\nI0131 01:33:58.265319 2269 log.go:181] (0xc000141080) Data frame received for 1\nI0131 01:33:58.265337 2269 log.go:181] (0xc00072a3c0) (1) Data frame handling\nI0131 01:33:58.265347 2269 log.go:181] (0xc00072a3c0) (1) Data frame sent\nI0131 01:33:58.265359 2269 log.go:181] (0xc000141080) (0xc00072a3c0) Stream removed, broadcasting: 1\nI0131 01:33:58.265371 2269 log.go:181] (0xc000141080) Go away received\nI0131 01:33:58.265817 2269 log.go:181] (0xc000141080) (0xc00072a3c0) Stream removed, broadcasting: 1\nI0131 01:33:58.265841 2269 log.go:181] (0xc000141080) (0xc0009e6000) Stream removed, broadcasting: 3\nI0131 01:33:58.265856 2269 log.go:181] (0xc000141080) (0xc000afa000) Stream removed, broadcasting: 5\n" Jan 31 01:33:58.272: INFO: stdout: "" [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:33:58.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3927" for this suite. 
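The three nc -zv probes above verify the service on its ClusterIP and on each node's NodePort. The same reachability check can be sketched with nothing but the Go standard library; checkTCP is an illustrative helper, and the addresses are the ones observed in this run:

package main

import (
	"fmt"
	"net"
	"time"
)

// checkTCP mirrors `nc -zv -t -w 2 <host> <port>`: open a TCP connection
// with a 2-second timeout, then close it immediately.
func checkTCP(host string, port int32) error {
	addr := fmt.Sprintf("%s:%d", host, port)
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return fmt.Errorf("connection to %s failed: %w", addr, err)
	}
	return conn.Close()
}

func main() {
	// In a real check these would come from spec.clusterIP,
	// spec.ports[0].nodePort, and the nodes' addresses.
	probes := []struct {
		host string
		port int32
	}{
		{"10.96.62.192", 80},   // ClusterIP
		{"172.18.0.14", 30617}, // node 1 NodePort
		{"172.18.0.16", 30617}, // node 2 NodePort
	}
	for _, p := range probes {
		if err := checkTCP(p.host, p.port); err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Printf("connection to %s:%d succeeded\n", p.host, p.port)
	}
}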
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:12.066 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":311,"completed":221,"skipped":4161,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:33:58.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test downward api env vars Jan 31 01:33:58.392: INFO: Waiting up to 5m0s for pod "downward-api-c58112f8-8269-430a-b7cd-98d4dafb565c" in namespace "downward-api-915" to be "Succeeded or Failed" Jan 31 01:33:58.396: INFO: Pod "downward-api-c58112f8-8269-430a-b7cd-98d4dafb565c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.205846ms Jan 31 01:34:00.400: INFO: Pod "downward-api-c58112f8-8269-430a-b7cd-98d4dafb565c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007785733s Jan 31 01:34:02.406: INFO: Pod "downward-api-c58112f8-8269-430a-b7cd-98d4dafb565c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013812001s STEP: Saw pod success Jan 31 01:34:02.406: INFO: Pod "downward-api-c58112f8-8269-430a-b7cd-98d4dafb565c" satisfied condition "Succeeded or Failed" Jan 31 01:34:02.410: INFO: Trying to get logs from node latest-worker2 pod downward-api-c58112f8-8269-430a-b7cd-98d4dafb565c container dapi-container: STEP: delete the pod Jan 31 01:34:02.464: INFO: Waiting for pod downward-api-c58112f8-8269-430a-b7cd-98d4dafb565c to disappear Jan 31 01:34:02.477: INFO: Pod downward-api-c58112f8-8269-430a-b7cd-98d4dafb565c no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:34:02.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-915" for this suite. 
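The env var under test comes from the downward API: a fieldRef on status.hostIP resolves to the IP of the node the pod landed on. A minimal container sketch with illustrative names and image (the conformance test builds an equivalent spec):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// dapiContainer receives the scheduling node's IP as HOST_IP via the
// downward API; the test's container simply prints its environment.
var dapiContainer = corev1.Container{
	Name:    "dapi-container",
	Image:   "busybox:1.28",
	Command: []string{"sh", "-c", "env"},
	Env: []corev1.EnvVar{{
		Name: "HOST_IP",
		ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
		},
	}},
}

func main() { fmt.Printf("%+v\n", dapiContainer) }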
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":311,"completed":222,"skipped":4203,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:34:02.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating configMap with name configmap-test-volume-e52705d4-3af2-4977-8061-e4c22268c8dc STEP: Creating a pod to test consume configMaps Jan 31 01:34:02.611: INFO: Waiting up to 5m0s for pod "pod-configmaps-1abce1cc-96b8-479e-8799-d2a4d508113a" in namespace "configmap-319" to be "Succeeded or Failed" Jan 31 01:34:02.627: INFO: Pod "pod-configmaps-1abce1cc-96b8-479e-8799-d2a4d508113a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.069852ms Jan 31 01:34:04.646: INFO: Pod "pod-configmaps-1abce1cc-96b8-479e-8799-d2a4d508113a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034451484s Jan 31 01:34:06.651: INFO: Pod "pod-configmaps-1abce1cc-96b8-479e-8799-d2a4d508113a": Phase="Running", Reason="", readiness=true. Elapsed: 4.039862612s Jan 31 01:34:08.657: INFO: Pod "pod-configmaps-1abce1cc-96b8-479e-8799-d2a4d508113a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.04568137s STEP: Saw pod success Jan 31 01:34:08.657: INFO: Pod "pod-configmaps-1abce1cc-96b8-479e-8799-d2a4d508113a" satisfied condition "Succeeded or Failed" Jan 31 01:34:08.660: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-1abce1cc-96b8-479e-8799-d2a4d508113a container agnhost-container: STEP: delete the pod Jan 31 01:34:08.692: INFO: Waiting for pod pod-configmaps-1abce1cc-96b8-479e-8799-d2a4d508113a to disappear Jan 31 01:34:08.701: INFO: Pod pod-configmaps-1abce1cc-96b8-479e-8799-d2a4d508113a no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:34:08.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-319" for this suite. 
• [SLOW TEST:6.239 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":223,"skipped":4211,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:34:08.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 31 01:34:09.446: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 31 01:34:11.465: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747653649, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747653649, loc:(*time.Location)(0x79bd420)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747653649, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747653649, loc:(*time.Location)(0x79bd420)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 31 01:34:14.508: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Jan 31 01:34:14.531: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:34:14.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "webhook-8836" for this suite. STEP: Destroying namespace "webhook-8836-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:6.078 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":311,"completed":224,"skipped":4214,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] server version should find the server version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:34:14.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename server-version STEP: Waiting for a default service account to be provisioned in namespace [It] should find the server version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Request ServerVersion STEP: Confirm major version Jan 31 01:34:14.934: INFO: Major version: 1 STEP: Confirm minor version Jan 31 01:34:14.934: INFO: cleanMinorVersion: 21 Jan 31 01:34:14.934: INFO: Minor version: 21+ [AfterEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:34:14.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "server-version-3600" for this suite. 
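The major/minor check above is a single discovery call. A minimal sketch, assuming the kubeconfig path used throughout this run:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	// For this run: Major "1", Minor "21+", GitVersion "v1.21.0-alpha.0".
	fmt.Println(v.Major, v.Minor, v.GitVersion)
}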
•{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":311,"completed":225,"skipped":4229,"failed":0} S ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:34:14.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-5744 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-5744 STEP: creating replication controller externalsvc in namespace services-5744 I0131 01:34:15.932293 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-5744, replica count: 2 I0131 01:34:18.982722 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0131 01:34:21.983003 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Jan 31 01:34:22.011: INFO: Creating new exec pod Jan 31 01:34:26.052: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-5744 exec execpodkcpmm -- /bin/sh -x -c nslookup clusterip-service.services-5744.svc.cluster.local' Jan 31 01:34:26.348: INFO: stderr: "I0131 01:34:26.248116 2287 log.go:181] (0xc0006ed810) (0xc000836be0) Create stream\nI0131 01:34:26.248204 2287 log.go:181] (0xc0006ed810) (0xc000836be0) Stream added, broadcasting: 1\nI0131 01:34:26.250956 2287 log.go:181] (0xc0006ed810) Reply frame received for 1\nI0131 01:34:26.250997 2287 log.go:181] (0xc0006ed810) (0xc0005441e0) Create stream\nI0131 01:34:26.251017 2287 log.go:181] (0xc0006ed810) (0xc0005441e0) Stream added, broadcasting: 3\nI0131 01:34:26.252131 2287 log.go:181] (0xc0006ed810) Reply frame received for 3\nI0131 01:34:26.252185 2287 log.go:181] (0xc0006ed810) (0xc0006e41e0) Create stream\nI0131 01:34:26.252228 2287 log.go:181] (0xc0006ed810) (0xc0006e41e0) Stream added, broadcasting: 5\nI0131 01:34:26.253315 2287 log.go:181] (0xc0006ed810) Reply frame received for 5\nI0131 01:34:26.327855 2287 log.go:181] (0xc0006ed810) Data frame received for 5\nI0131 01:34:26.327878 2287 log.go:181] (0xc0006e41e0) (5) Data frame handling\nI0131 01:34:26.327888 2287 log.go:181] (0xc0006e41e0) (5) Data frame sent\n+ nslookup clusterip-service.services-5744.svc.cluster.local\nI0131 01:34:26.335520 2287 log.go:181] (0xc0006ed810) Data frame received for 3\nI0131 01:34:26.335553 2287 log.go:181] 
(0xc0005441e0) (3) Data frame handling\nI0131 01:34:26.335581 2287 log.go:181] (0xc0005441e0) (3) Data frame sent\nI0131 01:34:26.336443 2287 log.go:181] (0xc0006ed810) Data frame received for 3\nI0131 01:34:26.336479 2287 log.go:181] (0xc0005441e0) (3) Data frame handling\nI0131 01:34:26.336512 2287 log.go:181] (0xc0005441e0) (3) Data frame sent\nI0131 01:34:26.337084 2287 log.go:181] (0xc0006ed810) Data frame received for 5\nI0131 01:34:26.337182 2287 log.go:181] (0xc0006e41e0) (5) Data frame handling\nI0131 01:34:26.337599 2287 log.go:181] (0xc0006ed810) Data frame received for 3\nI0131 01:34:26.337622 2287 log.go:181] (0xc0005441e0) (3) Data frame handling\nI0131 01:34:26.343418 2287 log.go:181] (0xc0006ed810) Data frame received for 1\nI0131 01:34:26.343437 2287 log.go:181] (0xc000836be0) (1) Data frame handling\nI0131 01:34:26.343452 2287 log.go:181] (0xc000836be0) (1) Data frame sent\nI0131 01:34:26.343462 2287 log.go:181] (0xc0006ed810) (0xc000836be0) Stream removed, broadcasting: 1\nI0131 01:34:26.343475 2287 log.go:181] (0xc0006ed810) Go away received\nI0131 01:34:26.343992 2287 log.go:181] (0xc0006ed810) (0xc000836be0) Stream removed, broadcasting: 1\nI0131 01:34:26.344012 2287 log.go:181] (0xc0006ed810) (0xc0005441e0) Stream removed, broadcasting: 3\nI0131 01:34:26.344026 2287 log.go:181] (0xc0006ed810) (0xc0006e41e0) Stream removed, broadcasting: 5\n" Jan 31 01:34:26.348: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-5744.svc.cluster.local\tcanonical name = externalsvc.services-5744.svc.cluster.local.\nName:\texternalsvc.services-5744.svc.cluster.local\nAddress: 10.96.64.162\n\n" STEP: deleting ReplicationController externalsvc in namespace services-5744, will wait for the garbage collector to delete the pods Jan 31 01:34:26.408: INFO: Deleting ReplicationController externalsvc took: 6.652692ms Jan 31 01:34:27.008: INFO: Terminating ReplicationController externalsvc pods took: 600.202378ms Jan 31 01:35:21.186: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:35:21.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5744" for this suite. 
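The type flip above boils down to rewriting the Service spec: set the ExternalName type and target, and clear the cluster IP fields, which an ExternalName service may not carry. A hedged sketch under those assumptions; toExternalName is an illustrative helper, and the e2e framework drives this through its own wrapper:

package svcutil

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// toExternalName rewrites a ClusterIP service as an ExternalName alias for
// target, a DNS name such as externalsvc.services-5744.svc.cluster.local.
func toExternalName(ctx context.Context, cs *kubernetes.Clientset, ns, name, target string) error {
	svc, err := cs.CoreV1().Services(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	svc.Spec.Type = corev1.ServiceTypeExternalName
	svc.Spec.ExternalName = target
	svc.Spec.ClusterIP = "" // ExternalName services carry no cluster IP
	svc.Spec.ClusterIPs = nil
	svc.Spec.Ports = nil
	_, err = cs.CoreV1().Services(ns).Update(ctx, svc, metav1.UpdateOptions{})
	return err
}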
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:66.299 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":311,"completed":226,"skipped":4230,"failed":0} SS ------------------------------ [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:35:21.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating a ReplicationController STEP: waiting for RC to be added STEP: waiting for available Replicas STEP: patching ReplicationController STEP: waiting for RC to be modified STEP: patching ReplicationController status STEP: waiting for RC to be modified STEP: waiting for available Replicas STEP: fetching ReplicationController status STEP: patching ReplicationController scale STEP: waiting for RC to be modified STEP: waiting for ReplicationController's scale to be the max amount STEP: fetching ReplicationController; ensuring that it's patched STEP: updating ReplicationController status STEP: waiting for RC to be modified STEP: listing all ReplicationControllers STEP: checking that ReplicationController has expected values STEP: deleting ReplicationControllers by collection STEP: waiting for ReplicationController to have a DELETED watchEvent [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:35:29.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1658" for this suite. 
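Several of the lifecycle steps above, notably the scale patch, go through the scale subresource of core/v1 ReplicationControllers. A minimal sketch, with scaleRC as an illustrative helper name:

package rcutil

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// scaleRC drives the scale subresource, the same surface the test's
// scale patch and `kubectl scale rc` ultimately touch.
func scaleRC(ctx context.Context, cs *kubernetes.Clientset, ns, name string, replicas int32) error {
	scale, err := cs.CoreV1().ReplicationControllers(ns).GetScale(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = replicas
	_, err = cs.CoreV1().ReplicationControllers(ns).UpdateScale(ctx, name, scale, metav1.UpdateOptions{})
	return err
}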
• [SLOW TEST:8.218 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":311,"completed":227,"skipped":4232,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:35:29.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test emptydir 0777 on tmpfs Jan 31 01:35:29.585: INFO: Waiting up to 5m0s for pod "pod-73add44a-74c7-4e03-b6f3-ac3bce16c105" in namespace "emptydir-8902" to be "Succeeded or Failed" Jan 31 01:35:29.588: INFO: Pod "pod-73add44a-74c7-4e03-b6f3-ac3bce16c105": Phase="Pending", Reason="", readiness=false. Elapsed: 2.839146ms Jan 31 01:35:31.592: INFO: Pod "pod-73add44a-74c7-4e03-b6f3-ac3bce16c105": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006869214s Jan 31 01:35:33.597: INFO: Pod "pod-73add44a-74c7-4e03-b6f3-ac3bce16c105": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011444946s STEP: Saw pod success Jan 31 01:35:33.597: INFO: Pod "pod-73add44a-74c7-4e03-b6f3-ac3bce16c105" satisfied condition "Succeeded or Failed" Jan 31 01:35:33.600: INFO: Trying to get logs from node latest-worker pod pod-73add44a-74c7-4e03-b6f3-ac3bce16c105 container test-container: STEP: delete the pod Jan 31 01:35:33.643: INFO: Waiting for pod pod-73add44a-74c7-4e03-b6f3-ac3bce16c105 to disappear Jan 31 01:35:33.647: INFO: Pod pod-73add44a-74c7-4e03-b6f3-ac3bce16c105 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:35:33.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8902" for this suite. 
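The tmpfs in this test's name is an emptyDir with medium Memory: the volume is RAM-backed and its contents disappear with the pod. A sketch of the volume, names illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// Medium "Memory" backs the emptyDir with tmpfs instead of node disk.
var tmpfsVolume = corev1.Volume{
	Name: "test-volume",
	VolumeSource: corev1.VolumeSource{
		EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
	},
}

func main() { fmt.Printf("%+v\n", tmpfsVolume) }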
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":228,"skipped":4246,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:35:33.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating configMap with name projected-configmap-test-volume-map-008a56fa-8738-4919-a5c5-192b5150a3b4 STEP: Creating a pod to test consume configMaps Jan 31 01:35:33.787: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3f5f9139-9090-446f-9533-9bea2c954cce" in namespace "projected-7006" to be "Succeeded or Failed" Jan 31 01:35:33.803: INFO: Pod "pod-projected-configmaps-3f5f9139-9090-446f-9533-9bea2c954cce": Phase="Pending", Reason="", readiness=false. Elapsed: 15.622248ms Jan 31 01:35:35.834: INFO: Pod "pod-projected-configmaps-3f5f9139-9090-446f-9533-9bea2c954cce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046586087s Jan 31 01:35:37.922: INFO: Pod "pod-projected-configmaps-3f5f9139-9090-446f-9533-9bea2c954cce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.135223323s Jan 31 01:35:39.927: INFO: Pod "pod-projected-configmaps-3f5f9139-9090-446f-9533-9bea2c954cce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.140104278s STEP: Saw pod success Jan 31 01:35:39.927: INFO: Pod "pod-projected-configmaps-3f5f9139-9090-446f-9533-9bea2c954cce" satisfied condition "Succeeded or Failed" Jan 31 01:35:39.930: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-3f5f9139-9090-446f-9533-9bea2c954cce container agnhost-container: STEP: delete the pod Jan 31 01:35:39.951: INFO: Waiting for pod pod-projected-configmaps-3f5f9139-9090-446f-9533-9bea2c954cce to disappear Jan 31 01:35:39.973: INFO: Pod pod-projected-configmaps-3f5f9139-9090-446f-9533-9bea2c954cce no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:35:39.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7006" for this suite. 
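The projected-configMap spec above remaps a key to a new file name and consumes it as a non-root user. A sketch of the same shape; all names and the uid are assumptions:

kubectl create configmap demo-config --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000            # consume the volume as non-root, as above
  containers:
  - name: agnhost-container
    image: busybox
    command: ["cat", "/etc/projected/mapped-name"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: demo-config
          items:               # remap key data-1 to the file mapped-name
          - key: data-1
            path: mapped-name
EOF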
• [SLOW TEST:6.326 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":311,"completed":229,"skipped":4255,"failed":0} S ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:35:39.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test downward API volume plugin Jan 31 01:35:40.074: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ddaa8709-2ef2-4200-abf2-3e8ccc7db168" in namespace "downward-api-6616" to be "Succeeded or Failed" Jan 31 01:35:40.090: INFO: Pod "downwardapi-volume-ddaa8709-2ef2-4200-abf2-3e8ccc7db168": Phase="Pending", Reason="", readiness=false. Elapsed: 15.981765ms Jan 31 01:35:42.096: INFO: Pod "downwardapi-volume-ddaa8709-2ef2-4200-abf2-3e8ccc7db168": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021458197s Jan 31 01:35:44.099: INFO: Pod "downwardapi-volume-ddaa8709-2ef2-4200-abf2-3e8ccc7db168": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024990066s STEP: Saw pod success Jan 31 01:35:44.099: INFO: Pod "downwardapi-volume-ddaa8709-2ef2-4200-abf2-3e8ccc7db168" satisfied condition "Succeeded or Failed" Jan 31 01:35:44.102: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-ddaa8709-2ef2-4200-abf2-3e8ccc7db168 container client-container: STEP: delete the pod Jan 31 01:35:44.262: INFO: Waiting for pod downwardapi-volume-ddaa8709-2ef2-4200-abf2-3e8ccc7db168 to disappear Jan 31 01:35:44.294: INFO: Pod downwardapi-volume-ddaa8709-2ef2-4200-abf2-3e8ccc7db168 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:35:44.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6616" for this suite. 
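"should set mode on item file" checks that a per-item mode on a downwardAPI volume is honored on disk. A sketch with assumed names; 0400 stands in for whatever mode the suite actually asserts:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        mode: 0400             # per-item file mode under test
        fieldRef:
          fieldPath: metadata.name
EOF
kubectl logs downward-mode-demo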
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":230,"skipped":4256,"failed":0} SS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:35:44.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating pod liveness-6f6a9ecb-9352-450e-b3d0-b035ce59748c in namespace container-probe-861 Jan 31 01:35:48.479: INFO: Started pod liveness-6f6a9ecb-9352-450e-b3d0-b035ce59748c in namespace container-probe-861 STEP: checking the pod's current state and verifying that restartCount is present Jan 31 01:35:48.482: INFO: Initial restart count of pod liveness-6f6a9ecb-9352-450e-b3d0-b035ce59748c is 0 Jan 31 01:36:08.536: INFO: Restart count of pod container-probe-861/liveness-6f6a9ecb-9352-450e-b3d0-b035ce59748c is now 1 (20.054402143s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:36:08.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-861" for this suite. 
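The probe spec above watches restartCount climb once /healthz starts failing (one restart about 20s in). The suite runs its own liveness server; the sketch below substitutes the documented registry.k8s.io/liveness image, which serves /healthz for roughly 10s and then returns 500:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo
spec:
  containers:
  - name: liveness
    image: registry.k8s.io/liveness   # healthy ~10s, then fails /healthz
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
EOF
# Once the probe fails, the kubelet restarts the container:
kubectl get pod liveness-http-demo \
  -o jsonpath='{.status.containerStatuses[0].restartCount}'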
• [SLOW TEST:24.281 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":311,"completed":231,"skipped":4258,"failed":0} S ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:36:08.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Jan 31 01:36:08.667: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:36:21.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7190" for this suite. 
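"should be submitted and removed" drives a watch and verifies the ADDED/MODIFIED/DELETED events around a graceful delete. Roughly the same flow by hand (pod name is illustrative; run the watch in a second terminal):

# Terminal 1: stream watch events, including their types.
kubectl get pods -w --output-watch-events
# Terminal 2: submit a pod, then delete it gracefully.
kubectl run watch-demo --image=busybox --restart=Never -- sleep 3600
kubectl delete pod watch-demo --grace-period=30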
• [SLOW TEST:12.602 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":311,"completed":232,"skipped":4259,"failed":0} [sig-network] Service endpoints latency should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:36:21.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Jan 31 01:36:21.294: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-6014 I0131 01:36:21.313146 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-6014, replica count: 1 I0131 01:36:22.363574 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0131 01:36:23.363818 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0131 01:36:24.363976 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0131 01:36:25.364224 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 31 01:36:25.517: INFO: Created: latency-svc-c4pzr Jan 31 01:36:25.548: INFO: Got endpoints: latency-svc-c4pzr [83.924205ms] Jan 31 01:36:25.677: INFO: Created: latency-svc-k9pcj Jan 31 01:36:25.686: INFO: Got endpoints: latency-svc-k9pcj [137.931942ms] Jan 31 01:36:25.727: INFO: Created: latency-svc-4rl6p Jan 31 01:36:25.755: INFO: Got endpoints: latency-svc-4rl6p [206.779033ms] Jan 31 01:36:25.808: INFO: Created: latency-svc-wz9lq Jan 31 01:36:25.831: INFO: Got endpoints: latency-svc-wz9lq [283.142612ms] Jan 31 01:36:25.879: INFO: Created: latency-svc-gbg5q Jan 31 01:36:25.904: INFO: Got endpoints: latency-svc-gbg5q [355.839614ms] Jan 31 01:36:25.979: INFO: Created: latency-svc-sf2m7 Jan 31 01:36:25.994: INFO: Got endpoints: latency-svc-sf2m7 [445.528268ms] Jan 31 01:36:26.009: INFO: Created: latency-svc-h5jjx Jan 31 01:36:26.024: INFO: Got endpoints: latency-svc-h5jjx [475.599249ms] Jan 31 01:36:26.103: INFO: Created: latency-svc-lqhxq Jan 31 01:36:26.149: INFO: Got endpoints: latency-svc-lqhxq [601.049551ms] Jan 31 01:36:26.191: INFO: Created: latency-svc-gv7cn Jan 31 01:36:26.243: INFO: Got endpoints: latency-svc-gv7cn [694.476021ms] Jan 31 01:36:26.273: INFO: Created: latency-svc-l6cjr Jan 31 01:36:26.283: INFO: Got endpoints: latency-svc-l6cjr [735.022337ms] Jan 31 01:36:26.366: INFO: 
Created: latency-svc-w2xfj Jan 31 01:36:26.396: INFO: Created: latency-svc-7pnr5 Jan 31 01:36:26.396: INFO: Got endpoints: latency-svc-w2xfj [848.153838ms] Jan 31 01:36:26.422: INFO: Got endpoints: latency-svc-7pnr5 [873.573886ms] Jan 31 01:36:26.449: INFO: Created: latency-svc-q425z Jan 31 01:36:26.516: INFO: Got endpoints: latency-svc-q425z [967.418945ms] Jan 31 01:36:26.564: INFO: Created: latency-svc-nfscf Jan 31 01:36:26.574: INFO: Got endpoints: latency-svc-nfscf [1.026470659s] Jan 31 01:36:26.593: INFO: Created: latency-svc-dwkmh Jan 31 01:36:26.605: INFO: Got endpoints: latency-svc-dwkmh [1.05617963s] Jan 31 01:36:26.693: INFO: Created: latency-svc-8w8l9 Jan 31 01:36:26.701: INFO: Got endpoints: latency-svc-8w8l9 [1.152214766s] Jan 31 01:36:26.722: INFO: Created: latency-svc-cfftr Jan 31 01:36:26.743: INFO: Got endpoints: latency-svc-cfftr [1.056678026s] Jan 31 01:36:26.799: INFO: Created: latency-svc-2xhgq Jan 31 01:36:26.848: INFO: Got endpoints: latency-svc-2xhgq [1.093065302s] Jan 31 01:36:26.850: INFO: Created: latency-svc-wxq2c Jan 31 01:36:26.875: INFO: Got endpoints: latency-svc-wxq2c [1.043703068s] Jan 31 01:36:26.960: INFO: Created: latency-svc-d5jsd Jan 31 01:36:26.973: INFO: Got endpoints: latency-svc-d5jsd [1.068736718s] Jan 31 01:36:26.996: INFO: Created: latency-svc-xr9ch Jan 31 01:36:27.091: INFO: Got endpoints: latency-svc-xr9ch [1.097026875s] Jan 31 01:36:27.118: INFO: Created: latency-svc-z6sws Jan 31 01:36:27.148: INFO: Got endpoints: latency-svc-z6sws [1.124503594s] Jan 31 01:36:27.184: INFO: Created: latency-svc-nt2mq Jan 31 01:36:27.216: INFO: Got endpoints: latency-svc-nt2mq [1.066432652s] Jan 31 01:36:27.256: INFO: Created: latency-svc-st44g Jan 31 01:36:27.266: INFO: Got endpoints: latency-svc-st44g [1.023808054s] Jan 31 01:36:27.310: INFO: Created: latency-svc-l2vf7 Jan 31 01:36:27.342: INFO: Got endpoints: latency-svc-l2vf7 [1.058322931s] Jan 31 01:36:27.364: INFO: Created: latency-svc-v5gj5 Jan 31 01:36:27.388: INFO: Got endpoints: latency-svc-v5gj5 [991.458982ms] Jan 31 01:36:27.433: INFO: Created: latency-svc-fs2p2 Jan 31 01:36:27.467: INFO: Got endpoints: latency-svc-fs2p2 [1.045306625s] Jan 31 01:36:27.506: INFO: Created: latency-svc-v9xxp Jan 31 01:36:27.521: INFO: Got endpoints: latency-svc-v9xxp [1.00576788s] Jan 31 01:36:27.606: INFO: Created: latency-svc-48bvk Jan 31 01:36:27.635: INFO: Created: latency-svc-wrcsp Jan 31 01:36:27.636: INFO: Got endpoints: latency-svc-48bvk [1.06106431s] Jan 31 01:36:27.661: INFO: Got endpoints: latency-svc-wrcsp [1.056024692s] Jan 31 01:36:27.703: INFO: Created: latency-svc-s7qkq Jan 31 01:36:27.731: INFO: Got endpoints: latency-svc-s7qkq [1.030367754s] Jan 31 01:36:27.772: INFO: Created: latency-svc-bsgss Jan 31 01:36:27.788: INFO: Got endpoints: latency-svc-bsgss [1.045276177s] Jan 31 01:36:27.808: INFO: Created: latency-svc-sxdsk Jan 31 01:36:27.824: INFO: Got endpoints: latency-svc-sxdsk [975.591668ms] Jan 31 01:36:27.875: INFO: Created: latency-svc-pxzgr Jan 31 01:36:27.895: INFO: Got endpoints: latency-svc-pxzgr [1.020072184s] Jan 31 01:36:27.928: INFO: Created: latency-svc-6ggx2 Jan 31 01:36:27.944: INFO: Got endpoints: latency-svc-6ggx2 [970.528621ms] Jan 31 01:36:28.013: INFO: Created: latency-svc-vcdgv Jan 31 01:36:28.022: INFO: Got endpoints: latency-svc-vcdgv [930.694234ms] Jan 31 01:36:28.076: INFO: Created: latency-svc-rv8vg Jan 31 01:36:28.090: INFO: Got endpoints: latency-svc-rv8vg [942.051948ms] Jan 31 01:36:28.174: INFO: Created: latency-svc-zj8kb Jan 31 01:36:28.187: INFO: Got endpoints: 
latency-svc-zj8kb [970.692012ms] Jan 31 01:36:28.210: INFO: Created: latency-svc-j8fxp Jan 31 01:36:28.235: INFO: Got endpoints: latency-svc-j8fxp [968.224338ms] Jan 31 01:36:28.320: INFO: Created: latency-svc-9tvjk Jan 31 01:36:28.348: INFO: Got endpoints: latency-svc-9tvjk [1.006643127s] Jan 31 01:36:28.349: INFO: Created: latency-svc-jw6r2 Jan 31 01:36:28.378: INFO: Got endpoints: latency-svc-jw6r2 [990.294715ms] Jan 31 01:36:28.408: INFO: Created: latency-svc-mdsxl Jan 31 01:36:28.462: INFO: Got endpoints: latency-svc-mdsxl [994.522395ms] Jan 31 01:36:28.483: INFO: Created: latency-svc-56qwq Jan 31 01:36:28.495: INFO: Got endpoints: latency-svc-56qwq [973.388195ms] Jan 31 01:36:28.514: INFO: Created: latency-svc-f42cd Jan 31 01:36:28.525: INFO: Got endpoints: latency-svc-f42cd [889.077385ms] Jan 31 01:36:28.546: INFO: Created: latency-svc-q42kl Jan 31 01:36:28.561: INFO: Got endpoints: latency-svc-q42kl [900.06367ms] Jan 31 01:36:28.612: INFO: Created: latency-svc-kn8lz Jan 31 01:36:28.642: INFO: Got endpoints: latency-svc-kn8lz [911.201478ms] Jan 31 01:36:28.681: INFO: Created: latency-svc-5k5v6 Jan 31 01:36:28.693: INFO: Got endpoints: latency-svc-5k5v6 [904.737235ms] Jan 31 01:36:28.769: INFO: Created: latency-svc-rktvn Jan 31 01:36:28.770: INFO: Created: latency-svc-vbvz5 Jan 31 01:36:28.793: INFO: Got endpoints: latency-svc-vbvz5 [897.722311ms] Jan 31 01:36:28.793: INFO: Got endpoints: latency-svc-rktvn [969.441691ms] Jan 31 01:36:28.793: INFO: Created: latency-svc-s9vfp Jan 31 01:36:28.816: INFO: Got endpoints: latency-svc-s9vfp [872.086854ms] Jan 31 01:36:28.918: INFO: Created: latency-svc-64n2r Jan 31 01:36:28.940: INFO: Got endpoints: latency-svc-64n2r [917.98452ms] Jan 31 01:36:28.940: INFO: Created: latency-svc-j895r Jan 31 01:36:28.970: INFO: Got endpoints: latency-svc-j895r [879.151789ms] Jan 31 01:36:28.997: INFO: Created: latency-svc-d86bw Jan 31 01:36:29.007: INFO: Got endpoints: latency-svc-d86bw [820.441722ms] Jan 31 01:36:29.057: INFO: Created: latency-svc-x99rb Jan 31 01:36:29.070: INFO: Got endpoints: latency-svc-x99rb [835.054541ms] Jan 31 01:36:29.125: INFO: Created: latency-svc-lrmt9 Jan 31 01:36:29.193: INFO: Got endpoints: latency-svc-lrmt9 [844.043065ms] Jan 31 01:36:29.215: INFO: Created: latency-svc-xxfdd Jan 31 01:36:29.226: INFO: Got endpoints: latency-svc-xxfdd [847.776621ms] Jan 31 01:36:29.266: INFO: Created: latency-svc-pgjhq Jan 31 01:36:29.280: INFO: Got endpoints: latency-svc-pgjhq [817.517652ms] Jan 31 01:36:29.324: INFO: Created: latency-svc-ng5r4 Jan 31 01:36:29.347: INFO: Got endpoints: latency-svc-ng5r4 [852.164152ms] Jan 31 01:36:29.347: INFO: Created: latency-svc-f7sd7 Jan 31 01:36:29.377: INFO: Got endpoints: latency-svc-f7sd7 [852.263468ms] Jan 31 01:36:29.407: INFO: Created: latency-svc-7sgqw Jan 31 01:36:29.421: INFO: Got endpoints: latency-svc-7sgqw [860.457374ms] Jan 31 01:36:29.463: INFO: Created: latency-svc-7sjgs Jan 31 01:36:29.483: INFO: Got endpoints: latency-svc-7sjgs [840.223881ms] Jan 31 01:36:29.483: INFO: Created: latency-svc-l6sng Jan 31 01:36:29.500: INFO: Got endpoints: latency-svc-l6sng [806.943964ms] Jan 31 01:36:29.521: INFO: Created: latency-svc-kxnp5 Jan 31 01:36:29.535: INFO: Got endpoints: latency-svc-kxnp5 [741.427709ms] Jan 31 01:36:29.617: INFO: Created: latency-svc-gz5hn Jan 31 01:36:29.662: INFO: Got endpoints: latency-svc-gz5hn [869.140676ms] Jan 31 01:36:29.664: INFO: Created: latency-svc-ltvn8 Jan 31 01:36:29.704: INFO: Got endpoints: latency-svc-ltvn8 [887.897971ms] Jan 31 01:36:29.769: INFO: Created: 
latency-svc-2ldz4 Jan 31 01:36:29.810: INFO: Got endpoints: latency-svc-2ldz4 [869.90669ms] Jan 31 01:36:29.836: INFO: Created: latency-svc-cmgqg Jan 31 01:36:29.848: INFO: Got endpoints: latency-svc-cmgqg [878.619032ms] Jan 31 01:36:29.930: INFO: Created: latency-svc-5r76d Jan 31 01:36:29.945: INFO: Got endpoints: latency-svc-5r76d [938.022721ms] Jan 31 01:36:29.959: INFO: Created: latency-svc-kc6w9 Jan 31 01:36:29.969: INFO: Got endpoints: latency-svc-kc6w9 [899.258755ms] Jan 31 01:36:29.982: INFO: Created: latency-svc-f448b Jan 31 01:36:30.001: INFO: Got endpoints: latency-svc-f448b [808.051659ms] Jan 31 01:36:30.065: INFO: Created: latency-svc-jjjxz Jan 31 01:36:30.100: INFO: Created: latency-svc-mlvhf Jan 31 01:36:30.101: INFO: Got endpoints: latency-svc-jjjxz [874.699632ms] Jan 31 01:36:30.115: INFO: Got endpoints: latency-svc-mlvhf [835.822907ms] Jan 31 01:36:30.206: INFO: Created: latency-svc-xhmff Jan 31 01:36:30.218: INFO: Got endpoints: latency-svc-xhmff [870.73045ms] Jan 31 01:36:30.262: INFO: Created: latency-svc-qxj26 Jan 31 01:36:30.294: INFO: Got endpoints: latency-svc-qxj26 [916.969444ms] Jan 31 01:36:30.310: INFO: Created: latency-svc-ln6kh Jan 31 01:36:30.355: INFO: Got endpoints: latency-svc-ln6kh [933.535044ms] Jan 31 01:36:30.391: INFO: Created: latency-svc-vbvfs Jan 31 01:36:30.437: INFO: Got endpoints: latency-svc-vbvfs [954.706053ms] Jan 31 01:36:30.503: INFO: Created: latency-svc-84px8 Jan 31 01:36:30.520: INFO: Got endpoints: latency-svc-84px8 [1.020049402s] Jan 31 01:36:30.570: INFO: Created: latency-svc-wnhtq Jan 31 01:36:30.613: INFO: Got endpoints: latency-svc-wnhtq [1.078148377s] Jan 31 01:36:30.613: INFO: Created: latency-svc-vn8sk Jan 31 01:36:30.630: INFO: Got endpoints: latency-svc-vn8sk [967.745231ms] Jan 31 01:36:30.734: INFO: Created: latency-svc-27nwc Jan 31 01:36:30.745: INFO: Got endpoints: latency-svc-27nwc [1.041191941s] Jan 31 01:36:30.769: INFO: Created: latency-svc-gdfp5 Jan 31 01:36:30.784: INFO: Got endpoints: latency-svc-gdfp5 [973.961106ms] Jan 31 01:36:30.818: INFO: Created: latency-svc-7bkl2 Jan 31 01:36:30.905: INFO: Got endpoints: latency-svc-7bkl2 [1.056869583s] Jan 31 01:36:30.913: INFO: Created: latency-svc-ck9v7 Jan 31 01:36:30.918: INFO: Got endpoints: latency-svc-ck9v7 [972.973672ms] Jan 31 01:36:30.955: INFO: Created: latency-svc-lkpbb Jan 31 01:36:30.978: INFO: Got endpoints: latency-svc-lkpbb [1.009273215s] Jan 31 01:36:31.004: INFO: Created: latency-svc-zq5sg Jan 31 01:36:31.061: INFO: Got endpoints: latency-svc-zq5sg [1.059907994s] Jan 31 01:36:31.063: INFO: Created: latency-svc-fhhzg Jan 31 01:36:31.068: INFO: Got endpoints: latency-svc-fhhzg [967.023774ms] Jan 31 01:36:31.102: INFO: Created: latency-svc-48mlk Jan 31 01:36:31.110: INFO: Got endpoints: latency-svc-48mlk [994.877674ms] Jan 31 01:36:31.129: INFO: Created: latency-svc-7tg7z Jan 31 01:36:31.134: INFO: Got endpoints: latency-svc-7tg7z [915.960052ms] Jan 31 01:36:31.160: INFO: Created: latency-svc-t87qf Jan 31 01:36:31.216: INFO: Got endpoints: latency-svc-t87qf [921.455051ms] Jan 31 01:36:31.234: INFO: Created: latency-svc-mhkvq Jan 31 01:36:31.251: INFO: Got endpoints: latency-svc-mhkvq [895.650529ms] Jan 31 01:36:31.270: INFO: Created: latency-svc-xpmk6 Jan 31 01:36:31.300: INFO: Got endpoints: latency-svc-xpmk6 [862.395805ms] Jan 31 01:36:31.358: INFO: Created: latency-svc-ncz2r Jan 31 01:36:31.365: INFO: Got endpoints: latency-svc-ncz2r [845.113948ms] Jan 31 01:36:31.388: INFO: Created: latency-svc-k4kmr Jan 31 01:36:31.400: INFO: Got endpoints: 
latency-svc-k4kmr [787.509079ms] Jan 31 01:36:31.424: INFO: Created: latency-svc-2l2nt Jan 31 01:36:31.452: INFO: Got endpoints: latency-svc-2l2nt [821.674157ms] Jan 31 01:36:31.498: INFO: Created: latency-svc-mxv45 Jan 31 01:36:31.510: INFO: Got endpoints: latency-svc-mxv45 [765.212528ms] Jan 31 01:36:31.535: INFO: Created: latency-svc-tf48h Jan 31 01:36:31.547: INFO: Got endpoints: latency-svc-tf48h [763.811077ms] Jan 31 01:36:31.579: INFO: Created: latency-svc-cvvx5 Jan 31 01:36:31.597: INFO: Got endpoints: latency-svc-cvvx5 [691.696012ms] Jan 31 01:36:31.647: INFO: Created: latency-svc-6qv7t Jan 31 01:36:31.674: INFO: Got endpoints: latency-svc-6qv7t [755.365962ms] Jan 31 01:36:31.702: INFO: Created: latency-svc-ssk48 Jan 31 01:36:31.738: INFO: Got endpoints: latency-svc-ssk48 [759.524397ms] Jan 31 01:36:31.797: INFO: Created: latency-svc-mwrwq Jan 31 01:36:31.832: INFO: Got endpoints: latency-svc-mwrwq [771.023719ms] Jan 31 01:36:31.832: INFO: Created: latency-svc-gzv9c Jan 31 01:36:31.855: INFO: Got endpoints: latency-svc-gzv9c [787.065088ms] Jan 31 01:36:31.947: INFO: Created: latency-svc-l4wq5 Jan 31 01:36:31.975: INFO: Got endpoints: latency-svc-l4wq5 [864.247613ms] Jan 31 01:36:31.976: INFO: Created: latency-svc-9x6qv Jan 31 01:36:31.999: INFO: Got endpoints: latency-svc-9x6qv [864.997762ms] Jan 31 01:36:32.024: INFO: Created: latency-svc-hhr7h Jan 31 01:36:32.036: INFO: Got endpoints: latency-svc-hhr7h [820.008274ms] Jan 31 01:36:32.128: INFO: Created: latency-svc-4smmj Jan 31 01:36:32.176: INFO: Got endpoints: latency-svc-4smmj [925.009698ms] Jan 31 01:36:32.177: INFO: Created: latency-svc-nvlgb Jan 31 01:36:32.215: INFO: Got endpoints: latency-svc-nvlgb [915.369456ms] Jan 31 01:36:32.271: INFO: Created: latency-svc-s8mn6 Jan 31 01:36:32.296: INFO: Created: latency-svc-f88qr Jan 31 01:36:32.297: INFO: Got endpoints: latency-svc-s8mn6 [931.659157ms] Jan 31 01:36:32.317: INFO: Got endpoints: latency-svc-f88qr [916.537011ms] Jan 31 01:36:32.338: INFO: Created: latency-svc-zqls4 Jan 31 01:36:32.350: INFO: Got endpoints: latency-svc-zqls4 [898.58155ms] Jan 31 01:36:32.368: INFO: Created: latency-svc-mjg44 Jan 31 01:36:32.396: INFO: Got endpoints: latency-svc-mjg44 [885.352783ms] Jan 31 01:36:32.408: INFO: Created: latency-svc-p85bg Jan 31 01:36:32.429: INFO: Got endpoints: latency-svc-p85bg [881.17344ms] Jan 31 01:36:32.449: INFO: Created: latency-svc-xnvs4 Jan 31 01:36:32.460: INFO: Got endpoints: latency-svc-xnvs4 [863.26555ms] Jan 31 01:36:32.479: INFO: Created: latency-svc-zcs8q Jan 31 01:36:32.539: INFO: Got endpoints: latency-svc-zcs8q [865.126928ms] Jan 31 01:36:32.554: INFO: Created: latency-svc-x2l89 Jan 31 01:36:32.569: INFO: Got endpoints: latency-svc-x2l89 [830.919276ms] Jan 31 01:36:32.591: INFO: Created: latency-svc-vlwb2 Jan 31 01:36:32.605: INFO: Got endpoints: latency-svc-vlwb2 [772.854566ms] Jan 31 01:36:32.623: INFO: Created: latency-svc-xlxxk Jan 31 01:36:32.677: INFO: Got endpoints: latency-svc-xlxxk [821.671077ms] Jan 31 01:36:32.695: INFO: Created: latency-svc-fpkt8 Jan 31 01:36:32.710: INFO: Got endpoints: latency-svc-fpkt8 [735.228063ms] Jan 31 01:36:32.734: INFO: Created: latency-svc-s2bdh Jan 31 01:36:32.758: INFO: Got endpoints: latency-svc-s2bdh [758.577027ms] Jan 31 01:36:32.839: INFO: Created: latency-svc-kmjfq Jan 31 01:36:32.863: INFO: Got endpoints: latency-svc-kmjfq [827.688517ms] Jan 31 01:36:32.864: INFO: Created: latency-svc-zj6x4 Jan 31 01:36:32.877: INFO: Got endpoints: latency-svc-zj6x4 [701.647826ms] Jan 31 01:36:32.902: INFO: Created: 
latency-svc-z7z8s Jan 31 01:36:32.926: INFO: Got endpoints: latency-svc-z7z8s [710.424901ms] Jan 31 01:36:32.970: INFO: Created: latency-svc-vwgph Jan 31 01:36:33.004: INFO: Got endpoints: latency-svc-vwgph [707.293734ms] Jan 31 01:36:33.005: INFO: Created: latency-svc-ms5rn Jan 31 01:36:33.037: INFO: Got endpoints: latency-svc-ms5rn [719.947709ms] Jan 31 01:36:33.103: INFO: Created: latency-svc-98n7p Jan 31 01:36:33.110: INFO: Got endpoints: latency-svc-98n7p [759.081379ms] Jan 31 01:36:33.135: INFO: Created: latency-svc-d8bv5 Jan 31 01:36:33.150: INFO: Got endpoints: latency-svc-d8bv5 [754.565406ms] Jan 31 01:36:33.173: INFO: Created: latency-svc-5s6xz Jan 31 01:36:33.186: INFO: Got endpoints: latency-svc-5s6xz [757.688555ms] Jan 31 01:36:33.202: INFO: Created: latency-svc-vh2cz Jan 31 01:36:33.234: INFO: Got endpoints: latency-svc-vh2cz [773.228267ms] Jan 31 01:36:33.247: INFO: Created: latency-svc-jc7vp Jan 31 01:36:33.264: INFO: Got endpoints: latency-svc-jc7vp [724.972138ms] Jan 31 01:36:33.289: INFO: Created: latency-svc-5lcr2 Jan 31 01:36:33.319: INFO: Got endpoints: latency-svc-5lcr2 [749.974354ms] Jan 31 01:36:33.373: INFO: Created: latency-svc-thrjb Jan 31 01:36:33.394: INFO: Got endpoints: latency-svc-thrjb [789.124858ms] Jan 31 01:36:33.395: INFO: Created: latency-svc-dqnsb Jan 31 01:36:33.405: INFO: Got endpoints: latency-svc-dqnsb [728.270941ms] Jan 31 01:36:33.423: INFO: Created: latency-svc-slbtc Jan 31 01:36:33.435: INFO: Got endpoints: latency-svc-slbtc [725.110844ms] Jan 31 01:36:33.522: INFO: Created: latency-svc-vksv4 Jan 31 01:36:33.544: INFO: Got endpoints: latency-svc-vksv4 [786.582366ms] Jan 31 01:36:33.545: INFO: Created: latency-svc-pjwfz Jan 31 01:36:33.574: INFO: Got endpoints: latency-svc-pjwfz [710.768379ms] Jan 31 01:36:33.610: INFO: Created: latency-svc-vlf9k Jan 31 01:36:33.665: INFO: Got endpoints: latency-svc-vlf9k [787.884177ms] Jan 31 01:36:33.679: INFO: Created: latency-svc-htnkz Jan 31 01:36:33.689: INFO: Got endpoints: latency-svc-htnkz [763.377768ms] Jan 31 01:36:33.706: INFO: Created: latency-svc-4r9jp Jan 31 01:36:33.723: INFO: Got endpoints: latency-svc-4r9jp [718.905504ms] Jan 31 01:36:33.803: INFO: Created: latency-svc-nmn7f Jan 31 01:36:33.829: INFO: Got endpoints: latency-svc-nmn7f [791.825012ms] Jan 31 01:36:33.829: INFO: Created: latency-svc-7lrwg Jan 31 01:36:33.871: INFO: Got endpoints: latency-svc-7lrwg [761.511263ms] Jan 31 01:36:33.941: INFO: Created: latency-svc-kx2xw Jan 31 01:36:33.976: INFO: Got endpoints: latency-svc-kx2xw [825.358472ms] Jan 31 01:36:34.012: INFO: Created: latency-svc-sprbw Jan 31 01:36:34.028: INFO: Got endpoints: latency-svc-sprbw [841.44058ms] Jan 31 01:36:34.115: INFO: Created: latency-svc-96whj Jan 31 01:36:34.156: INFO: Got endpoints: latency-svc-96whj [922.06388ms] Jan 31 01:36:34.156: INFO: Created: latency-svc-qd22b Jan 31 01:36:34.180: INFO: Got endpoints: latency-svc-qd22b [915.558019ms] Jan 31 01:36:34.207: INFO: Created: latency-svc-pjsgf Jan 31 01:36:34.246: INFO: Got endpoints: latency-svc-pjsgf [927.097218ms] Jan 31 01:36:34.267: INFO: Created: latency-svc-bnqc2 Jan 31 01:36:34.306: INFO: Got endpoints: latency-svc-bnqc2 [912.167003ms] Jan 31 01:36:34.336: INFO: Created: latency-svc-ptdr5 Jan 31 01:36:34.372: INFO: Got endpoints: latency-svc-ptdr5 [966.519313ms] Jan 31 01:36:34.384: INFO: Created: latency-svc-7h8d7 Jan 31 01:36:34.396: INFO: Got endpoints: latency-svc-7h8d7 [960.836393ms] Jan 31 01:36:34.454: INFO: Created: latency-svc-87npl Jan 31 01:36:34.468: INFO: Got endpoints: 
latency-svc-87npl [923.951372ms] Jan 31 01:36:34.510: INFO: Created: latency-svc-rnmls Jan 31 01:36:34.516: INFO: Got endpoints: latency-svc-rnmls [941.832241ms] Jan 31 01:36:34.534: INFO: Created: latency-svc-mr5ts Jan 31 01:36:34.558: INFO: Got endpoints: latency-svc-mr5ts [892.458103ms] Jan 31 01:36:34.588: INFO: Created: latency-svc-5vtlz Jan 31 01:36:34.641: INFO: Got endpoints: latency-svc-5vtlz [951.857731ms] Jan 31 01:36:34.645: INFO: Created: latency-svc-rvwwp Jan 31 01:36:34.681: INFO: Got endpoints: latency-svc-rvwwp [958.064833ms] Jan 31 01:36:34.727: INFO: Created: latency-svc-cpgxg Jan 31 01:36:34.773: INFO: Got endpoints: latency-svc-cpgxg [944.183066ms] Jan 31 01:36:34.786: INFO: Created: latency-svc-sxqpq Jan 31 01:36:34.795: INFO: Got endpoints: latency-svc-sxqpq [923.621859ms] Jan 31 01:36:34.819: INFO: Created: latency-svc-nf2ht Jan 31 01:36:34.831: INFO: Got endpoints: latency-svc-nf2ht [855.125748ms] Jan 31 01:36:34.867: INFO: Created: latency-svc-8kwsn Jan 31 01:36:34.910: INFO: Got endpoints: latency-svc-8kwsn [882.498343ms] Jan 31 01:36:34.930: INFO: Created: latency-svc-85hpj Jan 31 01:36:34.966: INFO: Got endpoints: latency-svc-85hpj [810.380141ms] Jan 31 01:36:34.997: INFO: Created: latency-svc-mr6hw Jan 31 01:36:35.037: INFO: Got endpoints: latency-svc-mr6hw [856.910728ms] Jan 31 01:36:35.053: INFO: Created: latency-svc-c9g6z Jan 31 01:36:35.068: INFO: Got endpoints: latency-svc-c9g6z [821.294167ms] Jan 31 01:36:35.083: INFO: Created: latency-svc-rp2ck Jan 31 01:36:35.104: INFO: Got endpoints: latency-svc-rp2ck [797.948699ms] Jan 31 01:36:35.187: INFO: Created: latency-svc-4wt2v Jan 31 01:36:35.212: INFO: Got endpoints: latency-svc-4wt2v [840.080727ms] Jan 31 01:36:35.212: INFO: Created: latency-svc-qg7lh Jan 31 01:36:35.242: INFO: Got endpoints: latency-svc-qg7lh [845.962135ms] Jan 31 01:36:35.269: INFO: Created: latency-svc-dgfz6 Jan 31 01:36:35.281: INFO: Got endpoints: latency-svc-dgfz6 [812.553881ms] Jan 31 01:36:35.325: INFO: Created: latency-svc-ns4ww Jan 31 01:36:35.347: INFO: Got endpoints: latency-svc-ns4ww [830.965139ms] Jan 31 01:36:35.348: INFO: Created: latency-svc-ntgs9 Jan 31 01:36:35.404: INFO: Got endpoints: latency-svc-ntgs9 [846.277282ms] Jan 31 01:36:35.472: INFO: Created: latency-svc-mcl6j Jan 31 01:36:35.490: INFO: Got endpoints: latency-svc-mcl6j [849.157286ms] Jan 31 01:36:35.492: INFO: Created: latency-svc-4ngsk Jan 31 01:36:35.527: INFO: Got endpoints: latency-svc-4ngsk [845.345882ms] Jan 31 01:36:35.557: INFO: Created: latency-svc-qxw2h Jan 31 01:36:35.593: INFO: Got endpoints: latency-svc-qxw2h [820.153122ms] Jan 31 01:36:35.614: INFO: Created: latency-svc-8ll7s Jan 31 01:36:35.649: INFO: Got endpoints: latency-svc-8ll7s [853.862624ms] Jan 31 01:36:35.725: INFO: Created: latency-svc-7lfmn Jan 31 01:36:35.732: INFO: Got endpoints: latency-svc-7lfmn [901.147767ms] Jan 31 01:36:35.755: INFO: Created: latency-svc-dm4dw Jan 31 01:36:35.776: INFO: Got endpoints: latency-svc-dm4dw [865.180647ms] Jan 31 01:36:35.806: INFO: Created: latency-svc-l6t84 Jan 31 01:36:35.857: INFO: Got endpoints: latency-svc-l6t84 [891.244824ms] Jan 31 01:36:35.899: INFO: Created: latency-svc-gg57p Jan 31 01:36:35.918: INFO: Got endpoints: latency-svc-gg57p [881.303084ms] Jan 31 01:36:35.989: INFO: Created: latency-svc-48dq4 Jan 31 01:36:36.010: INFO: Got endpoints: latency-svc-48dq4 [942.196993ms] Jan 31 01:36:36.010: INFO: Created: latency-svc-lmp4g Jan 31 01:36:36.046: INFO: Got endpoints: latency-svc-lmp4g [941.824494ms] Jan 31 01:36:36.077: INFO: Created: 
latency-svc-5f98m Jan 31 01:36:36.144: INFO: Got endpoints: latency-svc-5f98m [932.653996ms] Jan 31 01:36:36.146: INFO: Created: latency-svc-957m6 Jan 31 01:36:36.155: INFO: Got endpoints: latency-svc-957m6 [912.514497ms] Jan 31 01:36:36.232: INFO: Created: latency-svc-lksdc Jan 31 01:36:36.282: INFO: Got endpoints: latency-svc-lksdc [1.0009062s] Jan 31 01:36:36.283: INFO: Created: latency-svc-pskzn Jan 31 01:36:36.307: INFO: Got endpoints: latency-svc-pskzn [959.745378ms] Jan 31 01:36:36.338: INFO: Created: latency-svc-xjvb6 Jan 31 01:36:36.349: INFO: Got endpoints: latency-svc-xjvb6 [945.065841ms] Jan 31 01:36:36.367: INFO: Created: latency-svc-4hx57 Jan 31 01:36:36.379: INFO: Got endpoints: latency-svc-4hx57 [888.586114ms] Jan 31 01:36:36.445: INFO: Created: latency-svc-b4zrg Jan 31 01:36:36.469: INFO: Got endpoints: latency-svc-b4zrg [941.868939ms] Jan 31 01:36:36.523: INFO: Created: latency-svc-tm8s6 Jan 31 01:36:36.535: INFO: Got endpoints: latency-svc-tm8s6 [942.024469ms] Jan 31 01:36:36.588: INFO: Created: latency-svc-chlj7 Jan 31 01:36:36.616: INFO: Got endpoints: latency-svc-chlj7 [966.974522ms] Jan 31 01:36:36.617: INFO: Created: latency-svc-q7742 Jan 31 01:36:36.658: INFO: Got endpoints: latency-svc-q7742 [925.522966ms] Jan 31 01:36:36.720: INFO: Created: latency-svc-64wm4 Jan 31 01:36:36.738: INFO: Got endpoints: latency-svc-64wm4 [962.640575ms] Jan 31 01:36:36.741: INFO: Created: latency-svc-pq8bv Jan 31 01:36:36.801: INFO: Got endpoints: latency-svc-pq8bv [944.030964ms] Jan 31 01:36:36.802: INFO: Created: latency-svc-wvsdn Jan 31 01:36:36.857: INFO: Got endpoints: latency-svc-wvsdn [938.757269ms] Jan 31 01:36:36.875: INFO: Created: latency-svc-bzbxz Jan 31 01:36:36.887: INFO: Got endpoints: latency-svc-bzbxz [876.724492ms] Jan 31 01:36:36.931: INFO: Created: latency-svc-4h2vv Jan 31 01:36:36.946: INFO: Got endpoints: latency-svc-4h2vv [900.014912ms] Jan 31 01:36:36.989: INFO: Created: latency-svc-lhmr6 Jan 31 01:36:36.996: INFO: Got endpoints: latency-svc-lhmr6 [851.838029ms] Jan 31 01:36:37.018: INFO: Created: latency-svc-whhzp Jan 31 01:36:37.067: INFO: Got endpoints: latency-svc-whhzp [911.841395ms] Jan 31 01:36:37.127: INFO: Created: latency-svc-svr75 Jan 31 01:36:37.153: INFO: Got endpoints: latency-svc-svr75 [871.510512ms] Jan 31 01:36:37.154: INFO: Created: latency-svc-fp4hz Jan 31 01:36:37.164: INFO: Got endpoints: latency-svc-fp4hz [857.203524ms] Jan 31 01:36:37.177: INFO: Created: latency-svc-tq8l8 Jan 31 01:36:37.189: INFO: Got endpoints: latency-svc-tq8l8 [839.166256ms] Jan 31 01:36:37.210: INFO: Created: latency-svc-4k4q7 Jan 31 01:36:37.224: INFO: Got endpoints: latency-svc-4k4q7 [845.081308ms] Jan 31 01:36:37.277: INFO: Created: latency-svc-k8jp2 Jan 31 01:36:37.294: INFO: Got endpoints: latency-svc-k8jp2 [824.846041ms] Jan 31 01:36:37.321: INFO: Created: latency-svc-7dc5x Jan 31 01:36:37.345: INFO: Got endpoints: latency-svc-7dc5x [809.914224ms] Jan 31 01:36:37.376: INFO: Created: latency-svc-7vth4 Jan 31 01:36:37.426: INFO: Got endpoints: latency-svc-7vth4 [809.908538ms] Jan 31 01:36:37.449: INFO: Created: latency-svc-mzlsd Jan 31 01:36:37.467: INFO: Got endpoints: latency-svc-mzlsd [809.107128ms] Jan 31 01:36:37.492: INFO: Created: latency-svc-d7srh Jan 31 01:36:37.503: INFO: Got endpoints: latency-svc-d7srh [764.260045ms] Jan 31 01:36:37.503: INFO: Latencies: [137.931942ms 206.779033ms 283.142612ms 355.839614ms 445.528268ms 475.599249ms 601.049551ms 691.696012ms 694.476021ms 701.647826ms 707.293734ms 710.424901ms 710.768379ms 718.905504ms 719.947709ms 
724.972138ms 725.110844ms 728.270941ms 735.022337ms 735.228063ms 741.427709ms 749.974354ms 754.565406ms 755.365962ms 757.688555ms 758.577027ms 759.081379ms 759.524397ms 761.511263ms 763.377768ms 763.811077ms 764.260045ms 765.212528ms 771.023719ms 772.854566ms 773.228267ms 786.582366ms 787.065088ms 787.509079ms 787.884177ms 789.124858ms 791.825012ms 797.948699ms 806.943964ms 808.051659ms 809.107128ms 809.908538ms 809.914224ms 810.380141ms 812.553881ms 817.517652ms 820.008274ms 820.153122ms 820.441722ms 821.294167ms 821.671077ms 821.674157ms 824.846041ms 825.358472ms 827.688517ms 830.919276ms 830.965139ms 835.054541ms 835.822907ms 839.166256ms 840.080727ms 840.223881ms 841.44058ms 844.043065ms 845.081308ms 845.113948ms 845.345882ms 845.962135ms 846.277282ms 847.776621ms 848.153838ms 849.157286ms 851.838029ms 852.164152ms 852.263468ms 853.862624ms 855.125748ms 856.910728ms 857.203524ms 860.457374ms 862.395805ms 863.26555ms 864.247613ms 864.997762ms 865.126928ms 865.180647ms 869.140676ms 869.90669ms 870.73045ms 871.510512ms 872.086854ms 873.573886ms 874.699632ms 876.724492ms 878.619032ms 879.151789ms 881.17344ms 881.303084ms 882.498343ms 885.352783ms 887.897971ms 888.586114ms 889.077385ms 891.244824ms 892.458103ms 895.650529ms 897.722311ms 898.58155ms 899.258755ms 900.014912ms 900.06367ms 901.147767ms 904.737235ms 911.201478ms 911.841395ms 912.167003ms 912.514497ms 915.369456ms 915.558019ms 915.960052ms 916.537011ms 916.969444ms 917.98452ms 921.455051ms 922.06388ms 923.621859ms 923.951372ms 925.009698ms 925.522966ms 927.097218ms 930.694234ms 931.659157ms 932.653996ms 933.535044ms 938.022721ms 938.757269ms 941.824494ms 941.832241ms 941.868939ms 942.024469ms 942.051948ms 942.196993ms 944.030964ms 944.183066ms 945.065841ms 951.857731ms 954.706053ms 958.064833ms 959.745378ms 960.836393ms 962.640575ms 966.519313ms 966.974522ms 967.023774ms 967.418945ms 967.745231ms 968.224338ms 969.441691ms 970.528621ms 970.692012ms 972.973672ms 973.388195ms 973.961106ms 975.591668ms 990.294715ms 991.458982ms 994.522395ms 994.877674ms 1.0009062s 1.00576788s 1.006643127s 1.009273215s 1.020049402s 1.020072184s 1.023808054s 1.026470659s 1.030367754s 1.041191941s 1.043703068s 1.045276177s 1.045306625s 1.056024692s 1.05617963s 1.056678026s 1.056869583s 1.058322931s 1.059907994s 1.06106431s 1.066432652s 1.068736718s 1.078148377s 1.093065302s 1.097026875s 1.124503594s 1.152214766s] Jan 31 01:36:37.503: INFO: 50 %ile: 879.151789ms Jan 31 01:36:37.503: INFO: 90 %ile: 1.026470659s Jan 31 01:36:37.503: INFO: 99 %ile: 1.124503594s Jan 31 01:36:37.503: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:36:37.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-6014" for this suite. 
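Each "Created"/"Got endpoints" pair above times how long a new Service takes to show a ready endpoint; this run's 200 samples land at about 879ms (p50) through 1.12s (p99). A crude single-sample version of the same measurement, with assumed names and GNU date:

# Create a backend pod and wait until it is Ready (names are illustrative).
kubectl run latency-backend --image=nginx --labels=app=latency-backend
kubectl wait --for=condition=Ready pod/latency-backend --timeout=120s
# Time from Service creation until an endpoint address appears.
start=$(date +%s%N)
kubectl expose pod latency-backend --name=latency-svc --port=80
until kubectl get endpoints latency-svc \
    -o jsonpath='{.subsets[*].addresses[*].ip}' 2>/dev/null | grep -q .; do
  sleep 0.1
done
echo "endpoint latency: $(( ($(date +%s%N) - start) / 1000000 )) ms"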
• [SLOW TEST:16.328 seconds] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":311,"completed":233,"skipped":4259,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:36:37.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Jan 31 01:36:37.731: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"967132ae-5940-41bd-b743-78b359b1dd5d", Controller:(*bool)(0xc0046c7e22), BlockOwnerDeletion:(*bool)(0xc0046c7e23)}} Jan 31 01:36:37.745: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"80caa5ca-065a-47fe-946c-d751e469e3d2", Controller:(*bool)(0xc002c69016), BlockOwnerDeletion:(*bool)(0xc002c69017)}} Jan 31 01:36:37.762: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"f089f74f-cb66-4c6d-9626-c6e040d29537", Controller:(*bool)(0xc003453812), BlockOwnerDeletion:(*bool)(0xc003453813)}} [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:36:43.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-528" for this suite. • [SLOW TEST:5.817 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":311,"completed":234,"skipped":4264,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:36:43.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:36:57.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6936" for this suite. • [SLOW TEST:13.826 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":311,"completed":235,"skipped":4284,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:36:57.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 31 01:36:57.286: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-3016 run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod' Jan 31 01:36:57.426: INFO: stderr: "" Jan 31 01:36:57.426: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run Jan 31 01:36:57.426: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-3016 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "docker.io/library/busybox:1.29"}]}} --dry-run=server' Jan 31 01:36:57.826: INFO: stderr: "" Jan 31 01:36:57.826: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/httpd:2.4.38-alpine Jan 31 01:36:57.841: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-3016 delete pods e2e-test-httpd-pod' Jan 31 01:37:21.090: INFO: stderr: "" Jan 31 01:37:21.090: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:37:21.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3016" for this suite. 
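The dry-run spec above patches a running pod's image with --dry-run=server and then confirms the live object kept the original image. The same check by hand (pod name is illustrative; kubectl run names the container after the pod):

kubectl run dry-run-demo --image=docker.io/library/httpd:2.4.38-alpine
# Server-side dry-run: the API server validates and defaults the patch
# but never persists it.
kubectl patch pod dry-run-demo --dry-run=server \
  -p '{"spec":{"containers":[{"name":"dry-run-demo","image":"docker.io/library/busybox:1.29"}]}}'
# The live object is unchanged:
kubectl get pod dry-run-demo -o jsonpath='{.spec.containers[0].image}'
kubectl delete pod dry-run-demo

Unlike --dry-run=client, the server-side variant exercises admission and validation on the API server, which is what this spec relies on.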
• [SLOW TEST:23.946 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl server-side dry-run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:909 should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":311,"completed":236,"skipped":4311,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:37:21.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating configMap with name configmap-test-volume-map-d9a9b16b-df0a-42d9-a4d4-1bc0e3112682 STEP: Creating a pod to test consume configMaps Jan 31 01:37:21.288: INFO: Waiting up to 5m0s for pod "pod-configmaps-712ac994-c3cc-4796-8a3d-5551ebe187ef" in namespace "configmap-6342" to be "Succeeded or Failed" Jan 31 01:37:21.294: INFO: Pod "pod-configmaps-712ac994-c3cc-4796-8a3d-5551ebe187ef": Phase="Pending", Reason="", readiness=false. Elapsed: 5.940159ms Jan 31 01:37:23.354: INFO: Pod "pod-configmaps-712ac994-c3cc-4796-8a3d-5551ebe187ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065640723s Jan 31 01:37:25.358: INFO: Pod "pod-configmaps-712ac994-c3cc-4796-8a3d-5551ebe187ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069918554s STEP: Saw pod success Jan 31 01:37:25.358: INFO: Pod "pod-configmaps-712ac994-c3cc-4796-8a3d-5551ebe187ef" satisfied condition "Succeeded or Failed" Jan 31 01:37:25.361: INFO: Trying to get logs from node latest-worker pod pod-configmaps-712ac994-c3cc-4796-8a3d-5551ebe187ef container agnhost-container: STEP: delete the pod Jan 31 01:37:25.488: INFO: Waiting for pod pod-configmaps-712ac994-c3cc-4796-8a3d-5551ebe187ef to disappear Jan 31 01:37:25.519: INFO: Pod pod-configmaps-712ac994-c3cc-4796-8a3d-5551ebe187ef no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:37:25.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6342" for this suite. 
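The ConfigMap-volume variant above does the same key-to-path remapping without the projected wrapper. A sketch with assumed names:

kubectl create configmap demo-cm --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-mapping-demo
spec:
  restartPolicy: Never
  containers:
  - name: agnhost-container
    image: busybox
    command: ["cat", "/etc/config/path/to/data"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/config
  volumes:
  - name: cfg
    configMap:
      name: demo-cm
      items:
      - key: data-1
        path: path/to/data   # nested paths are allowed in item mappings
EOF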
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":311,"completed":237,"skipped":4349,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:37:25.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating projection with secret that has name projected-secret-test-bead6334-79b6-4f92-9982-0723199c259f STEP: Creating a pod to test consume secrets Jan 31 01:37:25.709: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8ffda4fa-cfd8-437d-bc02-4e6e5cc709d3" in namespace "projected-4591" to be "Succeeded or Failed" Jan 31 01:37:25.737: INFO: Pod "pod-projected-secrets-8ffda4fa-cfd8-437d-bc02-4e6e5cc709d3": Phase="Pending", Reason="", readiness=false. Elapsed: 28.155065ms Jan 31 01:37:27.740: INFO: Pod "pod-projected-secrets-8ffda4fa-cfd8-437d-bc02-4e6e5cc709d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031797708s Jan 31 01:37:29.746: INFO: Pod "pod-projected-secrets-8ffda4fa-cfd8-437d-bc02-4e6e5cc709d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037581857s STEP: Saw pod success Jan 31 01:37:29.746: INFO: Pod "pod-projected-secrets-8ffda4fa-cfd8-437d-bc02-4e6e5cc709d3" satisfied condition "Succeeded or Failed" Jan 31 01:37:29.749: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-8ffda4fa-cfd8-437d-bc02-4e6e5cc709d3 container projected-secret-volume-test: STEP: delete the pod Jan 31 01:37:29.793: INFO: Waiting for pod pod-projected-secrets-8ffda4fa-cfd8-437d-bc02-4e6e5cc709d3 to disappear Jan 31 01:37:29.818: INFO: Pod pod-projected-secrets-8ffda4fa-cfd8-437d-bc02-4e6e5cc709d3 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:37:29.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4591" for this suite. 
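The projected-secret spec above verifies that defaultMode on the projected volume applies to every file. A sketch with assumed names; 0400 is a stand-in for the mode the suite sets:

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /etc/secret/data-1"]
    volumeMounts:
    - name: sec
      mountPath: /etc/secret
  volumes:
  - name: sec
    projected:
      defaultMode: 0400      # applied to every file in the projection
      sources:
      - secret:
          name: demo-secret
EOF
kubectl logs projected-secret-demo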
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":238,"skipped":4358,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:37:29.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test downward API volume plugin Jan 31 01:37:29.931: INFO: Waiting up to 5m0s for pod "downwardapi-volume-87274809-198f-4c68-8f87-8d5367f2b25b" in namespace "downward-api-7602" to be "Succeeded or Failed" Jan 31 01:37:29.967: INFO: Pod "downwardapi-volume-87274809-198f-4c68-8f87-8d5367f2b25b": Phase="Pending", Reason="", readiness=false. Elapsed: 35.57349ms Jan 31 01:37:31.972: INFO: Pod "downwardapi-volume-87274809-198f-4c68-8f87-8d5367f2b25b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040881783s Jan 31 01:37:33.977: INFO: Pod "downwardapi-volume-87274809-198f-4c68-8f87-8d5367f2b25b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045441363s STEP: Saw pod success Jan 31 01:37:33.977: INFO: Pod "downwardapi-volume-87274809-198f-4c68-8f87-8d5367f2b25b" satisfied condition "Succeeded or Failed" Jan 31 01:37:33.979: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-87274809-198f-4c68-8f87-8d5367f2b25b container client-container: STEP: delete the pod Jan 31 01:37:34.062: INFO: Waiting for pod downwardapi-volume-87274809-198f-4c68-8f87-8d5367f2b25b to disappear Jan 31 01:37:34.079: INFO: Pod downwardapi-volume-87274809-198f-4c68-8f87-8d5367f2b25b no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:37:34.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7602" for this suite. 
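"should set DefaultMode on files" is the volume-wide counterpart of the per-item mode check earlier: defaultMode on the downwardAPI volume covers every item that does not set its own mode. A sketch with assumed names and an assumed 0440 mode:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["stat", "-c", "%a", "/etc/podinfo/labels"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0440      # volume-wide default mode under test
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
EOF
kubectl logs downward-defaultmode-demo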
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":239,"skipped":4370,"failed":0} SSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:37:34.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 [It] deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Jan 31 01:37:34.231: INFO: Pod name rollover-pod: Found 0 pods out of 1 Jan 31 01:37:39.238: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 31 01:37:39.238: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Jan 31 01:37:41.243: INFO: Creating deployment "test-rollover-deployment" Jan 31 01:37:41.268: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Jan 31 01:37:43.275: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jan 31 01:37:43.283: INFO: Ensure that both replica sets have 1 created replica Jan 31 01:37:43.288: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jan 31 01:37:43.297: INFO: Updating deployment test-rollover-deployment Jan 31 01:37:43.297: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jan 31 01:37:45.329: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jan 31 01:37:45.334: INFO: Make sure deployment "test-rollover-deployment" is complete Jan 31 01:37:45.400: INFO: all replica sets need to contain the pod-template-hash label Jan 31 01:37:45.400: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747653861, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747653861, loc:(*time.Location)(0x79bd420)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747653863, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747653861, loc:(*time.Location)(0x79bd420)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 31 01:37:47.409: INFO: all replica sets need to contain the pod-template-hash label Jan 31 01:37:47.409: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747653861, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747653861, loc:(*time.Location)(0x79bd420)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747653867, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747653861, loc:(*time.Location)(0x79bd420)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 31 01:37:49.411: INFO: all replica sets need to contain the pod-template-hash label Jan 31 01:37:49.411: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747653861, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747653861, loc:(*time.Location)(0x79bd420)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747653867, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747653861, loc:(*time.Location)(0x79bd420)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 31 01:37:51.409: INFO: all replica sets need to contain the pod-template-hash label Jan 31 01:37:51.409: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747653861, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747653861, loc:(*time.Location)(0x79bd420)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747653867, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747653861, loc:(*time.Location)(0x79bd420)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 31 01:37:53.408: INFO: all replica sets need to contain the pod-template-hash label Jan 31 01:37:53.408: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747653861, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747653861, loc:(*time.Location)(0x79bd420)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747653867, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747653861, loc:(*time.Location)(0x79bd420)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 31 01:37:55.409: INFO: all replica sets need to contain the pod-template-hash label Jan 31 01:37:55.409: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747653861, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747653861, loc:(*time.Location)(0x79bd420)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747653867, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747653861, loc:(*time.Location)(0x79bd420)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 31 01:37:57.409: INFO: Jan 31 01:37:57.409: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 Jan 31 01:37:57.417: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-3024 4af1be23-709d-4d4c-9f3b-c8a8b64048a5 1134143 2 2021-01-31 01:37:41 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-01-31 01:37:43 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-01-31 01:37:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost 
k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004c67b68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-01-31 01:37:41 +0000 UTC,LastTransitionTime:2021-01-31 01:37:41 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-668db69979" has successfully progressed.,LastUpdateTime:2021-01-31 01:37:57 +0000 UTC,LastTransitionTime:2021-01-31 01:37:41 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jan 31 01:37:57.420: INFO: New ReplicaSet "test-rollover-deployment-668db69979" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-668db69979 deployment-3024 c9aeac6f-d08d-4094-9536-c8a650cae6a3 1134132 2 2021-01-31 01:37:43 +0000 UTC map[name:rollover-pod pod-template-hash:668db69979] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 4af1be23-709d-4d4c-9f3b-c8a8b64048a5 0xc006a209f7 0xc006a209f8}] [] [{kube-controller-manager Update apps/v1 2021-01-31 01:37:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4af1be23-709d-4d4c-9f3b-c8a8b64048a5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 668db69979,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:668db69979] map[] [] [] []} {[] 
[] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc006a20a88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 31 01:37:57.420: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jan 31 01:37:57.421: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-3024 0d2094e4-851b-4940-9e17-0561f4a3b659 1134141 2 2021-01-31 01:37:34 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 4af1be23-709d-4d4c-9f3b-c8a8b64048a5 0xc006a208e7 0xc006a208e8}] [] [{e2e.test Update apps/v1 2021-01-31 01:37:34 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-01-31 01:37:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4af1be23-709d-4d4c-9f3b-c8a8b64048a5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc006a20988 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 31 01:37:57.421: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-3024 4a7bbe76-c641-4ab2-a3b8-43bfd65827d8 1134094 2 2021-01-31 01:37:41 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] 
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 4af1be23-709d-4d4c-9f3b-c8a8b64048a5 0xc006a20af7 0xc006a20af8}] [] [{kube-controller-manager Update apps/v1 2021-01-31 01:37:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4af1be23-709d-4d4c-9f3b-c8a8b64048a5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc006a20b88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 31 01:37:57.423: INFO: Pod "test-rollover-deployment-668db69979-jg6xc" is available: &Pod{ObjectMeta:{test-rollover-deployment-668db69979-jg6xc test-rollover-deployment-668db69979- deployment-3024 bb6f497e-21af-4d3f-a517-f59047a4d468 1134110 0 2021-01-31 01:37:43 +0000 UTC map[name:rollover-pod pod-template-hash:668db69979] map[] [{apps/v1 ReplicaSet test-rollover-deployment-668db69979 c9aeac6f-d08d-4094-9536-c8a650cae6a3 0xc004c67ea7 0xc004c67ea8}] [] [{kube-controller-manager Update v1 2021-01-31 01:37:43 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c9aeac6f-d08d-4094-9536-c8a650cae6a3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-31 01:37:47 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.34\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hk5k4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hk5k4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hk5k4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Tolerat
ion{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 01:37:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 01:37:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 01:37:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-31 01:37:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.2.34,StartTime:2021-01-31 01:37:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-31 01:37:46 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a,ContainerID:containerd://812adf0964af82f1dddff3114358708f18911a5c2e59aa9dd65c7476506f70d1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.34,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:37:57.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3024" for this suite. 
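The object dump above contains everything needed to reconstruct the deployment under test: replicas 1, minReadySeconds 10, a RollingUpdate strategy with maxSurge 1 and maxUnavailable 0, and a template updated to the agnhost:2.21 image. With maxUnavailable 0, the old ReplicaSet may only be scaled down after the new pod has been Ready for the full 10-second minReadySeconds window, which is why the status loop above keeps reporting UnavailableReplicas:1 before converging. A sketch of the equivalent manifest, reconstructed from the dump:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
spec:
  replicas: 1
  minReadySeconds: 10        # new pod must stay Ready this long before it counts as available
  selector:
    matchLabels:
      name: rollover-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1            # at most one extra pod during the rollover
      maxUnavailable: 0      # never drop below the desired replica count
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: agnhost
        image: k8s.gcr.io/e2e-test-images/agnhost:2.21   # the post-update image from the dump
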
• [SLOW TEST:23.344 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":311,"completed":240,"skipped":4375,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:37:57.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Jan 31 01:37:58.922: INFO: Pod name wrapped-volume-race-5b929991-fd17-4921-b4d8-2f0b3e177505: Found 0 pods out of 5 Jan 31 01:38:03.930: INFO: Pod name wrapped-volume-race-5b929991-fd17-4921-b4d8-2f0b3e177505: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-5b929991-fd17-4921-b4d8-2f0b3e177505 in namespace emptydir-wrapper-387, will wait for the garbage collector to delete the pods Jan 31 01:38:20.074: INFO: Deleting ReplicationController wrapped-volume-race-5b929991-fd17-4921-b4d8-2f0b3e177505 took: 67.333395ms Jan 31 01:38:20.274: INFO: Terminating ReplicationController wrapped-volume-race-5b929991-fd17-4921-b4d8-2f0b3e177505 pods took: 200.270985ms STEP: Creating RC which spawns configmap-volume pods Jan 31 01:39:31.413: INFO: Pod name wrapped-volume-race-c487451c-2ffc-454c-a232-5f6a8e03e1ba: Found 0 pods out of 5 Jan 31 01:39:36.419: INFO: Pod name wrapped-volume-race-c487451c-2ffc-454c-a232-5f6a8e03e1ba: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-c487451c-2ffc-454c-a232-5f6a8e03e1ba in namespace emptydir-wrapper-387, will wait for the garbage collector to delete the pods Jan 31 01:39:52.531: INFO: Deleting ReplicationController wrapped-volume-race-c487451c-2ffc-454c-a232-5f6a8e03e1ba took: 15.629882ms Jan 31 01:39:53.132: INFO: Terminating ReplicationController wrapped-volume-race-c487451c-2ffc-454c-a232-5f6a8e03e1ba pods took: 600.175568ms STEP: Creating RC which spawns configmap-volume pods Jan 31 01:40:31.499: INFO: Pod name wrapped-volume-race-37498d56-f39d-4e03-a103-61770f700596: Found 0 pods out of 5 Jan 31 01:40:36.509: INFO: Pod name wrapped-volume-race-37498d56-f39d-4e03-a103-61770f700596: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-37498d56-f39d-4e03-a103-61770f700596 in namespace emptydir-wrapper-387, will wait for the garbage collector to delete the pods Jan 31 01:40:52.595: INFO: Deleting ReplicationController 
wrapped-volume-race-37498d56-f39d-4e03-a103-61770f700596 took: 7.425829ms Jan 31 01:40:53.196: INFO: Terminating ReplicationController wrapped-volume-race-37498d56-f39d-4e03-a103-61770f700596 pods took: 600.442542ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:41:11.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-387" for this suite. • [SLOW TEST:194.338 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":311,"completed":241,"skipped":4387,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:41:11.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Jan 31 01:41:11.844: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-2076 create -f -' Jan 31 01:41:12.139: INFO: stderr: "" Jan 31 01:41:12.139: INFO: stdout: "replicationcontroller/agnhost-primary created\n" Jan 31 01:41:12.139: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-2076 create -f -' Jan 31 01:41:12.493: INFO: stderr: "" Jan 31 01:41:12.493: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Jan 31 01:41:13.498: INFO: Selector matched 1 pods for map[app:agnhost] Jan 31 01:41:13.498: INFO: Found 0 / 1 Jan 31 01:41:14.511: INFO: Selector matched 1 pods for map[app:agnhost] Jan 31 01:41:14.511: INFO: Found 0 / 1 Jan 31 01:41:15.498: INFO: Selector matched 1 pods for map[app:agnhost] Jan 31 01:41:15.498: INFO: Found 0 / 1 Jan 31 01:41:16.498: INFO: Selector matched 1 pods for map[app:agnhost] Jan 31 01:41:16.498: INFO: Found 1 / 1 Jan 31 01:41:16.498: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 31 01:41:16.502: INFO: Selector matched 1 pods for map[app:agnhost] Jan 31 01:41:16.502: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Jan 31 01:41:16.502: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-2076 describe pod agnhost-primary-zfhch' Jan 31 01:41:16.627: INFO: stderr: "" Jan 31 01:41:16.627: INFO: stdout: "Name: agnhost-primary-zfhch\nNamespace: kubectl-2076\nPriority: 0\nNode: latest-worker/172.18.0.14\nStart Time: Sun, 31 Jan 2021 01:41:12 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: \nStatus: Running\nIP: 10.244.2.45\nIPs:\n IP: 10.244.2.45\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://49a7255a0c71575af1490ee088a26631dba33c4266edf50ed8b3d2c967d56cd4\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.21\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sun, 31 Jan 2021 01:41:15 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-dnkkl (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-dnkkl:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-dnkkl\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-2076/agnhost-primary-zfhch to latest-worker\n Normal Pulled 3s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.21\" already present on machine\n Normal Created 1s kubelet Created container agnhost-primary\n Normal Started 1s kubelet Started container agnhost-primary\n" Jan 31 01:41:16.627: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-2076 describe rc agnhost-primary' Jan 31 01:41:16.745: INFO: stderr: "" Jan 31 01:41:16.745: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-2076\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.21\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-primary-zfhch\n" Jan 31 01:41:16.745: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-2076 describe service agnhost-primary' Jan 31 01:41:16.890: INFO: stderr: "" Jan 31 01:41:16.891: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-2076\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Families: \nIP: 10.96.40.148\nIPs: 10.96.40.148\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.45:6379\nSession Affinity: None\nEvents: \n" Jan 31 01:41:16.894: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-2076 describe node 
latest-control-plane' Jan 31 01:41:17.077: INFO: stderr: "" Jan 31 01:41:17.077: INFO: stdout: "Name: latest-control-plane\nRoles: control-plane,master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/control-plane=\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Tue, 26 Jan 2021 08:08:11 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Sun, 31 Jan 2021 01:41:09 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sun, 31 Jan 2021 01:40:18 +0000 Tue, 26 Jan 2021 08:08:07 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sun, 31 Jan 2021 01:40:18 +0000 Tue, 26 Jan 2021 08:08:07 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sun, 31 Jan 2021 01:40:18 +0000 Tue, 26 Jan 2021 08:08:07 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sun, 31 Jan 2021 01:40:18 +0000 Tue, 26 Jan 2021 08:08:53 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.15\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nSystem Info:\n Machine ID: 5453bb7ec42e49739c7b6d2228bc8f1f\n System UUID: db0e74ae-46ee-4a01-9695-58430d8d48f2\n Boot ID: b267d78b-f69b-4338-80e8-3f4944338e5d\n Kernel Version: 4.15.0-118-generic\n OS Image: Ubuntu 20.04 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.3-14-g449e9269\n Kubelet Version: v1.21.0-alpha.0\n Kube-Proxy Version: v1.21.0-alpha.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nProviderID: kind://docker/latest/latest-control-plane\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-latest-control-plane 100m (0%) 0 (0%) 100Mi (0%) 0 (0%) 4d17h\n kube-system kindnet-xpx2l 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 4d17h\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 4d17h\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 4d17h\n kube-system kube-proxy-vgxkf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4d17h\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 4d17h\n local-path-storage local-path-provisioner-8b46957d4-j852z 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4d17h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 750m (4%) 100m (0%)\n memory 150Mi (0%) 50Mi (0%)\n ephemeral-storage 100Mi (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Jan 31 01:41:17.077: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 
--kubeconfig=/root/.kube/config --namespace=kubectl-2076 describe namespace kubectl-2076' Jan 31 01:41:17.231: INFO: stderr: "" Jan 31 01:41:17.231: INFO: stdout: "Name: kubectl-2076\nLabels: e2e-framework=kubectl\n e2e-run=79177e60-0ba8-472f-857f-d62460482b66\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:41:17.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2076" for this suite. • [SLOW TEST:5.516 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1090 should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":311,"completed":242,"skipped":4396,"failed":0} [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:41:17.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating configMap with name configmap-test-volume-af45a21e-8cac-4073-92a6-daaf2eec0872 STEP: Creating a pod to test consume configMaps Jan 31 01:41:17.458: INFO: Waiting up to 5m0s for pod "pod-configmaps-a5d87587-76e5-4d1d-8039-02e40e510ddb" in namespace "configmap-4265" to be "Succeeded or Failed" Jan 31 01:41:17.484: INFO: Pod "pod-configmaps-a5d87587-76e5-4d1d-8039-02e40e510ddb": Phase="Pending", Reason="", readiness=false. Elapsed: 26.054135ms Jan 31 01:41:19.495: INFO: Pod "pod-configmaps-a5d87587-76e5-4d1d-8039-02e40e510ddb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036984825s Jan 31 01:41:21.507: INFO: Pod "pod-configmaps-a5d87587-76e5-4d1d-8039-02e40e510ddb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048715173s Jan 31 01:41:23.511: INFO: Pod "pod-configmaps-a5d87587-76e5-4d1d-8039-02e40e510ddb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.052527189s STEP: Saw pod success Jan 31 01:41:23.511: INFO: Pod "pod-configmaps-a5d87587-76e5-4d1d-8039-02e40e510ddb" satisfied condition "Succeeded or Failed" Jan 31 01:41:23.514: INFO: Trying to get logs from node latest-worker pod pod-configmaps-a5d87587-76e5-4d1d-8039-02e40e510ddb container agnhost-container: STEP: delete the pod Jan 31 01:41:23.563: INFO: Waiting for pod pod-configmaps-a5d87587-76e5-4d1d-8039-02e40e510ddb to disappear Jan 31 01:41:23.584: INFO: Pod pod-configmaps-a5d87587-76e5-4d1d-8039-02e40e510ddb no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:41:23.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4265" for this suite. • [SLOW TEST:6.373 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":311,"completed":243,"skipped":4396,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:41:23.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating the pod Jan 31 01:41:23.720: INFO: PodSpec: initContainers in spec.initContainers Jan 31 01:42:14.658: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-368a90d2-42c4-422f-88c1-7cbc5b773b1a", GenerateName:"", Namespace:"init-container-1318", SelfLink:"", UID:"634fe8d2-a88d-4cbd-80a0-658f356cf85f", ResourceVersion:"1135607", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63747654083, loc:(*time.Location)(0x79bd420)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"720568813"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002e740a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e740c0)}, 
v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002e740e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e74100)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-hdtn4", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc003dea000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-hdtn4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-hdtn4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", 
Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-hdtn4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc006a74098), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002308000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc006a74120)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc006a74140)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc006a74148), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc006a7414c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc00311c020), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747654083, loc:(*time.Location)(0x79bd420)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747654083, loc:(*time.Location)(0x79bd420)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747654083, loc:(*time.Location)(0x79bd420)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", 
LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747654083, loc:(*time.Location)(0x79bd420)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.14", PodIP:"10.244.2.47", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.47"}}, StartTime:(*v1.Time)(0xc002e74140), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0023081c0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002308230)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://24c1e6e2b50fa225560abc560e1c6161370835cfa67bd291604840b0a413888a", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002e74220), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002e741e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc006a741cf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:42:14.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1318" for this suite. 
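Despite the size of the printout, the pod spec in the dump above is simple: two init containers (init1 running /bin/false, init2 running /bin/true) ahead of a pause app container. Because restartPolicy is Always, the kubelet keeps restarting the failing init1 with backoff (RestartCount:3 by the time the test samples it), and neither init2 nor run1 ever starts, which is exactly what the conditions ContainersNotInitialized and ContainersNotReady record. A sketch of the equivalent manifest, reconstructed from the dump:

apiVersion: v1
kind: Pod
metadata:
  name: pod-init-example     # the run above uses a generated name
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]          # always fails, so the pod never leaves Pending
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]           # never reached while init1 keeps failing
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.2
    resources:
      requests:
        cpu: 100m                    # matches the Burstable QoS shown in the dump
      limits:
        cpu: 100m
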
• [SLOW TEST:51.080 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":311,"completed":244,"skipped":4457,"failed":0} [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:42:14.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating service in namespace services-4501 Jan 31 01:42:18.903: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-4501 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Jan 31 01:42:23.083: INFO: stderr: "I0131 01:42:22.979466 2485 log.go:181] (0xc00003a0b0) (0xc000da0140) Create stream\nI0131 01:42:22.979549 2485 log.go:181] (0xc00003a0b0) (0xc000da0140) Stream added, broadcasting: 1\nI0131 01:42:22.981844 2485 log.go:181] (0xc00003a0b0) Reply frame received for 1\nI0131 01:42:22.981920 2485 log.go:181] (0xc00003a0b0) (0xc000da01e0) Create stream\nI0131 01:42:22.981947 2485 log.go:181] (0xc00003a0b0) (0xc000da01e0) Stream added, broadcasting: 3\nI0131 01:42:22.983052 2485 log.go:181] (0xc00003a0b0) Reply frame received for 3\nI0131 01:42:22.983109 2485 log.go:181] (0xc00003a0b0) (0xc0009245a0) Create stream\nI0131 01:42:22.983129 2485 log.go:181] (0xc00003a0b0) (0xc0009245a0) Stream added, broadcasting: 5\nI0131 01:42:22.984186 2485 log.go:181] (0xc00003a0b0) Reply frame received for 5\nI0131 01:42:23.068745 2485 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0131 01:42:23.068776 2485 log.go:181] (0xc0009245a0) (5) Data frame handling\nI0131 01:42:23.068794 2485 log.go:181] (0xc0009245a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0131 01:42:23.074081 2485 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0131 01:42:23.074103 2485 log.go:181] (0xc000da01e0) (3) Data frame handling\nI0131 01:42:23.074143 2485 log.go:181] (0xc000da01e0) (3) Data frame sent\nI0131 01:42:23.074734 2485 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0131 01:42:23.074764 2485 log.go:181] (0xc0009245a0) (5) Data frame handling\nI0131 01:42:23.074801 2485 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0131 01:42:23.074816 
2485 log.go:181] (0xc000da01e0) (3) Data frame handling\nI0131 01:42:23.077161 2485 log.go:181] (0xc00003a0b0) Data frame received for 1\nI0131 01:42:23.077187 2485 log.go:181] (0xc000da0140) (1) Data frame handling\nI0131 01:42:23.077206 2485 log.go:181] (0xc000da0140) (1) Data frame sent\nI0131 01:42:23.077232 2485 log.go:181] (0xc00003a0b0) (0xc000da0140) Stream removed, broadcasting: 1\nI0131 01:42:23.077249 2485 log.go:181] (0xc00003a0b0) Go away received\nI0131 01:42:23.077690 2485 log.go:181] (0xc00003a0b0) (0xc000da0140) Stream removed, broadcasting: 1\nI0131 01:42:23.077723 2485 log.go:181] (0xc00003a0b0) (0xc000da01e0) Stream removed, broadcasting: 3\nI0131 01:42:23.077742 2485 log.go:181] (0xc00003a0b0) (0xc0009245a0) Stream removed, broadcasting: 5\n" Jan 31 01:42:23.083: INFO: stdout: "iptables" Jan 31 01:42:23.083: INFO: proxyMode: iptables Jan 31 01:42:23.107: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jan 31 01:42:23.126: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-4501 STEP: creating replication controller affinity-clusterip-timeout in namespace services-4501 I0131 01:42:23.243885 7 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-4501, replica count: 3 I0131 01:42:26.294289 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0131 01:42:29.294553 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 31 01:42:29.347: INFO: Creating new exec pod Jan 31 01:42:34.365: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-4501 exec execpod-affinitykx6hf -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jan 31 01:42:34.608: INFO: stderr: "I0131 01:42:34.495526 2504 log.go:181] (0xc000d0ae70) (0xc00087a5a0) Create stream\nI0131 01:42:34.495588 2504 log.go:181] (0xc000d0ae70) (0xc00087a5a0) Stream added, broadcasting: 1\nI0131 01:42:34.498229 2504 log.go:181] (0xc000d0ae70) Reply frame received for 1\nI0131 01:42:34.498329 2504 log.go:181] (0xc000d0ae70) (0xc000c82000) Create stream\nI0131 01:42:34.498372 2504 log.go:181] (0xc000d0ae70) (0xc000c82000) Stream added, broadcasting: 3\nI0131 01:42:34.499915 2504 log.go:181] (0xc000d0ae70) Reply frame received for 3\nI0131 01:42:34.499945 2504 log.go:181] (0xc000d0ae70) (0xc000d98000) Create stream\nI0131 01:42:34.499956 2504 log.go:181] (0xc000d0ae70) (0xc000d98000) Stream added, broadcasting: 5\nI0131 01:42:34.501257 2504 log.go:181] (0xc000d0ae70) Reply frame received for 5\nI0131 01:42:34.600266 2504 log.go:181] (0xc000d0ae70) Data frame received for 5\nI0131 01:42:34.600320 2504 log.go:181] (0xc000d98000) (5) Data frame handling\nI0131 01:42:34.600341 2504 log.go:181] (0xc000d98000) (5) Data frame sent\nI0131 01:42:34.600352 2504 log.go:181] (0xc000d0ae70) Data frame received for 5\nI0131 01:42:34.600360 2504 log.go:181] (0xc000d98000) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI0131 01:42:34.600386 2504 log.go:181] (0xc000d98000) (5) Data frame sent\nI0131 01:42:34.601091 2504 log.go:181] (0xc000d0ae70) Data frame received for 3\nI0131 01:42:34.601123 2504 log.go:181] 
(0xc000c82000) (3) Data frame handling\nI0131 01:42:34.601139 2504 log.go:181] (0xc000d0ae70) Data frame received for 5\nI0131 01:42:34.601156 2504 log.go:181] (0xc000d98000) (5) Data frame handling\nI0131 01:42:34.603121 2504 log.go:181] (0xc000d0ae70) Data frame received for 1\nI0131 01:42:34.603135 2504 log.go:181] (0xc00087a5a0) (1) Data frame handling\nI0131 01:42:34.603143 2504 log.go:181] (0xc00087a5a0) (1) Data frame sent\nI0131 01:42:34.603151 2504 log.go:181] (0xc000d0ae70) (0xc00087a5a0) Stream removed, broadcasting: 1\nI0131 01:42:34.603189 2504 log.go:181] (0xc000d0ae70) Go away received\nI0131 01:42:34.603391 2504 log.go:181] (0xc000d0ae70) (0xc00087a5a0) Stream removed, broadcasting: 1\nI0131 01:42:34.603401 2504 log.go:181] (0xc000d0ae70) (0xc000c82000) Stream removed, broadcasting: 3\nI0131 01:42:34.603407 2504 log.go:181] (0xc000d0ae70) (0xc000d98000) Stream removed, broadcasting: 5\n" Jan 31 01:42:34.608: INFO: stdout: "" Jan 31 01:42:34.609: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-4501 exec execpod-affinitykx6hf -- /bin/sh -x -c nc -zv -t -w 2 10.96.126.154 80' Jan 31 01:42:34.820: INFO: stderr: "I0131 01:42:34.743713 2522 log.go:181] (0xc00003a0b0) (0xc000f9c000) Create stream\nI0131 01:42:34.743783 2522 log.go:181] (0xc00003a0b0) (0xc000f9c000) Stream added, broadcasting: 1\nI0131 01:42:34.746177 2522 log.go:181] (0xc00003a0b0) Reply frame received for 1\nI0131 01:42:34.746216 2522 log.go:181] (0xc00003a0b0) (0xc0007a3ae0) Create stream\nI0131 01:42:34.746224 2522 log.go:181] (0xc00003a0b0) (0xc0007a3ae0) Stream added, broadcasting: 3\nI0131 01:42:34.747342 2522 log.go:181] (0xc00003a0b0) Reply frame received for 3\nI0131 01:42:34.747377 2522 log.go:181] (0xc00003a0b0) (0xc000f9c0a0) Create stream\nI0131 01:42:34.747388 2522 log.go:181] (0xc00003a0b0) (0xc000f9c0a0) Stream added, broadcasting: 5\nI0131 01:42:34.748428 2522 log.go:181] (0xc00003a0b0) Reply frame received for 5\nI0131 01:42:34.811103 2522 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0131 01:42:34.811138 2522 log.go:181] (0xc0007a3ae0) (3) Data frame handling\nI0131 01:42:34.811217 2522 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0131 01:42:34.811241 2522 log.go:181] (0xc000f9c0a0) (5) Data frame handling\nI0131 01:42:34.811258 2522 log.go:181] (0xc000f9c0a0) (5) Data frame sent\nI0131 01:42:34.811267 2522 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0131 01:42:34.811273 2522 log.go:181] (0xc000f9c0a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.126.154 80\nConnection to 10.96.126.154 80 port [tcp/http] succeeded!\nI0131 01:42:34.813066 2522 log.go:181] (0xc00003a0b0) Data frame received for 1\nI0131 01:42:34.813095 2522 log.go:181] (0xc000f9c000) (1) Data frame handling\nI0131 01:42:34.813110 2522 log.go:181] (0xc000f9c000) (1) Data frame sent\nI0131 01:42:34.813124 2522 log.go:181] (0xc00003a0b0) (0xc000f9c000) Stream removed, broadcasting: 1\nI0131 01:42:34.813140 2522 log.go:181] (0xc00003a0b0) Go away received\nI0131 01:42:34.813502 2522 log.go:181] (0xc00003a0b0) (0xc000f9c000) Stream removed, broadcasting: 1\nI0131 01:42:34.813517 2522 log.go:181] (0xc00003a0b0) (0xc0007a3ae0) Stream removed, broadcasting: 3\nI0131 01:42:34.813527 2522 log.go:181] (0xc00003a0b0) (0xc000f9c0a0) Stream removed, broadcasting: 5\n" Jan 31 01:42:34.820: INFO: stdout: "" Jan 31 01:42:34.820: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config 
--namespace=services-4501 exec execpod-affinitykx6hf -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.126.154:80/ ; done' Jan 31 01:42:35.136: INFO: stderr: "I0131 01:42:34.955404 2540 log.go:181] (0xc000c29a20) (0xc000c248c0) Create stream\nI0131 01:42:34.955484 2540 log.go:181] (0xc000c29a20) (0xc000c248c0) Stream added, broadcasting: 1\nI0131 01:42:34.960355 2540 log.go:181] (0xc000c29a20) Reply frame received for 1\nI0131 01:42:34.960400 2540 log.go:181] (0xc000c29a20) (0xc000c24000) Create stream\nI0131 01:42:34.960419 2540 log.go:181] (0xc000c29a20) (0xc000c24000) Stream added, broadcasting: 3\nI0131 01:42:34.961259 2540 log.go:181] (0xc000c29a20) Reply frame received for 3\nI0131 01:42:34.961309 2540 log.go:181] (0xc000c29a20) (0xc000ade000) Create stream\nI0131 01:42:34.961320 2540 log.go:181] (0xc000c29a20) (0xc000ade000) Stream added, broadcasting: 5\nI0131 01:42:34.962165 2540 log.go:181] (0xc000c29a20) Reply frame received for 5\nI0131 01:42:35.032000 2540 log.go:181] (0xc000c29a20) Data frame received for 5\nI0131 01:42:35.032046 2540 log.go:181] (0xc000ade000) (5) Data frame handling\nI0131 01:42:35.032058 2540 log.go:181] (0xc000ade000) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.126.154:80/\nI0131 01:42:35.032073 2540 log.go:181] (0xc000c29a20) Data frame received for 3\nI0131 01:42:35.032080 2540 log.go:181] (0xc000c24000) (3) Data frame handling\nI0131 01:42:35.032087 2540 log.go:181] (0xc000c24000) (3) Data frame sent\nI0131 01:42:35.036970 2540 log.go:181] (0xc000c29a20) Data frame received for 5\nI0131 01:42:35.036991 2540 log.go:181] (0xc000ade000) (5) Data frame handling\nI0131 01:42:35.036998 2540 log.go:181] (0xc000ade000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.126.154:80/\nI0131 01:42:35.037017 2540 log.go:181] (0xc000c29a20) Data frame received for 3\nI0131 01:42:35.037025 2540 log.go:181] (0xc000c24000) (3) Data frame handling\nI0131 01:42:35.037031 2540 log.go:181] (0xc000c24000) (3) Data frame sent\nI0131 01:42:35.037038 2540 log.go:181] (0xc000c29a20) Data frame received for 3\nI0131 01:42:35.037052 2540 log.go:181] (0xc000c24000) (3) Data frame handling\nI0131 01:42:35.037136 2540 log.go:181] (0xc000c24000) (3) Data frame sent\nI0131 01:42:35.041604 2540 log.go:181] (0xc000c29a20) Data frame received for 3\nI0131 01:42:35.041620 2540 log.go:181] (0xc000c24000) (3) Data frame handling\nI0131 01:42:35.041631 2540 log.go:181] (0xc000c24000) (3) Data frame sent\nI0131 01:42:35.042243 2540 log.go:181] (0xc000c29a20) Data frame received for 3\nI0131 01:42:35.042264 2540 log.go:181] (0xc000c29a20) Data frame received for 5\nI0131 01:42:35.042289 2540 log.go:181] (0xc000ade000) (5) Data frame handling\nI0131 01:42:35.042376 2540 log.go:181] (0xc000ade000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.126.154:80/\nI0131 01:42:35.042390 2540 log.go:181] (0xc000c24000) (3) Data frame handling\nI0131 01:42:35.042401 2540 log.go:181] (0xc000c24000) (3) Data frame sent\nI0131 01:42:35.046405 2540 log.go:181] (0xc000c29a20) Data frame received for 3\nI0131 01:42:35.046439 2540 log.go:181] (0xc000c24000) (3) Data frame handling\nI0131 01:42:35.046456 2540 log.go:181] (0xc000c24000) (3) Data frame sent\nI0131 01:42:35.046870 2540 log.go:181] (0xc000c29a20) Data frame received for 3\nI0131 01:42:35.046899 2540 log.go:181] (0xc000c24000) (3) Data frame handling\nI0131 01:42:35.046912 2540 log.go:181] (0xc000c24000) (3) Data 
frame sent\nI0131 01:42:35.046925 2540 log.go:181] (0xc000c29a20) Data frame received for 5\nI0131 01:42:35.046932 2540 log.go:181] (0xc000ade000) (5) Data frame handling\nI0131 01:42:35.046938 2540 log.go:181] (0xc000ade000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.126.154:80/\nI0131 01:42:35.051503 2540 log.go:181] (0xc000c29a20) Data frame received for 3\nI0131 01:42:35.051520 2540 log.go:181] (0xc000c24000) (3) Data frame handling\nI0131 01:42:35.051527 2540 log.go:181] (0xc000c24000) (3) Data frame sent\nI0131 01:42:35.052673 2540 log.go:181] (0xc000c29a20) Data frame received for 3\nI0131 01:42:35.052717 2540 log.go:181] (0xc000c24000) (3) Data frame handling\nI0131 01:42:35.052737 2540 log.go:181] (0xc000c24000) (3) Data frame sent\nI0131 01:42:35.052761 2540 log.go:181] (0xc000c29a20) Data frame received for 5\nI0131 01:42:35.052775 2540 log.go:181] (0xc000ade000) (5) Data frame handling\nI0131 01:42:35.052795 2540 log.go:181] (0xc000ade000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.126.154:80/\nI0131 01:42:35.057791 2540 log.go:181] (0xc000c29a20) Data frame received for 3\nI0131 01:42:35.057804 2540 log.go:181] (0xc000c24000) (3) Data frame handling\nI0131 01:42:35.057811 2540 log.go:181] (0xc000c24000) (3) Data frame sent\nI0131 01:42:35.058516 2540 log.go:181] (0xc000c29a20) Data frame received for 5\nI0131 01:42:35.058548 2540 log.go:181] (0xc000ade000) (5) Data frame handling\nI0131 01:42:35.058562 2540 log.go:181] (0xc000ade000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.126.154:80/\nI0131 01:42:35.058577 2540 log.go:181] (0xc000c29a20) Data frame received for 3\nI0131 01:42:35.058586 2540 log.go:181] (0xc000c24000) (3) Data frame handling\nI0131 01:42:35.058594 2540 log.go:181] (0xc000c24000) (3) Data frame sent\nI0131 01:42:35.065029 2540 log.go:181] (0xc000c29a20) Data frame received for 3\nI0131 01:42:35.065060 2540 log.go:181] (0xc000c24000) (3) Data frame handling\nI0131 01:42:35.065082 2540 log.go:181] (0xc000c24000) (3) Data frame sent\nI0131 01:42:35.065739 2540 log.go:181] (0xc000c29a20) Data frame received for 5\nI0131 01:42:35.065771 2540 log.go:181] (0xc000ade000) (5) Data frame handling\nI0131 01:42:35.065788 2540 log.go:181] (0xc000ade000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.126.154:80/\nI0131 01:42:35.065811 2540 log.go:181] (0xc000c29a20) Data frame received for 3\nI0131 01:42:35.065833 2540 log.go:181] (0xc000c24000) (3) Data frame handling\nI0131 01:42:35.065854 2540 log.go:181] (0xc000c24000) (3) Data frame sent\nI0131 01:42:35.071734 2540 log.go:181] (0xc000c29a20) Data frame received for 3\nI0131 01:42:35.071760 2540 log.go:181] (0xc000c24000) (3) Data frame handling\nI0131 01:42:35.071791 2540 log.go:181] (0xc000c24000) (3) Data frame sent\nI0131 01:42:35.072151 2540 log.go:181] (0xc000c29a20) Data frame received for 3\nI0131 01:42:35.072178 2540 log.go:181] (0xc000c24000) (3) Data frame handling\nI0131 01:42:35.072192 2540 log.go:181] (0xc000c24000) (3) Data frame sent\nI0131 01:42:35.072215 2540 log.go:181] (0xc000c29a20) Data frame received for 5\nI0131 01:42:35.072225 2540 log.go:181] (0xc000ade000) (5) Data frame handling\nI0131 01:42:35.072233 2540 log.go:181] (0xc000ade000) (5) Data frame sent\nI0131 01:42:35.072241 2540 log.go:181] (0xc000c29a20) Data frame received for 5\nI0131 01:42:35.072247 2540 log.go:181] (0xc000ade000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.96.126.154:80/\nI0131 01:42:35.072262 2540 log.go:181] (0xc000ade000) (5) Data frame sent\nI0131 01:42:35.078505 2540 log.go:181] (0xc000c29a20) Data frame received for 3\nI0131 01:42:35.078533 2540 log.go:181] (0xc000c24000) (3) Data frame handling\nI0131 01:42:35.078551 2540 log.go:181] (0xc000c24000) (3) Data frame sent\nI0131 01:42:35.079123 2540 log.go:181] (0xc000c29a20) Data frame received for 3\nI0131 01:42:35.079171 2540 log.go:181] (0xc000c24000) (3) Data frame handling\nI0131 01:42:35.079201 2540 log.go:181] (0xc000c24000) (3) Data frame sent\nI0131 01:42:35.079223 2540 log.go:181] (0xc000c29a20) Data frame received for 5\nI0131 01:42:35.079232 2540 log.go:181] (0xc000ade000) (5) Data frame handling\nI0131 01:42:35.079252 2540 log.go:181] (0xc000ade000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.126.154:80/\nI0131 01:42:35.082895 2540 log.go:181] (0xc000c29a20) Data frame received for 3\nI0131 01:42:35.082919 2540 log.go:181] (0xc000c24000) (3) Data frame handling\nI0131 01:42:35.082933 2540 log.go:181] (0xc000c24000) (3) Data frame sent\nI0131 01:42:35.083695 2540 log.go:181] (0xc000c29a20) Data frame received for 3\nI0131 01:42:35.083741 2540 log.go:181] (0xc000c24000) (3) Data frame handling\nI0131 01:42:35.083761 2540 log.go:181] (0xc000c24000) (3) Data frame sent\nI0131 01:42:35.083796 2540 log.go:181] (0xc000c29a20) Data frame received for 5\nI0131 01:42:35.083817 2540 log.go:181] (0xc000ade000) (5) Data frame handling\nI0131 01:42:35.083850 2540 log.go:181] (0xc000ade000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.126.154:80/\nI0131 01:42:35.088066 2540 log.go:181] (0xc000c29a20) Data frame received for 3\nI0131 01:42:35.088098 2540 log.go:181] (0xc000c24000) (3) Data frame handling\nI0131 01:42:35.088129 2540 log.go:181] (0xc000c24000) (3) Data frame sent\nI0131 01:42:35.088435 2540 log.go:181] (0xc000c29a20) Data frame received for 5\nI0131 01:42:35.088462 2540 log.go:181] (0xc000ade000) (5) Data frame handling\nI0131 01:42:35.088472 2540 log.go:181] (0xc000ade000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.126.154:80/\nI0131 01:42:35.088492 2540 log.go:181] (0xc000c29a20) Data frame received for 3\nI0131 01:42:35.088520 2540 log.go:181] (0xc000c24000) (3) Data frame handling\nI0131 01:42:35.088550 2540 log.go:181] (0xc000c24000) (3) Data frame sent\nI0131 01:42:35.095039 2540 log.go:181] (0xc000c29a20) Data frame received for 3\nI0131 01:42:35.095072 2540 log.go:181] (0xc000c24000) (3) Data frame handling\nI0131 01:42:35.095103 2540 log.go:181] (0xc000c24000) (3) Data frame sent\nI0131 01:42:35.095705 2540 log.go:181] (0xc000c29a20) Data frame received for 5\nI0131 01:42:35.095727 2540 log.go:181] (0xc000ade000) (5) Data frame handling\nI0131 01:42:35.095732 2540 log.go:181] (0xc000ade000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.126.154:80/\nI0131 01:42:35.095758 2540 log.go:181] (0xc000c29a20) Data frame received for 3\nI0131 01:42:35.095792 2540 log.go:181] (0xc000c24000) (3) Data frame handling\nI0131 01:42:35.095825 2540 log.go:181] (0xc000c24000) (3) Data frame sent\nI0131 01:42:35.100736 2540 log.go:181] (0xc000c29a20) Data frame received for 3\nI0131 01:42:35.100758 2540 log.go:181] (0xc000c24000) (3) Data frame handling\nI0131 01:42:35.100769 2540 log.go:181] (0xc000c24000) (3) Data frame sent\nI0131 01:42:35.101346 2540 log.go:181] (0xc000c29a20) Data frame received for 5\nI0131 01:42:35.101359 2540 log.go:181] 
(0xc000ade000) (5) Data frame handling\nI0131 01:42:35.101364 2540 log.go:181] (0xc000ade000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.126.154:80/\nI0131 01:42:35.101372 2540 log.go:181] (0xc000c29a20) Data frame received for 3\nI0131 01:42:35.101377 2540 log.go:181] (0xc000c24000) (3) Data frame handling\nI0131 01:42:35.101381 2540 log.go:181] (0xc000c24000) (3) Data frame sent\nI0131 01:42:35.107448 2540 log.go:181] (0xc000c29a20) Data frame received for 3\nI0131 01:42:35.107472 2540 log.go:181] (0xc000c24000) (3) Data frame handling\nI0131 01:42:35.107490 2540 log.go:181] (0xc000c24000) (3) Data frame sent\nI0131 01:42:35.108547 2540 log.go:181] (0xc000c29a20) Data frame received for 3\nI0131 01:42:35.108577 2540 log.go:181] (0xc000c24000) (3) Data frame handling\nI0131 01:42:35.108588 2540 log.go:181] (0xc000c24000) (3) Data frame sent\nI0131 01:42:35.108600 2540 log.go:181] (0xc000c29a20) Data frame received for 5\nI0131 01:42:35.108607 2540 log.go:181] (0xc000ade000) (5) Data frame handling\nI0131 01:42:35.108614 2540 log.go:181] (0xc000ade000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.126.154:80/\nI0131 01:42:35.113891 2540 log.go:181] (0xc000c29a20) Data frame received for 3\nI0131 01:42:35.113915 2540 log.go:181] (0xc000c24000) (3) Data frame handling\nI0131 01:42:35.113937 2540 log.go:181] (0xc000c24000) (3) Data frame sent\nI0131 01:42:35.114643 2540 log.go:181] (0xc000c29a20) Data frame received for 5\nI0131 01:42:35.114668 2540 log.go:181] (0xc000ade000) (5) Data frame handling\nI0131 01:42:35.114698 2540 log.go:181] (0xc000ade000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.126.154:80/\nI0131 01:42:35.114717 2540 log.go:181] (0xc000c29a20) Data frame received for 3\nI0131 01:42:35.114740 2540 log.go:181] (0xc000c24000) (3) Data frame handling\nI0131 01:42:35.114754 2540 log.go:181] (0xc000c24000) (3) Data frame sent\nI0131 01:42:35.119804 2540 log.go:181] (0xc000c29a20) Data frame received for 3\nI0131 01:42:35.119827 2540 log.go:181] (0xc000c24000) (3) Data frame handling\nI0131 01:42:35.119838 2540 log.go:181] (0xc000c24000) (3) Data frame sent\nI0131 01:42:35.120569 2540 log.go:181] (0xc000c29a20) Data frame received for 3\nI0131 01:42:35.120618 2540 log.go:181] (0xc000c24000) (3) Data frame handling\nI0131 01:42:35.120636 2540 log.go:181] (0xc000c24000) (3) Data frame sent\nI0131 01:42:35.120651 2540 log.go:181] (0xc000c29a20) Data frame received for 5\nI0131 01:42:35.120660 2540 log.go:181] (0xc000ade000) (5) Data frame handling\nI0131 01:42:35.120667 2540 log.go:181] (0xc000ade000) (5) Data frame sent\nI0131 01:42:35.120675 2540 log.go:181] (0xc000c29a20) Data frame received for 5\nI0131 01:42:35.120682 2540 log.go:181] (0xc000ade000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.126.154:80/\nI0131 01:42:35.120705 2540 log.go:181] (0xc000ade000) (5) Data frame sent\nI0131 01:42:35.126411 2540 log.go:181] (0xc000c29a20) Data frame received for 3\nI0131 01:42:35.126436 2540 log.go:181] (0xc000c24000) (3) Data frame handling\nI0131 01:42:35.126467 2540 log.go:181] (0xc000c24000) (3) Data frame sent\nI0131 01:42:35.127186 2540 log.go:181] (0xc000c29a20) Data frame received for 3\nI0131 01:42:35.127207 2540 log.go:181] (0xc000c24000) (3) Data frame handling\nI0131 01:42:35.127227 2540 log.go:181] (0xc000c29a20) Data frame received for 5\nI0131 01:42:35.127244 2540 log.go:181] (0xc000ade000) (5) Data frame handling\nI0131 01:42:35.129380 2540 
log.go:181] (0xc000c29a20) Data frame received for 1\nI0131 01:42:35.129435 2540 log.go:181] (0xc000c248c0) (1) Data frame handling\nI0131 01:42:35.129457 2540 log.go:181] (0xc000c248c0) (1) Data frame sent\nI0131 01:42:35.129484 2540 log.go:181] (0xc000c29a20) (0xc000c248c0) Stream removed, broadcasting: 1\nI0131 01:42:35.129513 2540 log.go:181] (0xc000c29a20) Go away received\nI0131 01:42:35.129967 2540 log.go:181] (0xc000c29a20) (0xc000c248c0) Stream removed, broadcasting: 1\nI0131 01:42:35.129997 2540 log.go:181] (0xc000c29a20) (0xc000c24000) Stream removed, broadcasting: 3\nI0131 01:42:35.130018 2540 log.go:181] (0xc000c29a20) (0xc000ade000) Stream removed, broadcasting: 5\n" Jan 31 01:42:35.137: INFO: stdout: "\naffinity-clusterip-timeout-df6jf\naffinity-clusterip-timeout-df6jf\naffinity-clusterip-timeout-df6jf\naffinity-clusterip-timeout-df6jf\naffinity-clusterip-timeout-df6jf\naffinity-clusterip-timeout-df6jf\naffinity-clusterip-timeout-df6jf\naffinity-clusterip-timeout-df6jf\naffinity-clusterip-timeout-df6jf\naffinity-clusterip-timeout-df6jf\naffinity-clusterip-timeout-df6jf\naffinity-clusterip-timeout-df6jf\naffinity-clusterip-timeout-df6jf\naffinity-clusterip-timeout-df6jf\naffinity-clusterip-timeout-df6jf\naffinity-clusterip-timeout-df6jf" Jan 31 01:42:35.137: INFO: Received response from host: affinity-clusterip-timeout-df6jf Jan 31 01:42:35.137: INFO: Received response from host: affinity-clusterip-timeout-df6jf Jan 31 01:42:35.137: INFO: Received response from host: affinity-clusterip-timeout-df6jf Jan 31 01:42:35.137: INFO: Received response from host: affinity-clusterip-timeout-df6jf Jan 31 01:42:35.137: INFO: Received response from host: affinity-clusterip-timeout-df6jf Jan 31 01:42:35.137: INFO: Received response from host: affinity-clusterip-timeout-df6jf Jan 31 01:42:35.137: INFO: Received response from host: affinity-clusterip-timeout-df6jf Jan 31 01:42:35.137: INFO: Received response from host: affinity-clusterip-timeout-df6jf Jan 31 01:42:35.137: INFO: Received response from host: affinity-clusterip-timeout-df6jf Jan 31 01:42:35.137: INFO: Received response from host: affinity-clusterip-timeout-df6jf Jan 31 01:42:35.137: INFO: Received response from host: affinity-clusterip-timeout-df6jf Jan 31 01:42:35.137: INFO: Received response from host: affinity-clusterip-timeout-df6jf Jan 31 01:42:35.137: INFO: Received response from host: affinity-clusterip-timeout-df6jf Jan 31 01:42:35.137: INFO: Received response from host: affinity-clusterip-timeout-df6jf Jan 31 01:42:35.137: INFO: Received response from host: affinity-clusterip-timeout-df6jf Jan 31 01:42:35.137: INFO: Received response from host: affinity-clusterip-timeout-df6jf Jan 31 01:42:35.137: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-4501 exec execpod-affinitykx6hf -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.96.126.154:80/' Jan 31 01:42:35.360: INFO: stderr: "I0131 01:42:35.277773 2557 log.go:181] (0xc00003a160) (0xc0007221e0) Create stream\nI0131 01:42:35.277840 2557 log.go:181] (0xc00003a160) (0xc0007221e0) Stream added, broadcasting: 1\nI0131 01:42:35.279630 2557 log.go:181] (0xc00003a160) Reply frame received for 1\nI0131 01:42:35.279680 2557 log.go:181] (0xc00003a160) (0xc000459a40) Create stream\nI0131 01:42:35.279692 2557 log.go:181] (0xc00003a160) (0xc000459a40) Stream added, broadcasting: 3\nI0131 01:42:35.280316 2557 log.go:181] (0xc00003a160) Reply frame received for 3\nI0131 01:42:35.280368 2557 
log.go:181] (0xc00003a160) (0xc000459cc0) Create stream\nI0131 01:42:35.280382 2557 log.go:181] (0xc00003a160) (0xc000459cc0) Stream added, broadcasting: 5\nI0131 01:42:35.281201 2557 log.go:181] (0xc00003a160) Reply frame received for 5\nI0131 01:42:35.349696 2557 log.go:181] (0xc00003a160) Data frame received for 5\nI0131 01:42:35.349723 2557 log.go:181] (0xc000459cc0) (5) Data frame handling\nI0131 01:42:35.349737 2557 log.go:181] (0xc000459cc0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.96.126.154:80/\nI0131 01:42:35.352341 2557 log.go:181] (0xc00003a160) Data frame received for 3\nI0131 01:42:35.352354 2557 log.go:181] (0xc000459a40) (3) Data frame handling\nI0131 01:42:35.352367 2557 log.go:181] (0xc000459a40) (3) Data frame sent\nI0131 01:42:35.353363 2557 log.go:181] (0xc00003a160) Data frame received for 3\nI0131 01:42:35.353398 2557 log.go:181] (0xc000459a40) (3) Data frame handling\nI0131 01:42:35.353424 2557 log.go:181] (0xc00003a160) Data frame received for 5\nI0131 01:42:35.353449 2557 log.go:181] (0xc000459cc0) (5) Data frame handling\nI0131 01:42:35.354528 2557 log.go:181] (0xc00003a160) Data frame received for 1\nI0131 01:42:35.354546 2557 log.go:181] (0xc0007221e0) (1) Data frame handling\nI0131 01:42:35.354558 2557 log.go:181] (0xc0007221e0) (1) Data frame sent\nI0131 01:42:35.354570 2557 log.go:181] (0xc00003a160) (0xc0007221e0) Stream removed, broadcasting: 1\nI0131 01:42:35.354582 2557 log.go:181] (0xc00003a160) Go away received\nI0131 01:42:35.354932 2557 log.go:181] (0xc00003a160) (0xc0007221e0) Stream removed, broadcasting: 1\nI0131 01:42:35.354948 2557 log.go:181] (0xc00003a160) (0xc000459a40) Stream removed, broadcasting: 3\nI0131 01:42:35.354953 2557 log.go:181] (0xc00003a160) (0xc000459cc0) Stream removed, broadcasting: 5\n" Jan 31 01:42:35.361: INFO: stdout: "affinity-clusterip-timeout-df6jf" Jan 31 01:42:55.361: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-4501 exec execpod-affinitykx6hf -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.96.126.154:80/' Jan 31 01:42:55.599: INFO: stderr: "I0131 01:42:55.501701 2575 log.go:181] (0xc0009cd1e0) (0xc0007dc960) Create stream\nI0131 01:42:55.501756 2575 log.go:181] (0xc0009cd1e0) (0xc0007dc960) Stream added, broadcasting: 1\nI0131 01:42:55.504919 2575 log.go:181] (0xc0009cd1e0) Reply frame received for 1\nI0131 01:42:55.504959 2575 log.go:181] (0xc0009cd1e0) (0xc0007dc000) Create stream\nI0131 01:42:55.504974 2575 log.go:181] (0xc0009cd1e0) (0xc0007dc000) Stream added, broadcasting: 3\nI0131 01:42:55.506001 2575 log.go:181] (0xc0009cd1e0) Reply frame received for 3\nI0131 01:42:55.506042 2575 log.go:181] (0xc0009cd1e0) (0xc000728000) Create stream\nI0131 01:42:55.506065 2575 log.go:181] (0xc0009cd1e0) (0xc000728000) Stream added, broadcasting: 5\nI0131 01:42:55.507025 2575 log.go:181] (0xc0009cd1e0) Reply frame received for 5\nI0131 01:42:55.585734 2575 log.go:181] (0xc0009cd1e0) Data frame received for 5\nI0131 01:42:55.585776 2575 log.go:181] (0xc000728000) (5) Data frame handling\nI0131 01:42:55.585807 2575 log.go:181] (0xc000728000) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.96.126.154:80/\nI0131 01:42:55.591599 2575 log.go:181] (0xc0009cd1e0) Data frame received for 3\nI0131 01:42:55.591621 2575 log.go:181] (0xc0007dc000) (3) Data frame handling\nI0131 01:42:55.591639 2575 log.go:181] (0xc0007dc000) (3) Data frame sent\nI0131 01:42:55.592449 2575 log.go:181] (0xc0009cd1e0) Data 
frame received for 3\nI0131 01:42:55.592491 2575 log.go:181] (0xc0007dc000) (3) Data frame handling\nI0131 01:42:55.592589 2575 log.go:181] (0xc0009cd1e0) Data frame received for 5\nI0131 01:42:55.592622 2575 log.go:181] (0xc000728000) (5) Data frame handling\nI0131 01:42:55.594608 2575 log.go:181] (0xc0009cd1e0) Data frame received for 1\nI0131 01:42:55.594638 2575 log.go:181] (0xc0007dc960) (1) Data frame handling\nI0131 01:42:55.594648 2575 log.go:181] (0xc0007dc960) (1) Data frame sent\nI0131 01:42:55.594663 2575 log.go:181] (0xc0009cd1e0) (0xc0007dc960) Stream removed, broadcasting: 1\nI0131 01:42:55.594680 2575 log.go:181] (0xc0009cd1e0) Go away received\nI0131 01:42:55.595086 2575 log.go:181] (0xc0009cd1e0) (0xc0007dc960) Stream removed, broadcasting: 1\nI0131 01:42:55.595115 2575 log.go:181] (0xc0009cd1e0) (0xc0007dc000) Stream removed, broadcasting: 3\nI0131 01:42:55.595125 2575 log.go:181] (0xc0009cd1e0) (0xc000728000) Stream removed, broadcasting: 5\n" Jan 31 01:42:55.599: INFO: stdout: "affinity-clusterip-timeout-df6jf" Jan 31 01:43:15.600: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-4501 exec execpod-affinitykx6hf -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.96.126.154:80/' Jan 31 01:43:15.838: INFO: stderr: "I0131 01:43:15.740217 2594 log.go:181] (0xc00003a840) (0xc000ba4280) Create stream\nI0131 01:43:15.740273 2594 log.go:181] (0xc00003a840) (0xc000ba4280) Stream added, broadcasting: 1\nI0131 01:43:15.741937 2594 log.go:181] (0xc00003a840) Reply frame received for 1\nI0131 01:43:15.741990 2594 log.go:181] (0xc00003a840) (0xc000a503c0) Create stream\nI0131 01:43:15.742005 2594 log.go:181] (0xc00003a840) (0xc000a503c0) Stream added, broadcasting: 3\nI0131 01:43:15.742883 2594 log.go:181] (0xc00003a840) Reply frame received for 3\nI0131 01:43:15.742916 2594 log.go:181] (0xc00003a840) (0xc000728000) Create stream\nI0131 01:43:15.742926 2594 log.go:181] (0xc00003a840) (0xc000728000) Stream added, broadcasting: 5\nI0131 01:43:15.743751 2594 log.go:181] (0xc00003a840) Reply frame received for 5\nI0131 01:43:15.825628 2594 log.go:181] (0xc00003a840) Data frame received for 5\nI0131 01:43:15.825652 2594 log.go:181] (0xc000728000) (5) Data frame handling\nI0131 01:43:15.825665 2594 log.go:181] (0xc000728000) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.96.126.154:80/\nI0131 01:43:15.829895 2594 log.go:181] (0xc00003a840) Data frame received for 3\nI0131 01:43:15.829913 2594 log.go:181] (0xc000a503c0) (3) Data frame handling\nI0131 01:43:15.829926 2594 log.go:181] (0xc000a503c0) (3) Data frame sent\nI0131 01:43:15.830554 2594 log.go:181] (0xc00003a840) Data frame received for 3\nI0131 01:43:15.830579 2594 log.go:181] (0xc000a503c0) (3) Data frame handling\nI0131 01:43:15.830638 2594 log.go:181] (0xc00003a840) Data frame received for 5\nI0131 01:43:15.830650 2594 log.go:181] (0xc000728000) (5) Data frame handling\nI0131 01:43:15.832665 2594 log.go:181] (0xc00003a840) Data frame received for 1\nI0131 01:43:15.832707 2594 log.go:181] (0xc000ba4280) (1) Data frame handling\nI0131 01:43:15.832739 2594 log.go:181] (0xc000ba4280) (1) Data frame sent\nI0131 01:43:15.832763 2594 log.go:181] (0xc00003a840) (0xc000ba4280) Stream removed, broadcasting: 1\nI0131 01:43:15.832790 2594 log.go:181] (0xc00003a840) Go away received\nI0131 01:43:15.833309 2594 log.go:181] (0xc00003a840) (0xc000ba4280) Stream removed, broadcasting: 1\nI0131 01:43:15.833331 2594 log.go:181] (0xc00003a840) 
(0xc000a503c0) Stream removed, broadcasting: 3\nI0131 01:43:15.833340    2594 log.go:181] (0xc00003a840) (0xc000728000) Stream removed, broadcasting: 5\n"
Jan 31 01:43:15.838: INFO: stdout: "affinity-clusterip-timeout-rvptp"
Jan 31 01:43:15.838: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-4501, will wait for the garbage collector to delete the pods
Jan 31 01:43:15.958: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 6.042967ms
Jan 31 01:43:16.558: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 600.205793ms
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 31 01:44:10.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4501" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
• [SLOW TEST:116.076 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":311,"completed":245,"skipped":4457,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 31 01:44:10.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
Jan 31 01:44:10.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jan 31 01:44:14.441: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8541 --namespace=crd-publish-openapi-8541 create -f -'
Jan 31 01:44:18.257: INFO: stderr: ""
Jan 31 01:44:18.257: INFO: stdout: "e2e-test-crd-publish-openapi-7798-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Jan 31 01:44:18.257: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8541 --namespace=crd-publish-openapi-8541 delete e2e-test-crd-publish-openapi-7798-crds test-cr'
Jan 31 01:44:18.378: INFO: stderr: ""
Jan 31 01:44:18.378: INFO: stdout: "e2e-test-crd-publish-openapi-7798-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Jan 31 01:44:18.378: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8541 --namespace=crd-publish-openapi-8541 apply -f -'
Jan 31 01:44:18.673: INFO: stderr: ""
Jan 31 01:44:18.673: INFO: stdout: "e2e-test-crd-publish-openapi-7798-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Jan 31 01:44:18.673: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8541 --namespace=crd-publish-openapi-8541 delete e2e-test-crd-publish-openapi-7798-crds test-cr'
Jan 31 01:44:18.778: INFO: stderr: ""
Jan 31 01:44:18.778: INFO: stdout: "e2e-test-crd-publish-openapi-7798-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Jan 31 01:44:18.778: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8541 explain e2e-test-crd-publish-openapi-7798-crds'
Jan 31 01:44:19.059: INFO: stderr: ""
Jan 31 01:44:19.059: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7798-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 31 01:44:22.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8541" for this suite.
• [SLOW TEST:11.804 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for CRD without validation schema [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":311,"completed":246,"skipped":4482,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 31 01:44:22.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should update annotations on modification [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
STEP: Creating the pod
Jan 31 01:44:27.279: INFO: Successfully updated pod "annotationupdate4d593ab5-8274-4d6b-98a7-7e721a5f0206"
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 31 01:44:31.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3624" for this suite.
• [SLOW TEST:8.763 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
should update annotations on modification [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":311,"completed":247,"skipped":4495,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
[BeforeEach] [k8s.io] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 31 01:44:31.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
Jan 31 01:44:31.555: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-251bd790-7269-48f2-96d2-22a5ba172556" in namespace "security-context-test-366" to be "Succeeded or Failed"
Jan 31 01:44:31.558: INFO: Pod "alpine-nnp-false-251bd790-7269-48f2-96d2-22a5ba172556": Phase="Pending", Reason="", readiness=false. Elapsed: 2.821592ms
Jan 31 01:44:33.563: INFO: Pod "alpine-nnp-false-251bd790-7269-48f2-96d2-22a5ba172556": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008094181s
Jan 31 01:44:35.568: INFO: Pod "alpine-nnp-false-251bd790-7269-48f2-96d2-22a5ba172556": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013575697s
Jan 31 01:44:35.569: INFO: Pod "alpine-nnp-false-251bd790-7269-48f2-96d2-22a5ba172556" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 31 01:44:35.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-366" for this suite.
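The pod this security-context test builds comes down to one field on the container spec. A minimal sketch with the k8s.io/api types, assuming an alpine image tag and container name derived from the generated pod name above (the log does not show the original spec):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// AllowPrivilegeEscalation=false sets no_new_privs on the container
	// process, so e.g. a setuid binary cannot gain privileges; the test
	// then waits for the pod to reach "Succeeded or Failed" as logged above.
	allowEscalation := false
	c := corev1.Container{
		Name:  "alpine-nnp-false", // prefix of the generated pod name in the log
		Image: "alpine:3.12",      // assumed tag; the log only shows the pod name
		SecurityContext: &corev1.SecurityContext{
			AllowPrivilegeEscalation: &allowEscalation,
		},
	}
	fmt.Printf("%s: allowPrivilegeEscalation=%v\n",
		c.Name, *c.SecurityContext.AllowPrivilegeEscalation)
}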
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":248,"skipped":4514,"failed":0} SS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:44:35.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7614.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7614.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7614.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7614.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7614.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7614.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7614.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7614.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7614.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7614.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7614.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 77.35.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.35.77_udp@PTR;check="$$(dig +tcp +noall +answer +search 77.35.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.35.77_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7614.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7614.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7614.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7614.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7614.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7614.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7614.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7614.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7614.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7614.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7614.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 77.35.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.35.77_udp@PTR;check="$$(dig +tcp +noall +answer +search 77.35.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.35.77_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 31 01:44:41.997: INFO: Unable to read wheezy_udp@dns-test-service.dns-7614.svc.cluster.local from pod dns-7614/dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b: the server could not find the requested resource (get pods dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b) Jan 31 01:44:42.002: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7614.svc.cluster.local from pod dns-7614/dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b: the server could not find the requested resource (get pods dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b) Jan 31 01:44:42.005: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local from pod dns-7614/dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b: the server could not find the requested resource (get pods dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b) Jan 31 01:44:42.007: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local from pod dns-7614/dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b: the server could not find the requested resource (get pods dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b) Jan 31 01:44:42.043: INFO: Unable to read jessie_udp@dns-test-service.dns-7614.svc.cluster.local from pod dns-7614/dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b: the server could not find the requested resource (get pods dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b) Jan 31 01:44:42.046: INFO: Unable to read jessie_tcp@dns-test-service.dns-7614.svc.cluster.local from pod dns-7614/dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b: the server could not find the requested resource (get pods dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b) Jan 31 01:44:42.049: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local from pod dns-7614/dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b: the server could not find the requested resource (get pods dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b) Jan 31 01:44:42.052: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local from pod dns-7614/dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b: the server could not find the requested resource (get pods dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b) Jan 31 01:44:42.086: INFO: Lookups using dns-7614/dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b failed for: [wheezy_udp@dns-test-service.dns-7614.svc.cluster.local wheezy_tcp@dns-test-service.dns-7614.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local jessie_udp@dns-test-service.dns-7614.svc.cluster.local jessie_tcp@dns-test-service.dns-7614.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local] Jan 31 01:44:47.099: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local from pod dns-7614/dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b: the server could not find the requested resource (get pods dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b) Jan 31 01:44:47.102: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local from pod dns-7614/dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b: the server could not find the requested resource (get pods 
dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b) Jan 31 01:44:47.128: INFO: Unable to read jessie_tcp@dns-test-service.dns-7614.svc.cluster.local from pod dns-7614/dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b: the server could not find the requested resource (get pods dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b) Jan 31 01:44:47.131: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local from pod dns-7614/dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b: the server could not find the requested resource (get pods dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b) Jan 31 01:44:47.134: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local from pod dns-7614/dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b: the server could not find the requested resource (get pods dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b) Jan 31 01:44:47.155: INFO: Lookups using dns-7614/dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local jessie_tcp@dns-test-service.dns-7614.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local] Jan 31 01:44:52.096: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local from pod dns-7614/dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b: the server could not find the requested resource (get pods dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b) Jan 31 01:44:52.099: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local from pod dns-7614/dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b: the server could not find the requested resource (get pods dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b) Jan 31 01:44:52.125: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local from pod dns-7614/dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b: the server could not find the requested resource (get pods dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b) Jan 31 01:44:52.128: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local from pod dns-7614/dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b: the server could not find the requested resource (get pods dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b) Jan 31 01:44:52.147: INFO: Lookups using dns-7614/dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local] Jan 31 01:44:57.097: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local from pod dns-7614/dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b: the server could not find the requested resource (get pods dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b) Jan 31 01:44:57.100: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local from pod dns-7614/dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b: the server could not find the requested resource (get pods dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b) Jan 31 01:44:57.125: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local from pod dns-7614/dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b: the server could not find the 
requested resource (get pods dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b) Jan 31 01:44:57.128: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local from pod dns-7614/dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b: the server could not find the requested resource (get pods dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b) Jan 31 01:44:57.145: INFO: Lookups using dns-7614/dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local] Jan 31 01:45:02.098: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local from pod dns-7614/dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b: the server could not find the requested resource (get pods dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b) Jan 31 01:45:02.101: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local from pod dns-7614/dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b: the server could not find the requested resource (get pods dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b) Jan 31 01:45:02.131: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local from pod dns-7614/dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b: the server could not find the requested resource (get pods dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b) Jan 31 01:45:02.135: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local from pod dns-7614/dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b: the server could not find the requested resource (get pods dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b) Jan 31 01:45:02.182: INFO: Lookups using dns-7614/dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local] Jan 31 01:45:07.100: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local from pod dns-7614/dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b: the server could not find the requested resource (get pods dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b) Jan 31 01:45:07.103: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local from pod dns-7614/dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b: the server could not find the requested resource (get pods dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b) Jan 31 01:45:07.133: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local from pod dns-7614/dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b: the server could not find the requested resource (get pods dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b) Jan 31 01:45:07.135: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local from pod dns-7614/dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b: the server could not find the requested resource (get pods dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b) Jan 31 01:45:07.153: INFO: Lookups using dns-7614/dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local 
wheezy_tcp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7614.svc.cluster.local] Jan 31 01:45:12.148: INFO: DNS probes using dns-7614/dns-test-eb4de2f7-27eb-4674-9944-85481c8bdc8b succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:45:12.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7614" for this suite. • [SLOW TEST:37.287 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":311,"completed":249,"skipped":4516,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:45:12.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:45:29.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2735" for this suite. • [SLOW TEST:16.490 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":311,"completed":250,"skipped":4562,"failed":0} SSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:45:29.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating pod liveness-1f83d41a-6b28-4030-b267-184e205b9fbe in namespace container-probe-7706 Jan 31 01:45:33.496: INFO: Started pod liveness-1f83d41a-6b28-4030-b267-184e205b9fbe in namespace container-probe-7706 STEP: checking the pod's current state and verifying that restartCount is present Jan 31 01:45:33.499: INFO: Initial restart count of pod liveness-1f83d41a-6b28-4030-b267-184e205b9fbe is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:49:34.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7706" for this suite. 
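For reference, a minimal client-go sketch of the kind of pod this probe test drives: a container listening on 8080 with a tcp:8080 liveness probe, written against the v1.21-era API this run uses (Probe still embeds Handler there; later releases renamed it ProbeHandler). The pod name, namespace, and agnhost image tag are illustrative assumptions, not values taken from this log.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig path the suite logs above; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-tcp-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "server",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.21", // assumed test image
				Args:  []string{"netexec", "--http-port=8080"},   // serves on 8080
				LivenessProbe: &corev1.Probe{
					// Kubelet opens a TCP connection to 8080 each period; while the
					// connect succeeds the container is considered alive, so
					// restartCount should stay at 0, which is what the test asserts.
					Handler: corev1.Handler{
						TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(8080)},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       10,
					FailureThreshold:    3,
				},
			}},
		},
	}
	created, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod", created.Name)
}

The roughly four-minute [SLOW TEST] duration reported below is the observation window during which the test holds the pod under the probe while checking that restartCount never moves; the pod itself was running within seconds.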
• [SLOW TEST:244.862 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":311,"completed":251,"skipped":4568,"failed":0} SSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:49:34.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jan 31 01:49:34.667: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-515 eaeecc41-eef4-4931-9ffc-83ef31858cf7 1136824 0 2021-01-31 01:49:34 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-01-31 01:49:34 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jan 31 01:49:34.667: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-515 eaeecc41-eef4-4931-9ffc-83ef31858cf7 1136825 0 2021-01-31 01:49:34 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-01-31 01:49:34 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Jan 31 01:49:34.741: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-515 eaeecc41-eef4-4931-9ffc-83ef31858cf7 1136826 0 2021-01-31 01:49:34 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-01-31 01:49:34 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 31 01:49:34.741: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-515 eaeecc41-eef4-4931-9ffc-83ef31858cf7 1136827 0 2021-01-31 01:49:34 +0000 
UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-01-31 01:49:34 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:49:34.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-515" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":311,"completed":252,"skipped":4571,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:49:34.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:49:45.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-547" for this suite. • [SLOW TEST:11.139 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":311,"completed":253,"skipped":4603,"failed":0} SSSS ------------------------------ [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:49:45.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating the pod with failed condition STEP: updating the pod Jan 31 01:51:46.565: INFO: Successfully updated pod "var-expansion-368ddb04-ed5e-414d-a5d8-8a4cb840f43b" STEP: waiting for pod running STEP: deleting the pod gracefully Jan 31 01:51:50.597: INFO: Deleting pod "var-expansion-368ddb04-ed5e-414d-a5d8-8a4cb840f43b" in namespace "var-expansion-9546" Jan 31 01:51:50.603: INFO: Wait up to 5m0s for pod "var-expansion-368ddb04-ed5e-414d-a5d8-8a4cb840f43b" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:52:32.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9546" for this suite. 
• [SLOW TEST:166.712 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":311,"completed":254,"skipped":4607,"failed":0} SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:52:32.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating pod pod-subpath-test-downwardapi-82zw STEP: Creating a pod to test atomic-volume-subpath Jan 31 01:52:32.774: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-82zw" in namespace "subpath-6521" to be "Succeeded or Failed" Jan 31 01:52:32.786: INFO: Pod "pod-subpath-test-downwardapi-82zw": Phase="Pending", Reason="", readiness=false. Elapsed: 11.866062ms Jan 31 01:52:34.792: INFO: Pod "pod-subpath-test-downwardapi-82zw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017643382s Jan 31 01:52:36.798: INFO: Pod "pod-subpath-test-downwardapi-82zw": Phase="Running", Reason="", readiness=true. Elapsed: 4.024228708s Jan 31 01:52:38.804: INFO: Pod "pod-subpath-test-downwardapi-82zw": Phase="Running", Reason="", readiness=true. Elapsed: 6.029849787s Jan 31 01:52:40.808: INFO: Pod "pod-subpath-test-downwardapi-82zw": Phase="Running", Reason="", readiness=true. Elapsed: 8.033619246s Jan 31 01:52:42.811: INFO: Pod "pod-subpath-test-downwardapi-82zw": Phase="Running", Reason="", readiness=true. Elapsed: 10.037152611s Jan 31 01:52:44.815: INFO: Pod "pod-subpath-test-downwardapi-82zw": Phase="Running", Reason="", readiness=true. Elapsed: 12.041102742s Jan 31 01:52:46.819: INFO: Pod "pod-subpath-test-downwardapi-82zw": Phase="Running", Reason="", readiness=true. Elapsed: 14.044866644s Jan 31 01:52:48.823: INFO: Pod "pod-subpath-test-downwardapi-82zw": Phase="Running", Reason="", readiness=true. Elapsed: 16.048982475s Jan 31 01:52:50.827: INFO: Pod "pod-subpath-test-downwardapi-82zw": Phase="Running", Reason="", readiness=true. Elapsed: 18.053012755s Jan 31 01:52:52.830: INFO: Pod "pod-subpath-test-downwardapi-82zw": Phase="Running", Reason="", readiness=true. Elapsed: 20.056284488s Jan 31 01:52:54.834: INFO: Pod "pod-subpath-test-downwardapi-82zw": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.059437992s Jan 31 01:52:56.838: INFO: Pod "pod-subpath-test-downwardapi-82zw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.06358915s STEP: Saw pod success Jan 31 01:52:56.838: INFO: Pod "pod-subpath-test-downwardapi-82zw" satisfied condition "Succeeded or Failed" Jan 31 01:52:56.841: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-downwardapi-82zw container test-container-subpath-downwardapi-82zw: STEP: delete the pod Jan 31 01:52:56.909: INFO: Waiting for pod pod-subpath-test-downwardapi-82zw to disappear Jan 31 01:52:56.916: INFO: Pod pod-subpath-test-downwardapi-82zw no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-82zw Jan 31 01:52:56.916: INFO: Deleting pod "pod-subpath-test-downwardapi-82zw" in namespace "subpath-6521" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:52:56.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6521" for this suite. • [SLOW TEST:24.298 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":311,"completed":255,"skipped":4609,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:52:56.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating a service externalname-service with the type=ExternalName in namespace services-6605 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-6605 I0131 01:52:57.516191 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-6605, replica count: 2 I0131 01:53:00.566589 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0131 01:53:03.566824 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 31 01:53:03.566: INFO: Creating new exec pod 
Jan 31 01:53:08.603: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-6605 exec execpodkkkf8 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Jan 31 01:53:08.871: INFO: stderr: "I0131 01:53:08.769528 2703 log.go:181] (0xc000737600) (0xc0006e0460) Create stream\nI0131 01:53:08.769580 2703 log.go:181] (0xc000737600) (0xc0006e0460) Stream added, broadcasting: 1\nI0131 01:53:08.775387 2703 log.go:181] (0xc000737600) Reply frame received for 1\nI0131 01:53:08.775428 2703 log.go:181] (0xc000737600) (0xc000bba000) Create stream\nI0131 01:53:08.775439 2703 log.go:181] (0xc000737600) (0xc000bba000) Stream added, broadcasting: 3\nI0131 01:53:08.776350 2703 log.go:181] (0xc000737600) Reply frame received for 3\nI0131 01:53:08.776400 2703 log.go:181] (0xc000737600) (0xc000bba0a0) Create stream\nI0131 01:53:08.776416 2703 log.go:181] (0xc000737600) (0xc000bba0a0) Stream added, broadcasting: 5\nI0131 01:53:08.777343 2703 log.go:181] (0xc000737600) Reply frame received for 5\nI0131 01:53:08.863630 2703 log.go:181] (0xc000737600) Data frame received for 3\nI0131 01:53:08.863693 2703 log.go:181] (0xc000bba000) (3) Data frame handling\nI0131 01:53:08.863802 2703 log.go:181] (0xc000737600) Data frame received for 5\nI0131 01:53:08.863839 2703 log.go:181] (0xc000bba0a0) (5) Data frame handling\nI0131 01:53:08.863864 2703 log.go:181] (0xc000bba0a0) (5) Data frame sent\nI0131 01:53:08.863880 2703 log.go:181] (0xc000737600) Data frame received for 5\nI0131 01:53:08.863895 2703 log.go:181] (0xc000bba0a0) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0131 01:53:08.865362 2703 log.go:181] (0xc000737600) Data frame received for 1\nI0131 01:53:08.865386 2703 log.go:181] (0xc0006e0460) (1) Data frame handling\nI0131 01:53:08.865403 2703 log.go:181] (0xc0006e0460) (1) Data frame sent\nI0131 01:53:08.865424 2703 log.go:181] (0xc000737600) (0xc0006e0460) Stream removed, broadcasting: 1\nI0131 01:53:08.865439 2703 log.go:181] (0xc000737600) Go away received\nI0131 01:53:08.865821 2703 log.go:181] (0xc000737600) (0xc0006e0460) Stream removed, broadcasting: 1\nI0131 01:53:08.865847 2703 log.go:181] (0xc000737600) (0xc000bba000) Stream removed, broadcasting: 3\nI0131 01:53:08.865865 2703 log.go:181] (0xc000737600) (0xc000bba0a0) Stream removed, broadcasting: 5\n" Jan 31 01:53:08.872: INFO: stdout: "" Jan 31 01:53:08.872: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-6605 exec execpodkkkf8 -- /bin/sh -x -c nc -zv -t -w 2 10.96.25.2 80' Jan 31 01:53:09.089: INFO: stderr: "I0131 01:53:09.017265 2721 log.go:181] (0xc0008a2bb0) (0xc000b3e3c0) Create stream\nI0131 01:53:09.017333 2721 log.go:181] (0xc0008a2bb0) (0xc000b3e3c0) Stream added, broadcasting: 1\nI0131 01:53:09.019498 2721 log.go:181] (0xc0008a2bb0) Reply frame received for 1\nI0131 01:53:09.019548 2721 log.go:181] (0xc0008a2bb0) (0xc0005400a0) Create stream\nI0131 01:53:09.019562 2721 log.go:181] (0xc0008a2bb0) (0xc0005400a0) Stream added, broadcasting: 3\nI0131 01:53:09.020612 2721 log.go:181] (0xc0008a2bb0) Reply frame received for 3\nI0131 01:53:09.020689 2721 log.go:181] (0xc0008a2bb0) (0xc000b3e460) Create stream\nI0131 01:53:09.020709 2721 log.go:181] (0xc0008a2bb0) (0xc000b3e460) Stream added, broadcasting: 5\nI0131 01:53:09.021875 2721 log.go:181] (0xc0008a2bb0) Reply frame received for 5\nI0131 01:53:09.080527 
2721 log.go:181] (0xc0008a2bb0) Data frame received for 5\nI0131 01:53:09.080567 2721 log.go:181] (0xc000b3e460) (5) Data frame handling\nI0131 01:53:09.080582 2721 log.go:181] (0xc000b3e460) (5) Data frame sent\nI0131 01:53:09.080595 2721 log.go:181] (0xc0008a2bb0) Data frame received for 5\nI0131 01:53:09.080605 2721 log.go:181] (0xc000b3e460) (5) Data frame handling\nI0131 01:53:09.080621 2721 log.go:181] (0xc0008a2bb0) Data frame received for 3\nI0131 01:53:09.080632 2721 log.go:181] (0xc0005400a0) (3) Data frame handling\n+ nc -zv -t -w 2 10.96.25.2 80\nConnection to 10.96.25.2 80 port [tcp/http] succeeded!\nI0131 01:53:09.082527 2721 log.go:181] (0xc0008a2bb0) Data frame received for 1\nI0131 01:53:09.082562 2721 log.go:181] (0xc000b3e3c0) (1) Data frame handling\nI0131 01:53:09.082581 2721 log.go:181] (0xc000b3e3c0) (1) Data frame sent\nI0131 01:53:09.082596 2721 log.go:181] (0xc0008a2bb0) (0xc000b3e3c0) Stream removed, broadcasting: 1\nI0131 01:53:09.082634 2721 log.go:181] (0xc0008a2bb0) Go away received\nI0131 01:53:09.083150 2721 log.go:181] (0xc0008a2bb0) (0xc000b3e3c0) Stream removed, broadcasting: 1\nI0131 01:53:09.083182 2721 log.go:181] (0xc0008a2bb0) (0xc0005400a0) Stream removed, broadcasting: 3\nI0131 01:53:09.083195 2721 log.go:181] (0xc0008a2bb0) (0xc000b3e460) Stream removed, broadcasting: 5\n" Jan 31 01:53:09.089: INFO: stdout: "" Jan 31 01:53:09.089: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:53:09.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6605" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:12.198 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":311,"completed":256,"skipped":4620,"failed":0} SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:53:09.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Performing setup for networking test in namespace pod-network-test-5118 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 31 01:53:09.228: INFO: Waiting up to 10m0s for all (but 0) nodes to 
be schedulable Jan 31 01:53:09.320: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 31 01:53:11.379: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 31 01:53:13.324: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 31 01:53:15.338: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 31 01:53:17.324: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 31 01:53:19.325: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 31 01:53:21.325: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 31 01:53:23.324: INFO: The status of Pod netserver-0 is Running (Ready = true) Jan 31 01:53:23.329: INFO: The status of Pod netserver-1 is Running (Ready = false) Jan 31 01:53:25.334: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jan 31 01:53:29.416: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Jan 31 01:53:29.416: INFO: Going to poll 10.244.2.59 on port 8080 at least 0 times, with a maximum of 34 tries before failing Jan 31 01:53:29.418: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.59:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5118 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 31 01:53:29.419: INFO: >>> kubeConfig: /root/.kube/config I0131 01:53:29.458427 7 log.go:181] (0xc000927e40) (0xc002cfaaa0) Create stream I0131 01:53:29.458465 7 log.go:181] (0xc000927e40) (0xc002cfaaa0) Stream added, broadcasting: 1 I0131 01:53:29.460234 7 log.go:181] (0xc000927e40) Reply frame received for 1 I0131 01:53:29.460285 7 log.go:181] (0xc000927e40) (0xc0005328c0) Create stream I0131 01:53:29.460307 7 log.go:181] (0xc000927e40) (0xc0005328c0) Stream added, broadcasting: 3 I0131 01:53:29.461517 7 log.go:181] (0xc000927e40) Reply frame received for 3 I0131 01:53:29.461573 7 log.go:181] (0xc000927e40) (0xc002440640) Create stream I0131 01:53:29.461590 7 log.go:181] (0xc000927e40) (0xc002440640) Stream added, broadcasting: 5 I0131 01:53:29.462566 7 log.go:181] (0xc000927e40) Reply frame received for 5 I0131 01:53:29.527728 7 log.go:181] (0xc000927e40) Data frame received for 3 I0131 01:53:29.527759 7 log.go:181] (0xc0005328c0) (3) Data frame handling I0131 01:53:29.527774 7 log.go:181] (0xc0005328c0) (3) Data frame sent I0131 01:53:29.527840 7 log.go:181] (0xc000927e40) Data frame received for 3 I0131 01:53:29.527859 7 log.go:181] (0xc0005328c0) (3) Data frame handling I0131 01:53:29.528086 7 log.go:181] (0xc000927e40) Data frame received for 5 I0131 01:53:29.528105 7 log.go:181] (0xc002440640) (5) Data frame handling I0131 01:53:29.529746 7 log.go:181] (0xc000927e40) Data frame received for 1 I0131 01:53:29.529765 7 log.go:181] (0xc002cfaaa0) (1) Data frame handling I0131 01:53:29.529780 7 log.go:181] (0xc002cfaaa0) (1) Data frame sent I0131 01:53:29.529793 7 log.go:181] (0xc000927e40) (0xc002cfaaa0) Stream removed, broadcasting: 1 I0131 01:53:29.529866 7 log.go:181] (0xc000927e40) Go away received I0131 01:53:29.529928 7 log.go:181] (0xc000927e40) (0xc002cfaaa0) Stream removed, broadcasting: 1 I0131 01:53:29.529950 7 log.go:181] (0xc000927e40) (0xc0005328c0) Stream removed, broadcasting: 3 I0131 01:53:29.529978 7 log.go:181] (0xc000927e40) (0xc002440640) Stream removed, 
broadcasting: 5 Jan 31 01:53:29.529: INFO: Found all 1 expected endpoints: [netserver-0] Jan 31 01:53:29.530: INFO: Going to poll 10.244.1.215 on port 8080 at least 0 times, with a maximum of 34 tries before failing Jan 31 01:53:29.532: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.215:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5118 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 31 01:53:29.532: INFO: >>> kubeConfig: /root/.kube/config I0131 01:53:29.560614 7 log.go:181] (0xc00002f760) (0xc002fec500) Create stream I0131 01:53:29.560642 7 log.go:181] (0xc00002f760) (0xc002fec500) Stream added, broadcasting: 1 I0131 01:53:29.562361 7 log.go:181] (0xc00002f760) Reply frame received for 1 I0131 01:53:29.562434 7 log.go:181] (0xc00002f760) (0xc00476a000) Create stream I0131 01:53:29.562456 7 log.go:181] (0xc00002f760) (0xc00476a000) Stream added, broadcasting: 3 I0131 01:53:29.563475 7 log.go:181] (0xc00002f760) Reply frame received for 3 I0131 01:53:29.563513 7 log.go:181] (0xc00002f760) (0xc00476a0a0) Create stream I0131 01:53:29.563525 7 log.go:181] (0xc00002f760) (0xc00476a0a0) Stream added, broadcasting: 5 I0131 01:53:29.564295 7 log.go:181] (0xc00002f760) Reply frame received for 5 I0131 01:53:29.633377 7 log.go:181] (0xc00002f760) Data frame received for 5 I0131 01:53:29.633410 7 log.go:181] (0xc00476a0a0) (5) Data frame handling I0131 01:53:29.633430 7 log.go:181] (0xc00002f760) Data frame received for 3 I0131 01:53:29.633451 7 log.go:181] (0xc00476a000) (3) Data frame handling I0131 01:53:29.633463 7 log.go:181] (0xc00476a000) (3) Data frame sent I0131 01:53:29.633471 7 log.go:181] (0xc00002f760) Data frame received for 3 I0131 01:53:29.633475 7 log.go:181] (0xc00476a000) (3) Data frame handling I0131 01:53:29.634992 7 log.go:181] (0xc00002f760) Data frame received for 1 I0131 01:53:29.635013 7 log.go:181] (0xc002fec500) (1) Data frame handling I0131 01:53:29.635037 7 log.go:181] (0xc002fec500) (1) Data frame sent I0131 01:53:29.635057 7 log.go:181] (0xc00002f760) (0xc002fec500) Stream removed, broadcasting: 1 I0131 01:53:29.635078 7 log.go:181] (0xc00002f760) Go away received I0131 01:53:29.635135 7 log.go:181] (0xc00002f760) (0xc002fec500) Stream removed, broadcasting: 1 I0131 01:53:29.635153 7 log.go:181] (0xc00002f760) (0xc00476a000) Stream removed, broadcasting: 3 I0131 01:53:29.635162 7 log.go:181] (0xc00002f760) (0xc00476a0a0) Stream removed, broadcasting: 5 Jan 31 01:53:29.635: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:53:29.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5118" for this suite. 
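The curl the framework execs above hits agnhost netexec's /hostName endpoint on each netserver pod IP and compares the returned name against the expected endpoint list. Outside the framework the same probe is a few lines of Go; run it from somewhere with pod-network reachability (a node, or an in-cluster pod like host-test-container-pod above). Port 8080 matches the log; the IP argument is whichever pod you are checking.

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: hostname-probe <pod-ip>")
		os.Exit(1)
	}
	// Mirrors: curl -g -q -s --max-time 15 --connect-timeout 1 http://IP:8080/hostName
	client := &http.Client{Timeout: 15 * time.Second}
	resp, err := client.Get(fmt.Sprintf("http://%s:8080/hostName", os.Args[1]))
	if err != nil {
		fmt.Fprintln(os.Stderr, "probe failed:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		fmt.Fprintln(os.Stderr, "read failed:", err)
		os.Exit(1)
	}
	// agnhost netexec answers with the pod's hostname, e.g. "netserver-0".
	fmt.Printf("%s reports %q\n", os.Args[1], string(body))
}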
• [SLOW TEST:20.510 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:27 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:30 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":257,"skipped":4622,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:53:29.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [AfterEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:53:33.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9935" for this suite. 
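What "image defaults" means concretely in the test that follows: when a container spec leaves both command and args unset, the kubelet runs the image's ENTRYPOINT with its CMD untouched. A small sketch of that container shape, with an assumed image; it only prints the spec.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Kubernetes <-> Dockerfile mapping:
	//   Command overrides ENTRYPOINT; Args overrides CMD.
	// Leaving both nil, as here, keeps the image defaults, which is the
	// behavior this conformance test pins down.
	c := corev1.Container{
		Name:  "defaults",
		Image: "k8s.gcr.io/e2e-test-images/agnhost:2.21", // assumed image
		// Command: nil  -> image ENTRYPOINT used as-is
		// Args:    nil  -> image CMD used as-is
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}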
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":311,"completed":258,"skipped":4642,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:53:33.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test downward API volume plugin Jan 31 01:53:33.919: INFO: Waiting up to 5m0s for pod "downwardapi-volume-65d6bf48-e8df-47bb-b30a-c279ffb1e929" in namespace "downward-api-4467" to be "Succeeded or Failed" Jan 31 01:53:33.930: INFO: Pod "downwardapi-volume-65d6bf48-e8df-47bb-b30a-c279ffb1e929": Phase="Pending", Reason="", readiness=false. Elapsed: 10.729372ms Jan 31 01:53:36.165: INFO: Pod "downwardapi-volume-65d6bf48-e8df-47bb-b30a-c279ffb1e929": Phase="Pending", Reason="", readiness=false. Elapsed: 2.246012431s Jan 31 01:53:38.176: INFO: Pod "downwardapi-volume-65d6bf48-e8df-47bb-b30a-c279ffb1e929": Phase="Pending", Reason="", readiness=false. Elapsed: 4.257052194s Jan 31 01:53:40.179: INFO: Pod "downwardapi-volume-65d6bf48-e8df-47bb-b30a-c279ffb1e929": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.260050313s STEP: Saw pod success Jan 31 01:53:40.179: INFO: Pod "downwardapi-volume-65d6bf48-e8df-47bb-b30a-c279ffb1e929" satisfied condition "Succeeded or Failed" Jan 31 01:53:40.182: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-65d6bf48-e8df-47bb-b30a-c279ffb1e929 container client-container: STEP: delete the pod Jan 31 01:53:40.220: INFO: Waiting for pod downwardapi-volume-65d6bf48-e8df-47bb-b30a-c279ffb1e929 to disappear Jan 31 01:53:40.231: INFO: Pod downwardapi-volume-65d6bf48-e8df-47bb-b30a-c279ffb1e929 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:53:40.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4467" for this suite. 
• [SLOW TEST:6.421 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":311,"completed":259,"skipped":4661,"failed":0} SSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:53:40.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating service multi-endpoint-test in namespace services-130 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-130 to expose endpoints map[] Jan 31 01:53:40.393: INFO: Failed to get Endpoints object: endpoints "multi-endpoint-test" not found Jan 31 01:53:41.403: INFO: successfully validated that service multi-endpoint-test in namespace services-130 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-130 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-130 to expose endpoints map[pod1:[100]] Jan 31 01:53:45.470: INFO: successfully validated that service multi-endpoint-test in namespace services-130 exposes endpoints map[pod1:[100]] STEP: Creating pod pod2 in namespace services-130 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-130 to expose endpoints map[pod1:[100] pod2:[101]] Jan 31 01:53:49.520: INFO: successfully validated that service multi-endpoint-test in namespace services-130 exposes endpoints map[pod1:[100] pod2:[101]] STEP: Deleting pod pod1 in namespace services-130 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-130 to expose endpoints map[pod2:[101]] Jan 31 01:53:49.613: INFO: successfully validated that service multi-endpoint-test in namespace services-130 exposes endpoints map[pod2:[101]] STEP: Deleting pod pod2 in namespace services-130 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-130 to expose endpoints map[] Jan 31 01:53:49.649: INFO: successfully validated that service multi-endpoint-test in namespace services-130 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:53:50.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-130" for this suite.
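The endpoint maps in the log above (map[pod1:[100] pod2:[101]] and its shrinking variants) fall out of a single Service with two named ports whose targetPorts differ, selecting pods that each serve only one of them. A sketch of such a Service against the same client-go vintage; the selector label and namespace are assumptions, the target ports mirror the log.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "multi-endpoint-test"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "multiport"}, // assumed label
			Ports: []corev1.ServicePort{
				// pod1 serves 100, pod2 serves 101; the endpoints controller
				// fills each port's address set independently, which is why
				// deleting pod1 leaves map[pod2:[101]] as seen above.
				{Name: "portname1", Port: 80, TargetPort: intstr.FromInt(100), Protocol: corev1.ProtocolTCP},
				{Name: "portname2", Port: 81, TargetPort: intstr.FromInt(101), Protocol: corev1.ProtocolTCP},
			},
		},
	}
	created, err := cs.CoreV1().Services("default").Create(context.TODO(), svc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created service", created.Name)
}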
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:10.055 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":311,"completed":260,"skipped":4667,"failed":0} [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:53:50.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:77 Jan 31 01:53:50.515: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Registering the sample API server. Jan 31 01:53:51.340: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Jan 31 01:53:53.833: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747654831, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747654831, loc:(*time.Location)(0x79bd420)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747654831, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747654831, loc:(*time.Location)(0x79bd420)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 31 01:53:55.843: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747654831, loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747654831, loc:(*time.Location)(0x79bd420)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747654831, 
loc:(*time.Location)(0x79bd420)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747654831, loc:(*time.Location)(0x79bd420)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 31 01:53:58.561: INFO: Waited 718.851863ms for the sample-apiserver to be ready to handle requests. STEP: Read Status for v1alpha1.wardle.example.com STEP: kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}' STEP: List APIServices Jan 31 01:53:58.704: INFO: Found v1alpha1.wardle.example.com in APIServiceList [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:68 [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:53:59.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-3679" for this suite. • [SLOW TEST:9.407 seconds] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":311,"completed":261,"skipped":4667,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:53:59.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7863 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-7863 I0131 01:54:00.321007 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-7863, replica count: 2 I0131 01:54:03.371374 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0131 01:54:06.371657 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 31 01:54:06.371: INFO: Creating new exec pod Jan 31 01:54:11.417: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-7863 exec execpodk5g6g -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Jan 31 01:54:11.670: INFO: stderr: "I0131 01:54:11.572439 2739 log.go:181] (0xc00003ac60) (0xc000b2e1e0) Create stream\nI0131 01:54:11.572509 2739 log.go:181] (0xc00003ac60) (0xc000b2e1e0) Stream added, broadcasting: 1\nI0131 01:54:11.574536 2739 log.go:181] (0xc00003ac60) Reply frame received for 1\nI0131 01:54:11.574584 2739 log.go:181] (0xc00003ac60) (0xc000227d60) Create stream\nI0131 01:54:11.574599 2739 log.go:181] (0xc00003ac60) (0xc000227d60) Stream added, broadcasting: 3\nI0131 01:54:11.575706 2739 log.go:181] (0xc00003ac60) Reply frame received for 3\nI0131 01:54:11.575760 2739 log.go:181] (0xc00003ac60) (0xc000b2e280) Create stream\nI0131 01:54:11.575774 2739 log.go:181] (0xc00003ac60) (0xc000b2e280) Stream added, broadcasting: 5\nI0131 01:54:11.577007 2739 log.go:181] (0xc00003ac60) Reply frame received for 5\nI0131 01:54:11.663200 2739 log.go:181] (0xc00003ac60) Data frame received for 5\nI0131 01:54:11.663227 2739 log.go:181] (0xc000b2e280) (5) Data frame handling\nI0131 01:54:11.663239 2739 log.go:181] (0xc000b2e280) (5) Data frame sent\nI0131 01:54:11.663250 2739 log.go:181] (0xc00003ac60) Data frame received for 5\nI0131 01:54:11.663272 2739 log.go:181] (0xc000b2e280) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0131 01:54:11.663303 2739 log.go:181] (0xc000b2e280) (5) Data frame sent\nI0131 01:54:11.663328 2739 log.go:181] (0xc00003ac60) Data frame received for 5\nI0131 01:54:11.663336 2739 log.go:181] (0xc000b2e280) (5) Data frame handling\nI0131 01:54:11.663501 2739 log.go:181] (0xc00003ac60) Data frame received for 3\nI0131 01:54:11.663520 2739 log.go:181] (0xc000227d60) (3) Data frame handling\nI0131 01:54:11.665178 2739 log.go:181] (0xc00003ac60) Data frame received for 1\nI0131 01:54:11.665198 2739 log.go:181] (0xc000b2e1e0) (1) Data frame handling\nI0131 01:54:11.665212 2739 log.go:181] (0xc000b2e1e0) (1) Data frame sent\nI0131 01:54:11.665228 2739 log.go:181] (0xc00003ac60) (0xc000b2e1e0) Stream removed, broadcasting: 1\nI0131 01:54:11.665274 2739 log.go:181] (0xc00003ac60) Go away received\nI0131 01:54:11.665617 2739 log.go:181] (0xc00003ac60) (0xc000b2e1e0) Stream removed, broadcasting: 1\nI0131 01:54:11.665635 2739 log.go:181] (0xc00003ac60) (0xc000227d60) Stream removed, broadcasting: 3\nI0131 01:54:11.665648 2739 log.go:181] (0xc00003ac60) (0xc000b2e280) Stream removed, broadcasting: 5\n" Jan 31 01:54:11.670: INFO: stdout: "" Jan 31 01:54:11.671: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-7863 exec execpodk5g6g -- /bin/sh -x -c nc -zv -t -w 2 10.96.219.181 80' Jan 31 01:54:11.889: INFO: stderr: "I0131 01:54:11.797884 2757 log.go:181] (0xc000e133f0) (0xc00089c8c0) Create stream\nI0131 01:54:11.797949 2757 log.go:181] (0xc000e133f0) (0xc00089c8c0) Stream added, broadcasting: 1\nI0131 01:54:11.800342 2757 log.go:181] (0xc000e133f0) Reply frame received for 1\nI0131 01:54:11.800387 2757 log.go:181] (0xc000e133f0) (0xc000e0a280) Create stream\nI0131 01:54:11.800403 2757 log.go:181] (0xc000e133f0) (0xc000e0a280) Stream added, broadcasting: 3\nI0131 01:54:11.801428 2757 log.go:181] (0xc000e133f0) Reply frame received for 3\nI0131 01:54:11.801463 2757 log.go:181] (0xc000e133f0) (0xc000d42000) Create stream\nI0131 01:54:11.801473 
2757 log.go:181] (0xc000e133f0) (0xc000d42000) Stream added, broadcasting: 5\nI0131 01:54:11.802519 2757 log.go:181] (0xc000e133f0) Reply frame received for 5\nI0131 01:54:11.881757 2757 log.go:181] (0xc000e133f0) Data frame received for 3\nI0131 01:54:11.881802 2757 log.go:181] (0xc000e0a280) (3) Data frame handling\nI0131 01:54:11.881912 2757 log.go:181] (0xc000e133f0) Data frame received for 5\nI0131 01:54:11.881970 2757 log.go:181] (0xc000d42000) (5) Data frame handling\nI0131 01:54:11.881997 2757 log.go:181] (0xc000d42000) (5) Data frame sent\nI0131 01:54:11.882011 2757 log.go:181] (0xc000e133f0) Data frame received for 5\nI0131 01:54:11.882022 2757 log.go:181] (0xc000d42000) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.219.181 80\nConnection to 10.96.219.181 80 port [tcp/http] succeeded!\nI0131 01:54:11.883240 2757 log.go:181] (0xc000e133f0) Data frame received for 1\nI0131 01:54:11.883259 2757 log.go:181] (0xc00089c8c0) (1) Data frame handling\nI0131 01:54:11.883270 2757 log.go:181] (0xc00089c8c0) (1) Data frame sent\nI0131 01:54:11.883287 2757 log.go:181] (0xc000e133f0) (0xc00089c8c0) Stream removed, broadcasting: 1\nI0131 01:54:11.883428 2757 log.go:181] (0xc000e133f0) Go away received\nI0131 01:54:11.883587 2757 log.go:181] (0xc000e133f0) (0xc00089c8c0) Stream removed, broadcasting: 1\nI0131 01:54:11.883609 2757 log.go:181] (0xc000e133f0) (0xc000e0a280) Stream removed, broadcasting: 3\nI0131 01:54:11.883616 2757 log.go:181] (0xc000e133f0) (0xc000d42000) Stream removed, broadcasting: 5\n" Jan 31 01:54:11.889: INFO: stdout: "" Jan 31 01:54:11.889: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-7863 exec execpodk5g6g -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 31802' Jan 31 01:54:12.091: INFO: stderr: "I0131 01:54:12.026280 2775 log.go:181] (0xc000e90000) (0xc000da8000) Create stream\nI0131 01:54:12.026357 2775 log.go:181] (0xc000e90000) (0xc000da8000) Stream added, broadcasting: 1\nI0131 01:54:12.033153 2775 log.go:181] (0xc000e90000) Reply frame received for 1\nI0131 01:54:12.033199 2775 log.go:181] (0xc000e90000) (0xc000c0a460) Create stream\nI0131 01:54:12.033210 2775 log.go:181] (0xc000e90000) (0xc000c0a460) Stream added, broadcasting: 3\nI0131 01:54:12.034539 2775 log.go:181] (0xc000e90000) Reply frame received for 3\nI0131 01:54:12.034577 2775 log.go:181] (0xc000e90000) (0xc000c0a500) Create stream\nI0131 01:54:12.034588 2775 log.go:181] (0xc000e90000) (0xc000c0a500) Stream added, broadcasting: 5\nI0131 01:54:12.035672 2775 log.go:181] (0xc000e90000) Reply frame received for 5\nI0131 01:54:12.084830 2775 log.go:181] (0xc000e90000) Data frame received for 3\nI0131 01:54:12.084937 2775 log.go:181] (0xc000c0a460) (3) Data frame handling\nI0131 01:54:12.084978 2775 log.go:181] (0xc000e90000) Data frame received for 5\nI0131 01:54:12.085013 2775 log.go:181] (0xc000c0a500) (5) Data frame handling\nI0131 01:54:12.085034 2775 log.go:181] (0xc000c0a500) (5) Data frame sent\nI0131 01:54:12.085051 2775 log.go:181] (0xc000e90000) Data frame received for 5\nI0131 01:54:12.085066 2775 log.go:181] (0xc000c0a500) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.14 31802\nConnection to 172.18.0.14 31802 port [tcp/31802] succeeded!\nI0131 01:54:12.086400 2775 log.go:181] (0xc000e90000) Data frame received for 1\nI0131 01:54:12.086415 2775 log.go:181] (0xc000da8000) (1) Data frame handling\nI0131 01:54:12.086425 2775 log.go:181] (0xc000da8000) (1) Data frame sent\nI0131 01:54:12.086445 2775 log.go:181] 
(0xc000e90000) (0xc000da8000) Stream removed, broadcasting: 1\nI0131 01:54:12.086559 2775 log.go:181] (0xc000e90000) Go away received\nI0131 01:54:12.086729 2775 log.go:181] (0xc000e90000) (0xc000da8000) Stream removed, broadcasting: 1\nI0131 01:54:12.086747 2775 log.go:181] (0xc000e90000) (0xc000c0a460) Stream removed, broadcasting: 3\nI0131 01:54:12.086753 2775 log.go:181] (0xc000e90000) (0xc000c0a500) Stream removed, broadcasting: 5\n" Jan 31 01:54:12.091: INFO: stdout: "" Jan 31 01:54:12.092: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-7863 exec execpodk5g6g -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.16 31802' Jan 31 01:54:12.287: INFO: stderr: "I0131 01:54:12.220745 2793 log.go:181] (0xc000140370) (0xc000a120a0) Create stream\nI0131 01:54:12.220799 2793 log.go:181] (0xc000140370) (0xc000a120a0) Stream added, broadcasting: 1\nI0131 01:54:12.222369 2793 log.go:181] (0xc000140370) Reply frame received for 1\nI0131 01:54:12.222405 2793 log.go:181] (0xc000140370) (0xc00070c320) Create stream\nI0131 01:54:12.222414 2793 log.go:181] (0xc000140370) (0xc00070c320) Stream added, broadcasting: 3\nI0131 01:54:12.223154 2793 log.go:181] (0xc000140370) Reply frame received for 3\nI0131 01:54:12.223182 2793 log.go:181] (0xc000140370) (0xc00070ca00) Create stream\nI0131 01:54:12.223191 2793 log.go:181] (0xc000140370) (0xc00070ca00) Stream added, broadcasting: 5\nI0131 01:54:12.223947 2793 log.go:181] (0xc000140370) Reply frame received for 5\nI0131 01:54:12.279463 2793 log.go:181] (0xc000140370) Data frame received for 3\nI0131 01:54:12.279503 2793 log.go:181] (0xc00070c320) (3) Data frame handling\nI0131 01:54:12.279557 2793 log.go:181] (0xc000140370) Data frame received for 5\nI0131 01:54:12.279580 2793 log.go:181] (0xc00070ca00) (5) Data frame handling\nI0131 01:54:12.279599 2793 log.go:181] (0xc00070ca00) (5) Data frame sent\nI0131 01:54:12.279613 2793 log.go:181] (0xc000140370) Data frame received for 5\nI0131 01:54:12.279624 2793 log.go:181] (0xc00070ca00) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.16 31802\nConnection to 172.18.0.16 31802 port [tcp/31802] succeeded!\nI0131 01:54:12.281169 2793 log.go:181] (0xc000140370) Data frame received for 1\nI0131 01:54:12.281189 2793 log.go:181] (0xc000a120a0) (1) Data frame handling\nI0131 01:54:12.281197 2793 log.go:181] (0xc000a120a0) (1) Data frame sent\nI0131 01:54:12.281214 2793 log.go:181] (0xc000140370) (0xc000a120a0) Stream removed, broadcasting: 1\nI0131 01:54:12.281506 2793 log.go:181] (0xc000140370) Go away received\nI0131 01:54:12.281571 2793 log.go:181] (0xc000140370) (0xc000a120a0) Stream removed, broadcasting: 1\nI0131 01:54:12.281611 2793 log.go:181] (0xc000140370) (0xc00070c320) Stream removed, broadcasting: 3\nI0131 01:54:12.281628 2793 log.go:181] (0xc000140370) (0xc00070ca00) Stream removed, broadcasting: 5\n" Jan 31 01:54:12.287: INFO: stdout: "" Jan 31 01:54:12.287: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:54:12.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7863" for this suite. 
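The four exec probes above cover every path to the converted service: the DNS name, the ClusterIP, and the NodePort on each of the two nodes, with nc's exit status doing the asserting. They can be replayed by hand; a minimal sketch using the names and addresses from this run (they differ on every run):

kubectl --namespace=services-7863 exec execpodk5g6g -- \
  /bin/sh -x -c 'nc -zv -t -w 2 externalname-service 80'   # by DNS name
kubectl --namespace=services-7863 exec execpodk5g6g -- \
  /bin/sh -x -c 'nc -zv -t -w 2 10.96.219.181 80'          # by ClusterIP
kubectl --namespace=services-7863 exec execpodk5g6g -- \
  /bin/sh -x -c 'nc -zv -t -w 2 172.18.0.14 31802'         # NodePort, first node
kubectl --namespace=services-7863 exec execpodk5g6g -- \
  /bin/sh -x -c 'nc -zv -t -w 2 172.18.0.16 31802'         # NodePort, second node

nc exits non-zero on a timeout, and kubectl exec propagates that as its own exit status, which is what fails the step.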
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:12.677 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":311,"completed":262,"skipped":4692,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:54:12.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test override all Jan 31 01:54:12.497: INFO: Waiting up to 5m0s for pod "client-containers-fbbfdb56-facd-456b-8eff-4c6d7be3cc26" in namespace "containers-3560" to be "Succeeded or Failed" Jan 31 01:54:12.500: INFO: Pod "client-containers-fbbfdb56-facd-456b-8eff-4c6d7be3cc26": Phase="Pending", Reason="", readiness=false. Elapsed: 3.173402ms Jan 31 01:54:14.626: INFO: Pod "client-containers-fbbfdb56-facd-456b-8eff-4c6d7be3cc26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129305578s Jan 31 01:54:16.630: INFO: Pod "client-containers-fbbfdb56-facd-456b-8eff-4c6d7be3cc26": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.133843534s STEP: Saw pod success Jan 31 01:54:16.631: INFO: Pod "client-containers-fbbfdb56-facd-456b-8eff-4c6d7be3cc26" satisfied condition "Succeeded or Failed" Jan 31 01:54:16.634: INFO: Trying to get logs from node latest-worker2 pod client-containers-fbbfdb56-facd-456b-8eff-4c6d7be3cc26 container agnhost-container: STEP: delete the pod Jan 31 01:54:16.672: INFO: Waiting for pod client-containers-fbbfdb56-facd-456b-8eff-4c6d7be3cc26 to disappear Jan 31 01:54:16.680: INFO: Pod client-containers-fbbfdb56-facd-456b-8eff-4c6d7be3cc26 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:54:16.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3560" for this suite. 
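The "override all" pod above supplies both a command and args, replacing the image's ENTRYPOINT and CMD respectively. The generated manifest is not echoed into the log; a minimal equivalent, with a busybox stand-in for the suite's agnhost image and hypothetical names:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: override-demo               # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: demo
    image: busybox                  # stand-in; the suite uses agnhost
    command: ["/bin/echo"]            # replaces the image ENTRYPOINT
    args: ["overridden", "arguments"] # replaces the image CMD
EOF

Once the pod reaches Succeeded, kubectl logs override-demo should print "overridden arguments" — the same property the test asserts on the agnhost container's output.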
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":311,"completed":263,"skipped":4734,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:54:16.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Jan 31 01:54:16.806: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jan 31 01:54:20.383: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6264 --namespace=crd-publish-openapi-6264 create -f -' Jan 31 01:54:25.360: INFO: stderr: "" Jan 31 01:54:25.360: INFO: stdout: "e2e-test-crd-publish-openapi-1181-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jan 31 01:54:25.360: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6264 --namespace=crd-publish-openapi-6264 delete e2e-test-crd-publish-openapi-1181-crds test-cr' Jan 31 01:54:25.567: INFO: stderr: "" Jan 31 01:54:25.567: INFO: stdout: "e2e-test-crd-publish-openapi-1181-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Jan 31 01:54:25.568: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6264 --namespace=crd-publish-openapi-6264 apply -f -' Jan 31 01:54:25.899: INFO: stderr: "" Jan 31 01:54:25.899: INFO: stdout: "e2e-test-crd-publish-openapi-1181-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jan 31 01:54:25.899: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6264 --namespace=crd-publish-openapi-6264 delete e2e-test-crd-publish-openapi-1181-crds test-cr' Jan 31 01:54:26.004: INFO: stderr: "" Jan 31 01:54:26.004: INFO: stdout: "e2e-test-crd-publish-openapi-1181-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jan 31 01:54:26.004: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6264 explain e2e-test-crd-publish-openapi-1181-crds' Jan 31 01:54:26.285: INFO: stderr: "" Jan 31 01:54:26.285: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1181-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:54:29.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6264" for this suite. • [SLOW TEST:13.161 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":311,"completed":264,"skipped":4773,"failed":0} SSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:54:29.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test substitution in volume subpath Jan 31 01:54:29.954: INFO: Waiting up to 5m0s for pod "var-expansion-6d49986b-4cf6-49a8-9a3d-178080682720" in namespace "var-expansion-8990" to be "Succeeded or Failed" Jan 31 01:54:29.958: INFO: Pod "var-expansion-6d49986b-4cf6-49a8-9a3d-178080682720": Phase="Pending", Reason="", readiness=false. Elapsed: 3.558933ms Jan 31 01:54:31.963: INFO: Pod "var-expansion-6d49986b-4cf6-49a8-9a3d-178080682720": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008813716s Jan 31 01:54:33.968: INFO: Pod "var-expansion-6d49986b-4cf6-49a8-9a3d-178080682720": Phase="Running", Reason="", readiness=true. Elapsed: 4.01370437s Jan 31 01:54:35.971: INFO: Pod "var-expansion-6d49986b-4cf6-49a8-9a3d-178080682720": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017189969s STEP: Saw pod success Jan 31 01:54:35.971: INFO: Pod "var-expansion-6d49986b-4cf6-49a8-9a3d-178080682720" satisfied condition "Succeeded or Failed" Jan 31 01:54:35.974: INFO: Trying to get logs from node latest-worker pod var-expansion-6d49986b-4cf6-49a8-9a3d-178080682720 container dapi-container: STEP: delete the pod Jan 31 01:54:36.007: INFO: Waiting for pod var-expansion-6d49986b-4cf6-49a8-9a3d-178080682720 to disappear Jan 31 01:54:36.022: INFO: Pod var-expansion-6d49986b-4cf6-49a8-9a3d-178080682720 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:54:36.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8990" for this suite. 
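Two records above, the CustomResourcePublishOpenAPI test exercised a CRD that preserves unknown fields at the schema root, which is why kubectl accepted a CR with arbitrary properties and kubectl explain printed an empty DESCRIPTION. A minimal sketch of such a CRD, with a hypothetical group and kind (the suite generates its own e2e-test-crd-publish-openapi-* names):

kubectl create -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: demos.example.com           # hypothetical
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: demos
    singular: demo
    kind: Demo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true   # accept any properties
EOF

With that flag at the root, validation lets unknown fields through on both create and apply, matching the create/apply/delete sequence in the log.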
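The variable-expansion pod just above relies on subPathExpr, which expands $(VAR) references from the container's environment when mounting a volume. Its manifest is not shown in the log; a minimal sketch of the mechanism under that assumption, with hypothetical names:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo          # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "ls /volume_mount"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: workdir
      mountPath: /volume_mount
      subPathExpr: $(POD_NAME)      # substituted per pod at mount time
  volumes:
  - name: workdir
    emptyDir: {}
EOF

The container ends up mounted on a per-pod subdirectory of the emptyDir, which is the substitution the test verifies before reporting Succeeded.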
• [SLOW TEST:6.184 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":311,"completed":265,"skipped":4780,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:54:36.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test emptydir 0644 on tmpfs Jan 31 01:54:36.111: INFO: Waiting up to 5m0s for pod "pod-7e80a529-c589-4522-bb4b-391ae9f7d149" in namespace "emptydir-1076" to be "Succeeded or Failed" Jan 31 01:54:36.140: INFO: Pod "pod-7e80a529-c589-4522-bb4b-391ae9f7d149": Phase="Pending", Reason="", readiness=false. Elapsed: 28.484834ms Jan 31 01:54:38.144: INFO: Pod "pod-7e80a529-c589-4522-bb4b-391ae9f7d149": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032833007s Jan 31 01:54:40.149: INFO: Pod "pod-7e80a529-c589-4522-bb4b-391ae9f7d149": Phase="Running", Reason="", readiness=true. Elapsed: 4.037409052s Jan 31 01:54:42.154: INFO: Pod "pod-7e80a529-c589-4522-bb4b-391ae9f7d149": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.042324825s STEP: Saw pod success Jan 31 01:54:42.154: INFO: Pod "pod-7e80a529-c589-4522-bb4b-391ae9f7d149" satisfied condition "Succeeded or Failed" Jan 31 01:54:42.157: INFO: Trying to get logs from node latest-worker pod pod-7e80a529-c589-4522-bb4b-391ae9f7d149 container test-container: STEP: delete the pod Jan 31 01:54:42.195: INFO: Waiting for pod pod-7e80a529-c589-4522-bb4b-391ae9f7d149 to disappear Jan 31 01:54:42.210: INFO: Pod pod-7e80a529-c589-4522-bb4b-391ae9f7d149 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:54:42.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1076" for this suite. 
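The (root,0644,tmpfs) variant above asks for an emptyDir backed by memory and checks that a file created as root with mode 0644 comes back with exactly those attributes. The suite uses its mounttest image for the check; a shell approximation of the same pod, with hypothetical names:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-tmpfs         # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hello > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f && mount | grep /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                # tmpfs-backed: the (tmpfs) in the test name
EOF

The mount line should report tmpfs, and ls -l should show -rw-r--r-- owned by root.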
• [SLOW TEST:6.201 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":266,"skipped":4788,"failed":0} SSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:54:42.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating configMap configmap-5273/configmap-test-4430c0d3-a8c5-441f-bcc8-b89953fc5f4b STEP: Creating a pod to test consume configMaps Jan 31 01:54:42.319: INFO: Waiting up to 5m0s for pod "pod-configmaps-2381201d-10fb-43e6-a6bc-16634e95bacf" in namespace "configmap-5273" to be "Succeeded or Failed" Jan 31 01:54:42.323: INFO: Pod "pod-configmaps-2381201d-10fb-43e6-a6bc-16634e95bacf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.636088ms Jan 31 01:54:44.327: INFO: Pod "pod-configmaps-2381201d-10fb-43e6-a6bc-16634e95bacf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007669292s Jan 31 01:54:46.331: INFO: Pod "pod-configmaps-2381201d-10fb-43e6-a6bc-16634e95bacf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011951363s STEP: Saw pod success Jan 31 01:54:46.331: INFO: Pod "pod-configmaps-2381201d-10fb-43e6-a6bc-16634e95bacf" satisfied condition "Succeeded or Failed" Jan 31 01:54:46.334: INFO: Trying to get logs from node latest-worker pod pod-configmaps-2381201d-10fb-43e6-a6bc-16634e95bacf container env-test: STEP: delete the pod Jan 31 01:54:46.406: INFO: Waiting for pod pod-configmaps-2381201d-10fb-43e6-a6bc-16634e95bacf to disappear Jan 31 01:54:46.413: INFO: Pod pod-configmaps-2381201d-10fb-43e6-a6bc-16634e95bacf no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:54:46.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5273" for this suite. 
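The ConfigMap test above injects a key into the container environment with valueFrom/configMapKeyRef and asserts on the resulting variable in the container's output. A minimal sketch, with hypothetical names standing in for the generated configmap-test-* object:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-demo              # hypothetical
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo         # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-demo
          key: data-1
EOF

kubectl logs pod-configmaps-demo should then contain CONFIG_DATA_1=value-1.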
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":311,"completed":267,"skipped":4798,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:54:46.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating service in namespace services-8653 STEP: creating service affinity-clusterip-transition in namespace services-8653 STEP: creating replication controller affinity-clusterip-transition in namespace services-8653 I0131 01:54:46.886829 7 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-8653, replica count: 3 I0131 01:54:49.937226 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0131 01:54:52.937516 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 31 01:54:52.945: INFO: Creating new exec pod Jan 31 01:54:57.992: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-8653 exec execpod-affinity4jn4t -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Jan 31 01:54:58.216: INFO: stderr: "I0131 01:54:58.151390 2902 log.go:181] (0xc0000d1130) (0xc000afa320) Create stream\nI0131 01:54:58.151472 2902 log.go:181] (0xc0000d1130) (0xc000afa320) Stream added, broadcasting: 1\nI0131 01:54:58.157475 2902 log.go:181] (0xc0000d1130) Reply frame received for 1\nI0131 01:54:58.157513 2902 log.go:181] (0xc0000d1130) (0xc000afa3c0) Create stream\nI0131 01:54:58.157523 2902 log.go:181] (0xc0000d1130) (0xc000afa3c0) Stream added, broadcasting: 3\nI0131 01:54:58.158496 2902 log.go:181] (0xc0000d1130) Reply frame received for 3\nI0131 01:54:58.158560 2902 log.go:181] (0xc0000d1130) (0xc000889360) Create stream\nI0131 01:54:58.158589 2902 log.go:181] (0xc0000d1130) (0xc000889360) Stream added, broadcasting: 5\nI0131 01:54:58.159741 2902 log.go:181] (0xc0000d1130) Reply frame received for 5\nI0131 01:54:58.209161 2902 log.go:181] (0xc0000d1130) Data frame received for 5\nI0131 01:54:58.209218 2902 log.go:181] (0xc000889360) (5) Data frame handling\nI0131 01:54:58.209259 2902 log.go:181] (0xc000889360) (5) Data frame sent\nI0131 01:54:58.209390 2902 log.go:181] (0xc0000d1130) Data frame received for 5\nI0131 01:54:58.209426 2902 log.go:181] (0xc000889360) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nConnection to 
affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI0131 01:54:58.209570 2902 log.go:181] (0xc000889360) (5) Data frame sent\nI0131 01:54:58.209618 2902 log.go:181] (0xc0000d1130) Data frame received for 3\nI0131 01:54:58.209645 2902 log.go:181] (0xc000afa3c0) (3) Data frame handling\nI0131 01:54:58.209684 2902 log.go:181] (0xc0000d1130) Data frame received for 5\nI0131 01:54:58.209708 2902 log.go:181] (0xc000889360) (5) Data frame handling\nI0131 01:54:58.211308 2902 log.go:181] (0xc0000d1130) Data frame received for 1\nI0131 01:54:58.211328 2902 log.go:181] (0xc000afa320) (1) Data frame handling\nI0131 01:54:58.211340 2902 log.go:181] (0xc000afa320) (1) Data frame sent\nI0131 01:54:58.211353 2902 log.go:181] (0xc0000d1130) (0xc000afa320) Stream removed, broadcasting: 1\nI0131 01:54:58.211454 2902 log.go:181] (0xc0000d1130) Go away received\nI0131 01:54:58.211720 2902 log.go:181] (0xc0000d1130) (0xc000afa320) Stream removed, broadcasting: 1\nI0131 01:54:58.211737 2902 log.go:181] (0xc0000d1130) (0xc000afa3c0) Stream removed, broadcasting: 3\nI0131 01:54:58.211748 2902 log.go:181] (0xc0000d1130) (0xc000889360) Stream removed, broadcasting: 5\n" Jan 31 01:54:58.216: INFO: stdout: "" Jan 31 01:54:58.217: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-8653 exec execpod-affinity4jn4t -- /bin/sh -x -c nc -zv -t -w 2 10.96.221.159 80' Jan 31 01:54:58.422: INFO: stderr: "I0131 01:54:58.357767 2920 log.go:181] (0xc000e06000) (0xc000618a00) Create stream\nI0131 01:54:58.357847 2920 log.go:181] (0xc000e06000) (0xc000618a00) Stream added, broadcasting: 1\nI0131 01:54:58.359828 2920 log.go:181] (0xc000e06000) Reply frame received for 1\nI0131 01:54:58.359873 2920 log.go:181] (0xc000e06000) (0xc0005a5a40) Create stream\nI0131 01:54:58.359886 2920 log.go:181] (0xc000e06000) (0xc0005a5a40) Stream added, broadcasting: 3\nI0131 01:54:58.361006 2920 log.go:181] (0xc000e06000) Reply frame received for 3\nI0131 01:54:58.361046 2920 log.go:181] (0xc000e06000) (0xc000619400) Create stream\nI0131 01:54:58.361054 2920 log.go:181] (0xc000e06000) (0xc000619400) Stream added, broadcasting: 5\nI0131 01:54:58.362820 2920 log.go:181] (0xc000e06000) Reply frame received for 5\nI0131 01:54:58.415249 2920 log.go:181] (0xc000e06000) Data frame received for 5\nI0131 01:54:58.415311 2920 log.go:181] (0xc000619400) (5) Data frame handling\nI0131 01:54:58.415338 2920 log.go:181] (0xc000619400) (5) Data frame sent\nI0131 01:54:58.415360 2920 log.go:181] (0xc000e06000) Data frame received for 5\nI0131 01:54:58.415378 2920 log.go:181] (0xc000619400) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.221.159 80\nConnection to 10.96.221.159 80 port [tcp/http] succeeded!\nI0131 01:54:58.415404 2920 log.go:181] (0xc000e06000) Data frame received for 3\nI0131 01:54:58.415427 2920 log.go:181] (0xc0005a5a40) (3) Data frame handling\nI0131 01:54:58.416813 2920 log.go:181] (0xc000e06000) Data frame received for 1\nI0131 01:54:58.416904 2920 log.go:181] (0xc000618a00) (1) Data frame handling\nI0131 01:54:58.416948 2920 log.go:181] (0xc000618a00) (1) Data frame sent\nI0131 01:54:58.417142 2920 log.go:181] (0xc000e06000) (0xc000618a00) Stream removed, broadcasting: 1\nI0131 01:54:58.417165 2920 log.go:181] (0xc000e06000) Go away received\nI0131 01:54:58.417483 2920 log.go:181] (0xc000e06000) (0xc000618a00) Stream removed, broadcasting: 1\nI0131 01:54:58.417502 2920 log.go:181] (0xc000e06000) (0xc0005a5a40) Stream removed, broadcasting: 3\nI0131 
01:54:58.417513 2920 log.go:181] (0xc000e06000) (0xc000619400) Stream removed, broadcasting: 5\n" Jan 31 01:54:58.422: INFO: stdout: "" Jan 31 01:54:58.431: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-8653 exec execpod-affinity4jn4t -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.221.159:80/ ; done' Jan 31 01:54:58.728: INFO: stderr: "I0131 01:54:58.550562 2939 log.go:181] (0xc000658000) (0xc000174780) Create stream\nI0131 01:54:58.550622 2939 log.go:181] (0xc000658000) (0xc000174780) Stream added, broadcasting: 1\nI0131 01:54:58.552193 2939 log.go:181] (0xc000658000) Reply frame received for 1\nI0131 01:54:58.552227 2939 log.go:181] (0xc000658000) (0xc0009f4000) Create stream\nI0131 01:54:58.552241 2939 log.go:181] (0xc000658000) (0xc0009f4000) Stream added, broadcasting: 3\nI0131 01:54:58.553272 2939 log.go:181] (0xc000658000) Reply frame received for 3\nI0131 01:54:58.553331 2939 log.go:181] (0xc000658000) (0xc00019d400) Create stream\nI0131 01:54:58.553351 2939 log.go:181] (0xc000658000) (0xc00019d400) Stream added, broadcasting: 5\nI0131 01:54:58.554087 2939 log.go:181] (0xc000658000) Reply frame received for 5\nI0131 01:54:58.621263 2939 log.go:181] (0xc000658000) Data frame received for 5\nI0131 01:54:58.621314 2939 log.go:181] (0xc00019d400) (5) Data frame handling\nI0131 01:54:58.621332 2939 log.go:181] (0xc00019d400) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.221.159:80/\nI0131 01:54:58.621349 2939 log.go:181] (0xc000658000) Data frame received for 3\nI0131 01:54:58.621397 2939 log.go:181] (0xc0009f4000) (3) Data frame handling\nI0131 01:54:58.621418 2939 log.go:181] (0xc0009f4000) (3) Data frame sent\nI0131 01:54:58.626297 2939 log.go:181] (0xc000658000) Data frame received for 3\nI0131 01:54:58.626313 2939 log.go:181] (0xc0009f4000) (3) Data frame handling\nI0131 01:54:58.626322 2939 log.go:181] (0xc0009f4000) (3) Data frame sent\nI0131 01:54:58.627179 2939 log.go:181] (0xc000658000) Data frame received for 5\nI0131 01:54:58.627192 2939 log.go:181] (0xc00019d400) (5) Data frame handling\nI0131 01:54:58.627201 2939 log.go:181] (0xc00019d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.221.159:80/\nI0131 01:54:58.627222 2939 log.go:181] (0xc000658000) Data frame received for 3\nI0131 01:54:58.627248 2939 log.go:181] (0xc0009f4000) (3) Data frame handling\nI0131 01:54:58.627264 2939 log.go:181] (0xc0009f4000) (3) Data frame sent\nI0131 01:54:58.634959 2939 log.go:181] (0xc000658000) Data frame received for 3\nI0131 01:54:58.634979 2939 log.go:181] (0xc0009f4000) (3) Data frame handling\nI0131 01:54:58.634990 2939 log.go:181] (0xc0009f4000) (3) Data frame sent\nI0131 01:54:58.635715 2939 log.go:181] (0xc000658000) Data frame received for 5\nI0131 01:54:58.635749 2939 log.go:181] (0xc00019d400) (5) Data frame handling\nI0131 01:54:58.635775 2939 log.go:181] (0xc00019d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.221.159:80/\nI0131 01:54:58.635803 2939 log.go:181] (0xc000658000) Data frame received for 3\nI0131 01:54:58.635848 2939 log.go:181] (0xc0009f4000) (3) Data frame handling\nI0131 01:54:58.635887 2939 log.go:181] (0xc0009f4000) (3) Data frame sent\nI0131 01:54:58.641032 2939 log.go:181] (0xc000658000) Data frame received for 3\nI0131 01:54:58.641061 2939 log.go:181] (0xc0009f4000) (3) Data frame handling\nI0131 01:54:58.641081 2939 log.go:181] 
(0xc0009f4000) (3) Data frame sent\nI0131 01:54:58.641538 2939 log.go:181] (0xc000658000) Data frame received for 3\nI0131 01:54:58.641578 2939 log.go:181] (0xc0009f4000) (3) Data frame handling\nI0131 01:54:58.641597 2939 log.go:181] (0xc0009f4000) (3) Data frame sent\nI0131 01:54:58.641624 2939 log.go:181] (0xc000658000) Data frame received for 5\nI0131 01:54:58.641640 2939 log.go:181] (0xc00019d400) (5) Data frame handling\nI0131 01:54:58.641682 2939 log.go:181] (0xc00019d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.221.159:80/\nI0131 01:54:58.646119 2939 log.go:181] (0xc000658000) Data frame received for 3\nI0131 01:54:58.646152 2939 log.go:181] (0xc0009f4000) (3) Data frame handling\nI0131 01:54:58.646181 2939 log.go:181] (0xc0009f4000) (3) Data frame sent\nI0131 01:54:58.646607 2939 log.go:181] (0xc000658000) Data frame received for 3\nI0131 01:54:58.646624 2939 log.go:181] (0xc0009f4000) (3) Data frame handling\nI0131 01:54:58.646632 2939 log.go:181] (0xc0009f4000) (3) Data frame sent\nI0131 01:54:58.646641 2939 log.go:181] (0xc000658000) Data frame received for 5\nI0131 01:54:58.646647 2939 log.go:181] (0xc00019d400) (5) Data frame handling\nI0131 01:54:58.646654 2939 log.go:181] (0xc00019d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.221.159:80/\nI0131 01:54:58.653236 2939 log.go:181] (0xc000658000) Data frame received for 3\nI0131 01:54:58.653276 2939 log.go:181] (0xc0009f4000) (3) Data frame handling\nI0131 01:54:58.653306 2939 log.go:181] (0xc0009f4000) (3) Data frame sent\nI0131 01:54:58.654239 2939 log.go:181] (0xc000658000) Data frame received for 3\nI0131 01:54:58.654300 2939 log.go:181] (0xc0009f4000) (3) Data frame handling\nI0131 01:54:58.654332 2939 log.go:181] (0xc0009f4000) (3) Data frame sent\nI0131 01:54:58.654374 2939 log.go:181] (0xc000658000) Data frame received for 5\nI0131 01:54:58.654421 2939 log.go:181] (0xc00019d400) (5) Data frame handling\nI0131 01:54:58.654444 2939 log.go:181] (0xc00019d400) (5) Data frame sent\nI0131 01:54:58.654469 2939 log.go:181] (0xc000658000) Data frame received for 5\nI0131 01:54:58.654479 2939 log.go:181] (0xc00019d400) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.221.159:80/\nI0131 01:54:58.654503 2939 log.go:181] (0xc00019d400) (5) Data frame sent\nI0131 01:54:58.659099 2939 log.go:181] (0xc000658000) Data frame received for 3\nI0131 01:54:58.659119 2939 log.go:181] (0xc0009f4000) (3) Data frame handling\nI0131 01:54:58.659143 2939 log.go:181] (0xc0009f4000) (3) Data frame sent\nI0131 01:54:58.659706 2939 log.go:181] (0xc000658000) Data frame received for 5\nI0131 01:54:58.659737 2939 log.go:181] (0xc00019d400) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.221.159:80/\nI0131 01:54:58.659765 2939 log.go:181] (0xc000658000) Data frame received for 3\nI0131 01:54:58.659791 2939 log.go:181] (0xc0009f4000) (3) Data frame handling\nI0131 01:54:58.659812 2939 log.go:181] (0xc0009f4000) (3) Data frame sent\nI0131 01:54:58.659828 2939 log.go:181] (0xc00019d400) (5) Data frame sent\nI0131 01:54:58.666942 2939 log.go:181] (0xc000658000) Data frame received for 3\nI0131 01:54:58.666966 2939 log.go:181] (0xc0009f4000) (3) Data frame handling\nI0131 01:54:58.666990 2939 log.go:181] (0xc0009f4000) (3) Data frame sent\nI0131 01:54:58.667340 2939 log.go:181] (0xc000658000) Data frame received for 3\nI0131 01:54:58.667355 2939 log.go:181] (0xc0009f4000) (3) Data frame handling\nI0131 01:54:58.667365 2939 
log.go:181] (0xc0009f4000) (3) Data frame sent\nI0131 01:54:58.667382 2939 log.go:181] (0xc000658000) Data frame received for 5\nI0131 01:54:58.667406 2939 log.go:181] (0xc00019d400) (5) Data frame handling\nI0131 01:54:58.667432 2939 log.go:181] (0xc00019d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.221.159:80/\nI0131 01:54:58.673296 2939 log.go:181] (0xc000658000) Data frame received for 3\nI0131 01:54:58.673313 2939 log.go:181] (0xc0009f4000) (3) Data frame handling\nI0131 01:54:58.673327 2939 log.go:181] (0xc0009f4000) (3) Data frame sent\nI0131 01:54:58.673956 2939 log.go:181] (0xc000658000) Data frame received for 3\nI0131 01:54:58.673979 2939 log.go:181] (0xc0009f4000) (3) Data frame handling\nI0131 01:54:58.674005 2939 log.go:181] (0xc000658000) Data frame received for 5\nI0131 01:54:58.674035 2939 log.go:181] (0xc00019d400) (5) Data frame handling\nI0131 01:54:58.674045 2939 log.go:181] (0xc00019d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.221.159:80/\nI0131 01:54:58.674061 2939 log.go:181] (0xc0009f4000) (3) Data frame sent\nI0131 01:54:58.677470 2939 log.go:181] (0xc000658000) Data frame received for 3\nI0131 01:54:58.677489 2939 log.go:181] (0xc0009f4000) (3) Data frame handling\nI0131 01:54:58.677511 2939 log.go:181] (0xc0009f4000) (3) Data frame sent\nI0131 01:54:58.677918 2939 log.go:181] (0xc000658000) Data frame received for 5\nI0131 01:54:58.677939 2939 log.go:181] (0xc00019d400) (5) Data frame handling\nI0131 01:54:58.677948 2939 log.go:181] (0xc00019d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.221.159:80/\nI0131 01:54:58.677960 2939 log.go:181] (0xc000658000) Data frame received for 3\nI0131 01:54:58.677971 2939 log.go:181] (0xc0009f4000) (3) Data frame handling\nI0131 01:54:58.677978 2939 log.go:181] (0xc0009f4000) (3) Data frame sent\nI0131 01:54:58.681647 2939 log.go:181] (0xc000658000) Data frame received for 3\nI0131 01:54:58.681666 2939 log.go:181] (0xc0009f4000) (3) Data frame handling\nI0131 01:54:58.681697 2939 log.go:181] (0xc0009f4000) (3) Data frame sent\nI0131 01:54:58.682231 2939 log.go:181] (0xc000658000) Data frame received for 3\nI0131 01:54:58.682257 2939 log.go:181] (0xc0009f4000) (3) Data frame handling\nI0131 01:54:58.682272 2939 log.go:181] (0xc0009f4000) (3) Data frame sent\nI0131 01:54:58.682299 2939 log.go:181] (0xc000658000) Data frame received for 5\nI0131 01:54:58.682318 2939 log.go:181] (0xc00019d400) (5) Data frame handling\nI0131 01:54:58.682336 2939 log.go:181] (0xc00019d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.221.159:80/\nI0131 01:54:58.687936 2939 log.go:181] (0xc000658000) Data frame received for 3\nI0131 01:54:58.687953 2939 log.go:181] (0xc0009f4000) (3) Data frame handling\nI0131 01:54:58.687964 2939 log.go:181] (0xc0009f4000) (3) Data frame sent\nI0131 01:54:58.688425 2939 log.go:181] (0xc000658000) Data frame received for 3\nI0131 01:54:58.688453 2939 log.go:181] (0xc0009f4000) (3) Data frame handling\nI0131 01:54:58.688468 2939 log.go:181] (0xc0009f4000) (3) Data frame sent\nI0131 01:54:58.688488 2939 log.go:181] (0xc000658000) Data frame received for 5\nI0131 01:54:58.688496 2939 log.go:181] (0xc00019d400) (5) Data frame handling\nI0131 01:54:58.688503 2939 log.go:181] (0xc00019d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.221.159:80/\nI0131 01:54:58.694527 2939 log.go:181] (0xc000658000) Data frame received for 3\nI0131 01:54:58.694554 2939 
log.go:181] (0xc0009f4000) (3) Data frame handling\nI0131 01:54:58.694591 2939 log.go:181] (0xc0009f4000) (3) Data frame sent\nI0131 01:54:58.695090 2939 log.go:181] (0xc000658000) Data frame received for 3\nI0131 01:54:58.695104 2939 log.go:181] (0xc0009f4000) (3) Data frame handling\nI0131 01:54:58.695115 2939 log.go:181] (0xc0009f4000) (3) Data frame sent\nI0131 01:54:58.695135 2939 log.go:181] (0xc000658000) Data frame received for 5\nI0131 01:54:58.695157 2939 log.go:181] (0xc00019d400) (5) Data frame handling\nI0131 01:54:58.695174 2939 log.go:181] (0xc00019d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.221.159:80/\nI0131 01:54:58.700257 2939 log.go:181] (0xc000658000) Data frame received for 3\nI0131 01:54:58.700276 2939 log.go:181] (0xc0009f4000) (3) Data frame handling\nI0131 01:54:58.700288 2939 log.go:181] (0xc0009f4000) (3) Data frame sent\nI0131 01:54:58.700802 2939 log.go:181] (0xc000658000) Data frame received for 5\nI0131 01:54:58.700820 2939 log.go:181] (0xc00019d400) (5) Data frame handling\nI0131 01:54:58.700950 2939 log.go:181] (0xc00019d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.221.159:80/\nI0131 01:54:58.701038 2939 log.go:181] (0xc000658000) Data frame received for 3\nI0131 01:54:58.701048 2939 log.go:181] (0xc0009f4000) (3) Data frame handling\nI0131 01:54:58.701055 2939 log.go:181] (0xc0009f4000) (3) Data frame sent\nI0131 01:54:58.706472 2939 log.go:181] (0xc000658000) Data frame received for 3\nI0131 01:54:58.706492 2939 log.go:181] (0xc0009f4000) (3) Data frame handling\nI0131 01:54:58.706508 2939 log.go:181] (0xc0009f4000) (3) Data frame sent\nI0131 01:54:58.706991 2939 log.go:181] (0xc000658000) Data frame received for 3\nI0131 01:54:58.707006 2939 log.go:181] (0xc0009f4000) (3) Data frame handling\nI0131 01:54:58.707018 2939 log.go:181] (0xc0009f4000) (3) Data frame sent\nI0131 01:54:58.707048 2939 log.go:181] (0xc000658000) Data frame received for 5\nI0131 01:54:58.707080 2939 log.go:181] (0xc00019d400) (5) Data frame handling\nI0131 01:54:58.707103 2939 log.go:181] (0xc00019d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.221.159:80/\nI0131 01:54:58.711370 2939 log.go:181] (0xc000658000) Data frame received for 3\nI0131 01:54:58.711395 2939 log.go:181] (0xc0009f4000) (3) Data frame handling\nI0131 01:54:58.711408 2939 log.go:181] (0xc0009f4000) (3) Data frame sent\nI0131 01:54:58.712316 2939 log.go:181] (0xc000658000) Data frame received for 5\nI0131 01:54:58.712344 2939 log.go:181] (0xc00019d400) (5) Data frame handling\nI0131 01:54:58.712356 2939 log.go:181] (0xc00019d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.221.159:80/\nI0131 01:54:58.712372 2939 log.go:181] (0xc000658000) Data frame received for 3\nI0131 01:54:58.712381 2939 log.go:181] (0xc0009f4000) (3) Data frame handling\nI0131 01:54:58.712391 2939 log.go:181] (0xc0009f4000) (3) Data frame sent\nI0131 01:54:58.717578 2939 log.go:181] (0xc000658000) Data frame received for 3\nI0131 01:54:58.717601 2939 log.go:181] (0xc0009f4000) (3) Data frame handling\nI0131 01:54:58.717619 2939 log.go:181] (0xc0009f4000) (3) Data frame sent\nI0131 01:54:58.718326 2939 log.go:181] (0xc000658000) Data frame received for 5\nI0131 01:54:58.718340 2939 log.go:181] (0xc00019d400) (5) Data frame handling\nI0131 01:54:58.718375 2939 log.go:181] (0xc000658000) Data frame received for 3\nI0131 01:54:58.718419 2939 log.go:181] (0xc0009f4000) (3) Data frame handling\nI0131 
01:54:58.720484 2939 log.go:181] (0xc000658000) Data frame received for 1\nI0131 01:54:58.720510 2939 log.go:181] (0xc000174780) (1) Data frame handling\nI0131 01:54:58.720533 2939 log.go:181] (0xc000174780) (1) Data frame sent\nI0131 01:54:58.720555 2939 log.go:181] (0xc000658000) (0xc000174780) Stream removed, broadcasting: 1\nI0131 01:54:58.720573 2939 log.go:181] (0xc000658000) Go away received\nI0131 01:54:58.721237 2939 log.go:181] (0xc000658000) (0xc000174780) Stream removed, broadcasting: 1\nI0131 01:54:58.721280 2939 log.go:181] (0xc000658000) (0xc0009f4000) Stream removed, broadcasting: 3\nI0131 01:54:58.721301 2939 log.go:181] (0xc000658000) (0xc00019d400) Stream removed, broadcasting: 5\n" Jan 31 01:54:58.729: INFO: stdout: "\naffinity-clusterip-transition-7chd8\naffinity-clusterip-transition-7chd8\naffinity-clusterip-transition-gpcdd\naffinity-clusterip-transition-8gdvd\naffinity-clusterip-transition-8gdvd\naffinity-clusterip-transition-7chd8\naffinity-clusterip-transition-8gdvd\naffinity-clusterip-transition-7chd8\naffinity-clusterip-transition-7chd8\naffinity-clusterip-transition-7chd8\naffinity-clusterip-transition-7chd8\naffinity-clusterip-transition-8gdvd\naffinity-clusterip-transition-gpcdd\naffinity-clusterip-transition-7chd8\naffinity-clusterip-transition-7chd8\naffinity-clusterip-transition-7chd8" Jan 31 01:54:58.729: INFO: Received response from host: affinity-clusterip-transition-7chd8 Jan 31 01:54:58.729: INFO: Received response from host: affinity-clusterip-transition-7chd8 Jan 31 01:54:58.729: INFO: Received response from host: affinity-clusterip-transition-gpcdd Jan 31 01:54:58.729: INFO: Received response from host: affinity-clusterip-transition-8gdvd Jan 31 01:54:58.729: INFO: Received response from host: affinity-clusterip-transition-8gdvd Jan 31 01:54:58.729: INFO: Received response from host: affinity-clusterip-transition-7chd8 Jan 31 01:54:58.729: INFO: Received response from host: affinity-clusterip-transition-8gdvd Jan 31 01:54:58.729: INFO: Received response from host: affinity-clusterip-transition-7chd8 Jan 31 01:54:58.729: INFO: Received response from host: affinity-clusterip-transition-7chd8 Jan 31 01:54:58.729: INFO: Received response from host: affinity-clusterip-transition-7chd8 Jan 31 01:54:58.729: INFO: Received response from host: affinity-clusterip-transition-7chd8 Jan 31 01:54:58.729: INFO: Received response from host: affinity-clusterip-transition-8gdvd Jan 31 01:54:58.729: INFO: Received response from host: affinity-clusterip-transition-gpcdd Jan 31 01:54:58.729: INFO: Received response from host: affinity-clusterip-transition-7chd8 Jan 31 01:54:58.729: INFO: Received response from host: affinity-clusterip-transition-7chd8 Jan 31 01:54:58.729: INFO: Received response from host: affinity-clusterip-transition-7chd8 Jan 31 01:54:58.740: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-8653 exec execpod-affinity4jn4t -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.221.159:80/ ; done' Jan 31 01:54:59.037: INFO: stderr: "I0131 01:54:58.873031 2957 log.go:181] (0xc00003a420) (0xc0005c6000) Create stream\nI0131 01:54:58.873091 2957 log.go:181] (0xc00003a420) (0xc0005c6000) Stream added, broadcasting: 1\nI0131 01:54:58.874834 2957 log.go:181] (0xc00003a420) Reply frame received for 1\nI0131 01:54:58.874903 2957 log.go:181] (0xc00003a420) (0xc0009806e0) Create stream\nI0131 01:54:58.874933 2957 log.go:181] (0xc00003a420) 
(0xc0009806e0) Stream added, broadcasting: 3\nI0131 01:54:58.876257 2957 log.go:181] (0xc00003a420) Reply frame received for 3\nI0131 01:54:58.876320 2957 log.go:181] (0xc00003a420) (0xc000795ea0) Create stream\nI0131 01:54:58.876361 2957 log.go:181] (0xc00003a420) (0xc000795ea0) Stream added, broadcasting: 5\nI0131 01:54:58.877576 2957 log.go:181] (0xc00003a420) Reply frame received for 5\nI0131 01:54:58.933374 2957 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 01:54:58.933414 2957 log.go:181] (0xc0009806e0) (3) Data frame handling\nI0131 01:54:58.933428 2957 log.go:181] (0xc0009806e0) (3) Data frame sent\nI0131 01:54:58.933457 2957 log.go:181] (0xc00003a420) Data frame received for 5\nI0131 01:54:58.933468 2957 log.go:181] (0xc000795ea0) (5) Data frame handling\nI0131 01:54:58.933479 2957 log.go:181] (0xc000795ea0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.221.159:80/\nI0131 01:54:58.938900 2957 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 01:54:58.938917 2957 log.go:181] (0xc0009806e0) (3) Data frame handling\nI0131 01:54:58.938929 2957 log.go:181] (0xc0009806e0) (3) Data frame sent\nI0131 01:54:58.939407 2957 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 01:54:58.939442 2957 log.go:181] (0xc0009806e0) (3) Data frame handling\nI0131 01:54:58.939461 2957 log.go:181] (0xc0009806e0) (3) Data frame sent\nI0131 01:54:58.939484 2957 log.go:181] (0xc00003a420) Data frame received for 5\nI0131 01:54:58.939520 2957 log.go:181] (0xc000795ea0) (5) Data frame handling\nI0131 01:54:58.939541 2957 log.go:181] (0xc000795ea0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.221.159:80/\nI0131 01:54:58.946570 2957 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 01:54:58.946585 2957 log.go:181] (0xc0009806e0) (3) Data frame handling\nI0131 01:54:58.946597 2957 log.go:181] (0xc0009806e0) (3) Data frame sent\nI0131 01:54:58.947153 2957 log.go:181] (0xc00003a420) Data frame received for 5\nI0131 01:54:58.947176 2957 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 01:54:58.947202 2957 log.go:181] (0xc0009806e0) (3) Data frame handling\nI0131 01:54:58.947219 2957 log.go:181] (0xc0009806e0) (3) Data frame sent\nI0131 01:54:58.947236 2957 log.go:181] (0xc000795ea0) (5) Data frame handling\nI0131 01:54:58.947245 2957 log.go:181] (0xc000795ea0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.221.159:80/\nI0131 01:54:58.950657 2957 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 01:54:58.950669 2957 log.go:181] (0xc0009806e0) (3) Data frame handling\nI0131 01:54:58.950674 2957 log.go:181] (0xc0009806e0) (3) Data frame sent\nI0131 01:54:58.950996 2957 log.go:181] (0xc00003a420) Data frame received for 5\nI0131 01:54:58.951011 2957 log.go:181] (0xc000795ea0) (5) Data frame handling\nI0131 01:54:58.951022 2957 log.go:181] (0xc000795ea0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.221.159:80/\nI0131 01:54:58.951033 2957 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 01:54:58.951091 2957 log.go:181] (0xc0009806e0) (3) Data frame handling\nI0131 01:54:58.951123 2957 log.go:181] (0xc0009806e0) (3) Data frame sent\nI0131 01:54:58.955179 2957 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 01:54:58.955193 2957 log.go:181] (0xc0009806e0) (3) Data frame handling\nI0131 01:54:58.955205 2957 log.go:181] (0xc0009806e0) (3) Data frame sent\nI0131 01:54:58.955961 2957 log.go:181] (0xc00003a420) Data 
frame received for 3\nI0131 01:54:58.955974 2957 log.go:181] (0xc0009806e0) (3) Data frame handling\nI0131 01:54:58.955984 2957 log.go:181] (0xc0009806e0) (3) Data frame sent\nI0131 01:54:58.956014 2957 log.go:181] (0xc00003a420) Data frame received for 5\nI0131 01:54:58.956044 2957 log.go:181] (0xc000795ea0) (5) Data frame handling\nI0131 01:54:58.956077 2957 log.go:181] (0xc000795ea0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.221.159:80/\nI0131 01:54:58.960338 2957 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 01:54:58.960354 2957 log.go:181] (0xc0009806e0) (3) Data frame handling\nI0131 01:54:58.960367 2957 log.go:181] (0xc0009806e0) (3) Data frame sent\nI0131 01:54:58.961055 2957 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 01:54:58.961104 2957 log.go:181] (0xc0009806e0) (3) Data frame handling\nI0131 01:54:58.961131 2957 log.go:181] (0xc0009806e0) (3) Data frame sent\nI0131 01:54:58.961160 2957 log.go:181] (0xc00003a420) Data frame received for 5\nI0131 01:54:58.961180 2957 log.go:181] (0xc000795ea0) (5) Data frame handling\nI0131 01:54:58.961206 2957 log.go:181] (0xc000795ea0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.221.159:80/\nI0131 01:54:58.965910 2957 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 01:54:58.965926 2957 log.go:181] (0xc0009806e0) (3) Data frame handling\nI0131 01:54:58.965939 2957 log.go:181] (0xc0009806e0) (3) Data frame sent\nI0131 01:54:58.966607 2957 log.go:181] (0xc00003a420) Data frame received for 5\nI0131 01:54:58.966639 2957 log.go:181] (0xc000795ea0) (5) Data frame handling\nI0131 01:54:58.966653 2957 log.go:181] (0xc000795ea0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.221.159:80/\nI0131 01:54:58.966670 2957 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 01:54:58.966686 2957 log.go:181] (0xc0009806e0) (3) Data frame handling\nI0131 01:54:58.966697 2957 log.go:181] (0xc0009806e0) (3) Data frame sent\nI0131 01:54:58.972217 2957 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 01:54:58.972243 2957 log.go:181] (0xc0009806e0) (3) Data frame handling\nI0131 01:54:58.972263 2957 log.go:181] (0xc0009806e0) (3) Data frame sent\nI0131 01:54:58.973394 2957 log.go:181] (0xc00003a420) Data frame received for 5\nI0131 01:54:58.973413 2957 log.go:181] (0xc000795ea0) (5) Data frame handling\nI0131 01:54:58.973422 2957 log.go:181] (0xc000795ea0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.221.159:80/\nI0131 01:54:58.973434 2957 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 01:54:58.973441 2957 log.go:181] (0xc0009806e0) (3) Data frame handling\nI0131 01:54:58.973448 2957 log.go:181] (0xc0009806e0) (3) Data frame sent\nI0131 01:54:58.978618 2957 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 01:54:58.978635 2957 log.go:181] (0xc0009806e0) (3) Data frame handling\nI0131 01:54:58.978652 2957 log.go:181] (0xc0009806e0) (3) Data frame sent\nI0131 01:54:58.979408 2957 log.go:181] (0xc00003a420) Data frame received for 5\nI0131 01:54:58.979419 2957 log.go:181] (0xc000795ea0) (5) Data frame handling\nI0131 01:54:58.979425 2957 log.go:181] (0xc000795ea0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.221.159:80/\nI0131 01:54:58.979437 2957 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 01:54:58.979450 2957 log.go:181] (0xc0009806e0) (3) Data frame handling\nI0131 01:54:58.979462 2957 log.go:181] (0xc0009806e0) (3) Data 
frame sent\nI0131 01:54:58.984897 2957 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 01:54:58.984916 2957 log.go:181] (0xc0009806e0) (3) Data frame handling\nI0131 01:54:58.984929 2957 log.go:181] (0xc0009806e0) (3) Data frame sent\nI0131 01:54:58.985632 2957 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 01:54:58.985648 2957 log.go:181] (0xc0009806e0) (3) Data frame handling\nI0131 01:54:58.985657 2957 log.go:181] (0xc0009806e0) (3) Data frame sent\nI0131 01:54:58.985665 2957 log.go:181] (0xc00003a420) Data frame received for 5\nI0131 01:54:58.985669 2957 log.go:181] (0xc000795ea0) (5) Data frame handling\nI0131 01:54:58.985674 2957 log.go:181] (0xc000795ea0) (5) Data frame sent\nI0131 01:54:58.985680 2957 log.go:181] (0xc00003a420) Data frame received for 5\nI0131 01:54:58.985691 2957 log.go:181] (0xc000795ea0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.221.159:80/\nI0131 01:54:58.985713 2957 log.go:181] (0xc000795ea0) (5) Data frame sent\nI0131 01:54:58.993215 2957 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 01:54:58.993232 2957 log.go:181] (0xc0009806e0) (3) Data frame handling\nI0131 01:54:58.993243 2957 log.go:181] (0xc0009806e0) (3) Data frame sent\nI0131 01:54:58.993703 2957 log.go:181] (0xc00003a420) Data frame received for 5\nI0131 01:54:58.993724 2957 log.go:181] (0xc000795ea0) (5) Data frame handling\nI0131 01:54:58.993733 2957 log.go:181] (0xc000795ea0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.221.159:80/\nI0131 01:54:58.993758 2957 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 01:54:58.993780 2957 log.go:181] (0xc0009806e0) (3) Data frame handling\nI0131 01:54:58.993804 2957 log.go:181] (0xc0009806e0) (3) Data frame sent\nI0131 01:54:58.998247 2957 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 01:54:58.998260 2957 log.go:181] (0xc0009806e0) (3) Data frame handling\nI0131 01:54:58.998267 2957 log.go:181] (0xc0009806e0) (3) Data frame sent\nI0131 01:54:58.998717 2957 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 01:54:58.998735 2957 log.go:181] (0xc0009806e0) (3) Data frame handling\nI0131 01:54:58.998741 2957 log.go:181] (0xc0009806e0) (3) Data frame sent\nI0131 01:54:58.998769 2957 log.go:181] (0xc00003a420) Data frame received for 5\nI0131 01:54:58.998806 2957 log.go:181] (0xc000795ea0) (5) Data frame handling\nI0131 01:54:58.998829 2957 log.go:181] (0xc000795ea0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.221.159:80/\nI0131 01:54:59.003419 2957 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 01:54:59.003430 2957 log.go:181] (0xc0009806e0) (3) Data frame handling\nI0131 01:54:59.003435 2957 log.go:181] (0xc0009806e0) (3) Data frame sent\nI0131 01:54:59.003794 2957 log.go:181] (0xc00003a420) Data frame received for 5\nI0131 01:54:59.003805 2957 log.go:181] (0xc000795ea0) (5) Data frame handling\nI0131 01:54:59.003811 2957 log.go:181] (0xc000795ea0) (5) Data frame sent\nI0131 01:54:59.003815 2957 log.go:181] (0xc00003a420) Data frame received for 5\nI0131 01:54:59.003822 2957 log.go:181] (0xc000795ea0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.221.159:80/\nI0131 01:54:59.003842 2957 log.go:181] (0xc000795ea0) (5) Data frame sent\nI0131 01:54:59.003859 2957 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 01:54:59.003877 2957 log.go:181] (0xc0009806e0) (3) Data frame handling\nI0131 01:54:59.003887 2957 log.go:181] (0xc0009806e0) (3) 
Data frame sent\nI0131 01:54:59.008608 2957 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 01:54:59.008618 2957 log.go:181] (0xc0009806e0) (3) Data frame handling\nI0131 01:54:59.008624 2957 log.go:181] (0xc0009806e0) (3) Data frame sent\nI0131 01:54:59.009290 2957 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 01:54:59.009304 2957 log.go:181] (0xc0009806e0) (3) Data frame handling\nI0131 01:54:59.009317 2957 log.go:181] (0xc0009806e0) (3) Data frame sent\nI0131 01:54:59.009329 2957 log.go:181] (0xc00003a420) Data frame received for 5\nI0131 01:54:59.009349 2957 log.go:181] (0xc000795ea0) (5) Data frame handling\nI0131 01:54:59.009365 2957 log.go:181] (0xc000795ea0) (5) Data frame sent\nI0131 01:54:59.009376 2957 log.go:181] (0xc00003a420) Data frame received for 5\nI0131 01:54:59.009384 2957 log.go:181] (0xc000795ea0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.221.159:80/\nI0131 01:54:59.009402 2957 log.go:181] (0xc000795ea0) (5) Data frame sent\nI0131 01:54:59.014619 2957 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 01:54:59.014633 2957 log.go:181] (0xc0009806e0) (3) Data frame handling\nI0131 01:54:59.014649 2957 log.go:181] (0xc0009806e0) (3) Data frame sent\nI0131 01:54:59.015158 2957 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 01:54:59.015183 2957 log.go:181] (0xc0009806e0) (3) Data frame handling\nI0131 01:54:59.015215 2957 log.go:181] (0xc0009806e0) (3) Data frame sent\nI0131 01:54:59.015235 2957 log.go:181] (0xc00003a420) Data frame received for 5\nI0131 01:54:59.015258 2957 log.go:181] (0xc000795ea0) (5) Data frame handling\nI0131 01:54:59.015288 2957 log.go:181] (0xc000795ea0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.221.159:80/\nI0131 01:54:59.023497 2957 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 01:54:59.023515 2957 log.go:181] (0xc0009806e0) (3) Data frame handling\nI0131 01:54:59.023528 2957 log.go:181] (0xc0009806e0) (3) Data frame sent\nI0131 01:54:59.024006 2957 log.go:181] (0xc00003a420) Data frame received for 5\nI0131 01:54:59.024028 2957 log.go:181] (0xc000795ea0) (5) Data frame handling\nI0131 01:54:59.024039 2957 log.go:181] (0xc000795ea0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.221.159:80/\nI0131 01:54:59.024053 2957 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 01:54:59.024063 2957 log.go:181] (0xc0009806e0) (3) Data frame handling\nI0131 01:54:59.024071 2957 log.go:181] (0xc0009806e0) (3) Data frame sent\nI0131 01:54:59.030779 2957 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 01:54:59.030803 2957 log.go:181] (0xc0009806e0) (3) Data frame handling\nI0131 01:54:59.030821 2957 log.go:181] (0xc0009806e0) (3) Data frame sent\nI0131 01:54:59.031445 2957 log.go:181] (0xc00003a420) Data frame received for 3\nI0131 01:54:59.031472 2957 log.go:181] (0xc0009806e0) (3) Data frame handling\nI0131 01:54:59.031582 2957 log.go:181] (0xc00003a420) Data frame received for 5\nI0131 01:54:59.031603 2957 log.go:181] (0xc000795ea0) (5) Data frame handling\nI0131 01:54:59.033143 2957 log.go:181] (0xc00003a420) Data frame received for 1\nI0131 01:54:59.033216 2957 log.go:181] (0xc0005c6000) (1) Data frame handling\nI0131 01:54:59.033240 2957 log.go:181] (0xc0005c6000) (1) Data frame sent\nI0131 01:54:59.033255 2957 log.go:181] (0xc00003a420) (0xc0005c6000) Stream removed, broadcasting: 1\nI0131 01:54:59.033305 2957 log.go:181] (0xc00003a420) Go away received\nI0131 01:54:59.033558 
2957 log.go:181] (0xc00003a420) (0xc0005c6000) Stream removed, broadcasting: 1\nI0131 01:54:59.033579 2957 log.go:181] (0xc00003a420) (0xc0009806e0) Stream removed, broadcasting: 3\nI0131 01:54:59.033589 2957 log.go:181] (0xc00003a420) (0xc000795ea0) Stream removed, broadcasting: 5\n" Jan 31 01:54:59.038: INFO: stdout: "\naffinity-clusterip-transition-8gdvd\naffinity-clusterip-transition-8gdvd\naffinity-clusterip-transition-8gdvd\naffinity-clusterip-transition-8gdvd\naffinity-clusterip-transition-8gdvd\naffinity-clusterip-transition-8gdvd\naffinity-clusterip-transition-8gdvd\naffinity-clusterip-transition-8gdvd\naffinity-clusterip-transition-8gdvd\naffinity-clusterip-transition-8gdvd\naffinity-clusterip-transition-8gdvd\naffinity-clusterip-transition-8gdvd\naffinity-clusterip-transition-8gdvd\naffinity-clusterip-transition-8gdvd\naffinity-clusterip-transition-8gdvd\naffinity-clusterip-transition-8gdvd" Jan 31 01:54:59.038: INFO: Received response from host: affinity-clusterip-transition-8gdvd Jan 31 01:54:59.038: INFO: Received response from host: affinity-clusterip-transition-8gdvd Jan 31 01:54:59.038: INFO: Received response from host: affinity-clusterip-transition-8gdvd Jan 31 01:54:59.038: INFO: Received response from host: affinity-clusterip-transition-8gdvd Jan 31 01:54:59.038: INFO: Received response from host: affinity-clusterip-transition-8gdvd Jan 31 01:54:59.038: INFO: Received response from host: affinity-clusterip-transition-8gdvd Jan 31 01:54:59.038: INFO: Received response from host: affinity-clusterip-transition-8gdvd Jan 31 01:54:59.038: INFO: Received response from host: affinity-clusterip-transition-8gdvd Jan 31 01:54:59.038: INFO: Received response from host: affinity-clusterip-transition-8gdvd Jan 31 01:54:59.038: INFO: Received response from host: affinity-clusterip-transition-8gdvd Jan 31 01:54:59.038: INFO: Received response from host: affinity-clusterip-transition-8gdvd Jan 31 01:54:59.038: INFO: Received response from host: affinity-clusterip-transition-8gdvd Jan 31 01:54:59.038: INFO: Received response from host: affinity-clusterip-transition-8gdvd Jan 31 01:54:59.038: INFO: Received response from host: affinity-clusterip-transition-8gdvd Jan 31 01:54:59.038: INFO: Received response from host: affinity-clusterip-transition-8gdvd Jan 31 01:54:59.038: INFO: Received response from host: affinity-clusterip-transition-8gdvd Jan 31 01:54:59.038: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-8653, will wait for the garbage collector to delete the pods Jan 31 01:54:59.159: INFO: Deleting ReplicationController affinity-clusterip-transition took: 6.094329ms Jan 31 01:54:59.759: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 600.212217ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:55:31.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8653" for this suite. 
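The affinity behaviour exercised by this test hinges on a single Service field, Service.spec.sessionAffinity, which the test flips between ClientIP and None while curling the ClusterIP. A minimal sketch of a Service with affinity enabled (pod label, target port, and timeout are assumptions, not values taken from the test):

apiVersion: v1
kind: Service
metadata:
  name: affinity-clusterip-transition
spec:
  type: ClusterIP
  selector:
    app: affinity-backend        # assumed pod label
  ports:
  - port: 80
    targetPort: 9376             # assumed backend port
  sessionAffinity: ClientIP      # set back to None to release the affinity
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800      # assumed; the API default affinity window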
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:44.940 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":311,"completed":268,"skipped":4813,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:55:31.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test downward API volume plugin Jan 31 01:55:31.422: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4b04907f-9ca0-487c-9f29-e86598964536" in namespace "projected-5762" to be "Succeeded or Failed" Jan 31 01:55:31.426: INFO: Pod "downwardapi-volume-4b04907f-9ca0-487c-9f29-e86598964536": Phase="Pending", Reason="", readiness=false. Elapsed: 3.655382ms Jan 31 01:55:33.444: INFO: Pod "downwardapi-volume-4b04907f-9ca0-487c-9f29-e86598964536": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022171007s Jan 31 01:55:35.449: INFO: Pod "downwardapi-volume-4b04907f-9ca0-487c-9f29-e86598964536": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02658267s STEP: Saw pod success Jan 31 01:55:35.449: INFO: Pod "downwardapi-volume-4b04907f-9ca0-487c-9f29-e86598964536" satisfied condition "Succeeded or Failed" Jan 31 01:55:35.451: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-4b04907f-9ca0-487c-9f29-e86598964536 container client-container: STEP: delete the pod Jan 31 01:55:35.515: INFO: Waiting for pod downwardapi-volume-4b04907f-9ca0-487c-9f29-e86598964536 to disappear Jan 31 01:55:35.522: INFO: Pod downwardapi-volume-4b04907f-9ca0-487c-9f29-e86598964536 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:55:35.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5762" for this suite. 
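The downward API volume in this test projects limits.memory into a file; because the container sets no memory limit, the kubelet substitutes the node's allocatable memory. A minimal sketch of that wiring (pod name, image, and paths are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo    # hypothetical name
spec:
  containers:
  - name: client-container
    image: busybox                 # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # no resources.limits.memory here, so the projected value
    # falls back to node allocatable memory
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory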
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":311,"completed":269,"skipped":4836,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:55:35.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating all guestbook components Jan 31 01:55:35.829: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-replica labels: app: agnhost role: replica tier: backend spec: ports: - port: 6379 selector: app: agnhost role: replica tier: backend Jan 31 01:55:35.829: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-233 create -f -' Jan 31 01:55:36.273: INFO: stderr: "" Jan 31 01:55:36.273: INFO: stdout: "service/agnhost-replica created\n" Jan 31 01:55:36.273: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-primary labels: app: agnhost role: primary tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: primary tier: backend Jan 31 01:55:36.273: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-233 create -f -' Jan 31 01:55:36.597: INFO: stderr: "" Jan 31 01:55:36.597: INFO: stdout: "service/agnhost-primary created\n" Jan 31 01:55:36.597: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Jan 31 01:55:36.597: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-233 create -f -' Jan 31 01:55:36.925: INFO: stderr: "" Jan 31 01:55:36.926: INFO: stdout: "service/frontend created\n" Jan 31 01:55:36.926: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: k8s.gcr.io/e2e-test-images/agnhost:2.21 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Jan 31 01:55:36.926: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-233 create -f -' Jan 31 01:55:37.258: INFO: stderr: "" Jan 31 01:55:37.258: INFO: stdout: "deployment.apps/frontend created\n" Jan 31 01:55:37.258: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-primary spec: replicas: 1 selector: matchLabels: app: agnhost role: primary tier: backend template: metadata: labels: app: agnhost role: primary tier: backend spec: containers: - name: primary image: k8s.gcr.io/e2e-test-images/agnhost:2.21 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jan 31 01:55:37.258: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-233 create -f -' Jan 31 01:55:37.639: INFO: stderr: "" Jan 31 01:55:37.639: INFO: stdout: "deployment.apps/agnhost-primary created\n" Jan 31 01:55:37.639: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-replica spec: replicas: 2 selector: matchLabels: app: agnhost role: replica tier: backend template: metadata: labels: app: agnhost role: replica tier: backend spec: containers: - name: replica image: k8s.gcr.io/e2e-test-images/agnhost:2.21 args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jan 31 01:55:37.639: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-233 create -f -' Jan 31 01:55:37.980: INFO: stderr: "" Jan 31 01:55:37.980: INFO: stdout: "deployment.apps/agnhost-replica created\n" STEP: validating guestbook app Jan 31 01:55:37.980: INFO: Waiting for all frontend pods to be Running. Jan 31 01:55:48.031: INFO: Waiting for frontend to serve content. Jan 31 01:55:48.040: INFO: Trying to add a new entry to the guestbook. Jan 31 01:55:48.050: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Jan 31 01:55:48.056: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-233 delete --grace-period=0 --force -f -' Jan 31 01:55:48.211: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 31 01:55:48.211: INFO: stdout: "service \"agnhost-replica\" force deleted\n" STEP: using delete to clean up resources Jan 31 01:55:48.212: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-233 delete --grace-period=0 --force -f -' Jan 31 01:55:48.356: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 31 01:55:48.356: INFO: stdout: "service \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Jan 31 01:55:48.357: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-233 delete --grace-period=0 --force -f -' Jan 31 01:55:48.535: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 31 01:55:48.535: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Jan 31 01:55:48.535: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-233 delete --grace-period=0 --force -f -' Jan 31 01:55:48.648: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 31 01:55:48.648: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Jan 31 01:55:48.649: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-233 delete --grace-period=0 --force -f -' Jan 31 01:55:49.060: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 31 01:55:49.060: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Jan 31 01:55:49.061: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-233 delete --grace-period=0 --force -f -' Jan 31 01:55:49.547: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 31 01:55:49.547: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:55:49.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-233" for this suite. 
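The frontend Service manifest above ships with its type commented out and therefore defaults to ClusterIP. On a cluster that supports external load balancers, the uncommented variant the manifest's comment describes would be:

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  type: LoadBalancer   # provisions an external load-balanced IP for the frontend
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend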
• [SLOW TEST:14.338 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:342 should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":311,"completed":270,"skipped":4846,"failed":0} SSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:55:49.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jan 31 01:55:58.693: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 31 01:55:58.718: INFO: Pod pod-with-prestop-exec-hook still exists Jan 31 01:56:00.718: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 31 01:56:00.801: INFO: Pod pod-with-prestop-exec-hook still exists Jan 31 01:56:02.718: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 31 01:56:02.724: INFO: Pod pod-with-prestop-exec-hook still exists Jan 31 01:56:04.718: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 31 01:56:04.724: INFO: Pod pod-with-prestop-exec-hook still exists Jan 31 01:56:06.718: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 31 01:56:06.723: INFO: Pod pod-with-prestop-exec-hook still exists Jan 31 01:56:08.718: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 31 01:56:08.741: INFO: Pod pod-with-prestop-exec-hook still exists Jan 31 01:56:10.718: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 31 01:56:10.753: INFO: Pod pod-with-prestop-exec-hook still exists Jan 31 01:56:12.718: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 31 01:56:12.729: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:56:12.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4791" for this suite. 
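The pod deleted above carries a preStop exec hook: the kubelet runs the hook's command before the container receives SIGTERM, and the test then asks the handler pod (the HTTPGet hook container created in BeforeEach) whether the hook fired. A minimal sketch of such a hook (container name, image, and handler endpoint are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: main                     # assumed container name
    image: busybox                 # assumed image
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          # assumed handler address for illustration
          command: ["sh", "-c", "wget -qO- http://hook-handler:8080/echo?msg=prestop"]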
• [SLOW TEST:22.885 seconds] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:43 should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":311,"completed":271,"skipped":4850,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:56:12.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:56:29.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1857" for this suite. • [SLOW TEST:17.110 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":311,"completed":272,"skipped":4860,"failed":0} SSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:56:29.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating secret with name s-test-opt-del-ad3b7937-0685-4c4b-a170-484ca7fa8192 STEP: Creating secret with name s-test-opt-upd-a2a643b5-fe66-432d-8ab6-97782c07bc00 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-ad3b7937-0685-4c4b-a170-484ca7fa8192 STEP: Updating secret s-test-opt-upd-a2a643b5-fe66-432d-8ab6-97782c07bc00 STEP: Creating secret with name s-test-opt-create-907ba3a7-fbc0-40de-ba84-fed3799c9676 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:56:38.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1252" for this suite. • [SLOW TEST:8.282 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":311,"completed":273,"skipped":4863,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:56:38.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:56:42.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8529" 
for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":274,"skipped":4895,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:56:42.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating Agnhost RC Jan 31 01:56:42.354: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-4698 create -f -' Jan 31 01:56:42.653: INFO: stderr: "" Jan 31 01:56:42.653: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Jan 31 01:56:43.747: INFO: Selector matched 1 pods for map[app:agnhost] Jan 31 01:56:43.747: INFO: Found 0 / 1 Jan 31 01:56:44.795: INFO: Selector matched 1 pods for map[app:agnhost] Jan 31 01:56:44.795: INFO: Found 0 / 1 Jan 31 01:56:45.659: INFO: Selector matched 1 pods for map[app:agnhost] Jan 31 01:56:45.659: INFO: Found 0 / 1 Jan 31 01:56:46.684: INFO: Selector matched 1 pods for map[app:agnhost] Jan 31 01:56:46.685: INFO: Found 0 / 1 Jan 31 01:56:47.657: INFO: Selector matched 1 pods for map[app:agnhost] Jan 31 01:56:47.657: INFO: Found 1 / 1 Jan 31 01:56:47.657: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jan 31 01:56:47.659: INFO: Selector matched 1 pods for map[app:agnhost] Jan 31 01:56:47.659: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jan 31 01:56:47.659: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-4698 patch pod agnhost-primary-tl2bf -p {"metadata":{"annotations":{"x":"y"}}}' Jan 31 01:56:47.766: INFO: stderr: "" Jan 31 01:56:47.766: INFO: stdout: "pod/agnhost-primary-tl2bf patched\n" STEP: checking annotations Jan 31 01:56:47.775: INFO: Selector matched 1 pods for map[app:agnhost] Jan 31 01:56:47.775: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:56:47.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4698" for this suite. 
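The patch command above applies a strategic-merge patch that adds a single annotation. The same patch body from the log, laid out as YAML:

# applied with: kubectl patch pod agnhost-primary-tl2bf -p '{"metadata":{"annotations":{"x":"y"}}}'
metadata:
  annotations:
    x: "y"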
• [SLOW TEST:5.520 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1466 should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":311,"completed":275,"skipped":4961,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:56:47.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test downward API volume plugin Jan 31 01:56:47.858: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6e453c44-5d40-4f95-9cf7-fb4d9d41cdb6" in namespace "projected-2248" to be "Succeeded or Failed" Jan 31 01:56:47.894: INFO: Pod "downwardapi-volume-6e453c44-5d40-4f95-9cf7-fb4d9d41cdb6": Phase="Pending", Reason="", readiness=false. Elapsed: 36.438426ms Jan 31 01:56:49.909: INFO: Pod "downwardapi-volume-6e453c44-5d40-4f95-9cf7-fb4d9d41cdb6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050872892s Jan 31 01:56:51.913: INFO: Pod "downwardapi-volume-6e453c44-5d40-4f95-9cf7-fb4d9d41cdb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055052433s STEP: Saw pod success Jan 31 01:56:51.913: INFO: Pod "downwardapi-volume-6e453c44-5d40-4f95-9cf7-fb4d9d41cdb6" satisfied condition "Succeeded or Failed" Jan 31 01:56:51.916: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-6e453c44-5d40-4f95-9cf7-fb4d9d41cdb6 container client-container: STEP: delete the pod Jan 31 01:56:52.055: INFO: Waiting for pod downwardapi-volume-6e453c44-5d40-4f95-9cf7-fb4d9d41cdb6 to disappear Jan 31 01:56:52.080: INFO: Pod downwardapi-volume-6e453c44-5d40-4f95-9cf7-fb4d9d41cdb6 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:56:52.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2248" for this suite. 
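This test is the counterpart of the earlier default-limit case: the projected downward API file is wired the same way, but the container declares an explicit memory limit, so limits.memory resolves to the container's own value rather than node allocatable. The only part of the earlier sketch that changes (value assumed):

resources:
  limits:
    memory: 64Mi   # assumed; with this set, the projected file reports 67108864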
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":311,"completed":276,"skipped":4963,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:56:52.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Jan 31 01:56:52.214: INFO: Creating ReplicaSet my-hostname-basic-d8b0c218-4334-4f93-96d2-25cd32155044 Jan 31 01:56:52.255: INFO: Pod name my-hostname-basic-d8b0c218-4334-4f93-96d2-25cd32155044: Found 0 pods out of 1 Jan 31 01:56:57.261: INFO: Pod name my-hostname-basic-d8b0c218-4334-4f93-96d2-25cd32155044: Found 1 pods out of 1 Jan 31 01:56:57.261: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-d8b0c218-4334-4f93-96d2-25cd32155044" is running Jan 31 01:56:57.266: INFO: Pod "my-hostname-basic-d8b0c218-4334-4f93-96d2-25cd32155044-chlgk" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-31 01:56:52 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-31 01:56:55 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-31 01:56:55 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-31 01:56:52 +0000 UTC Reason: Message:}]) Jan 31 01:56:57.267: INFO: Trying to dial the pod Jan 31 01:57:02.299: INFO: Controller my-hostname-basic-d8b0c218-4334-4f93-96d2-25cd32155044: Got expected result from replica 1 [my-hostname-basic-d8b0c218-4334-4f93-96d2-25cd32155044-chlgk]: "my-hostname-basic-d8b0c218-4334-4f93-96d2-25cd32155044-chlgk", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:57:02.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-4072" for this suite. 
• [SLOW TEST:10.220 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":311,"completed":277,"skipped":4981,"failed":0} SSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:57:02.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating the pod Jan 31 01:57:02.388: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:57:12.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7689" for this suite. 
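With restartPolicy Never, a failing init container is terminal: the pod is marked Failed and the containers in spec.containers never start, which is what the test above asserts. A minimal sketch of such a spec (names, image, and commands are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo             # hypothetical name
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: busybox                 # assumed image
    command: ["sh", "-c", "exit 1"]   # fails once; Never means no retry
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo should never run"]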
• [SLOW TEST:10.266 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":311,"completed":278,"skipped":4988,"failed":0} SSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:57:12.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should find a service from listing all namespaces [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: fetching services [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:57:12.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5529" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":311,"completed":279,"skipped":4991,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:57:12.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating the pod Jan 31 01:57:12.979: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:57:25.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5636" for this suite. • [SLOW TEST:13.099 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":311,"completed":280,"skipped":5016,"failed":0} [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:57:25.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:57:26.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-5340" for this suite. 
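The lifecycle run above is plain create/get/patch/delete against the PodTemplate resource, which wraps a pod spec under a top-level template field. A minimal sketch of one (all names and the image are assumptions):

apiVersion: v1
kind: PodTemplate
metadata:
  name: podtemplate-demo           # hypothetical name
template:
  metadata:
    labels:
      app: podtemplate-demo
  spec:
    containers:
    - name: main
      image: busybox               # assumed image
      command: ["sh", "-c", "sleep 3600"]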
•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":311,"completed":281,"skipped":5016,"failed":0} SSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:57:26.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-667 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-667;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-667 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-667;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-667.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-667.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-667.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-667.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-667.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-667.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-667.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-667.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-667.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-667.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-667.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-667.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-667.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 237.77.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.77.237_udp@PTR;check="$$(dig +tcp +noall +answer +search 237.77.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.77.237_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-667 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-667;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-667 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-667;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-667.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-667.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-667.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-667.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-667.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-667.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-667.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-667.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-667.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-667.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-667.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-667.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-667.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 237.77.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.77.237_udp@PTR;check="$$(dig +tcp +noall +answer +search 237.77.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.77.237_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 31 01:57:34.221: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:34.224: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:34.228: INFO: Unable to read wheezy_udp@dns-test-service.dns-667 from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:34.231: INFO: Unable to read wheezy_tcp@dns-test-service.dns-667 from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:34.234: INFO: Unable to read wheezy_udp@dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:34.238: INFO: Unable to read wheezy_tcp@dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:34.241: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:34.245: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:34.270: INFO: Unable to read jessie_udp@dns-test-service from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:34.272: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:34.275: INFO: Unable to read jessie_udp@dns-test-service.dns-667 from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:34.278: INFO: Unable to read jessie_tcp@dns-test-service.dns-667 from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:34.280: INFO: Unable to read jessie_udp@dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:34.283: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:34.285: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:34.288: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:34.306: INFO: Lookups using dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-667 wheezy_tcp@dns-test-service.dns-667 wheezy_udp@dns-test-service.dns-667.svc wheezy_tcp@dns-test-service.dns-667.svc wheezy_udp@_http._tcp.dns-test-service.dns-667.svc wheezy_tcp@_http._tcp.dns-test-service.dns-667.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-667 jessie_tcp@dns-test-service.dns-667 jessie_udp@dns-test-service.dns-667.svc jessie_tcp@dns-test-service.dns-667.svc jessie_udp@_http._tcp.dns-test-service.dns-667.svc jessie_tcp@_http._tcp.dns-test-service.dns-667.svc] Jan 31 01:57:39.311: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:39.315: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:39.319: INFO: Unable to read wheezy_udp@dns-test-service.dns-667 from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:39.322: INFO: Unable to read wheezy_tcp@dns-test-service.dns-667 from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:39.326: INFO: Unable to read wheezy_udp@dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:39.329: INFO: Unable to read wheezy_tcp@dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:39.331: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:39.335: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:39.356: INFO: Unable to read jessie_udp@dns-test-service from pod 
dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:39.360: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:39.363: INFO: Unable to read jessie_udp@dns-test-service.dns-667 from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:39.367: INFO: Unable to read jessie_tcp@dns-test-service.dns-667 from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:39.370: INFO: Unable to read jessie_udp@dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:39.373: INFO: Unable to read jessie_tcp@dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:39.377: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:39.380: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:39.398: INFO: Lookups using dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-667 wheezy_tcp@dns-test-service.dns-667 wheezy_udp@dns-test-service.dns-667.svc wheezy_tcp@dns-test-service.dns-667.svc wheezy_udp@_http._tcp.dns-test-service.dns-667.svc wheezy_tcp@_http._tcp.dns-test-service.dns-667.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-667 jessie_tcp@dns-test-service.dns-667 jessie_udp@dns-test-service.dns-667.svc jessie_tcp@dns-test-service.dns-667.svc jessie_udp@_http._tcp.dns-test-service.dns-667.svc jessie_tcp@_http._tcp.dns-test-service.dns-667.svc] Jan 31 01:57:44.312: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:44.316: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:44.320: INFO: Unable to read wheezy_udp@dns-test-service.dns-667 from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:44.323: INFO: Unable to read wheezy_tcp@dns-test-service.dns-667 from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find 
the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:44.326: INFO: Unable to read wheezy_udp@dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:44.330: INFO: Unable to read wheezy_tcp@dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:44.333: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:44.336: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:44.361: INFO: Unable to read jessie_udp@dns-test-service from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:44.364: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:44.368: INFO: Unable to read jessie_udp@dns-test-service.dns-667 from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:44.372: INFO: Unable to read jessie_tcp@dns-test-service.dns-667 from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:44.375: INFO: Unable to read jessie_udp@dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:44.378: INFO: Unable to read jessie_tcp@dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:44.381: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:44.386: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:44.404: INFO: Lookups using dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-667 wheezy_tcp@dns-test-service.dns-667 wheezy_udp@dns-test-service.dns-667.svc wheezy_tcp@dns-test-service.dns-667.svc wheezy_udp@_http._tcp.dns-test-service.dns-667.svc wheezy_tcp@_http._tcp.dns-test-service.dns-667.svc 
jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-667 jessie_tcp@dns-test-service.dns-667 jessie_udp@dns-test-service.dns-667.svc jessie_tcp@dns-test-service.dns-667.svc jessie_udp@_http._tcp.dns-test-service.dns-667.svc jessie_tcp@_http._tcp.dns-test-service.dns-667.svc] Jan 31 01:57:49.311: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:49.315: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:49.319: INFO: Unable to read wheezy_udp@dns-test-service.dns-667 from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:49.323: INFO: Unable to read wheezy_tcp@dns-test-service.dns-667 from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:49.327: INFO: Unable to read wheezy_udp@dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:49.329: INFO: Unable to read wheezy_tcp@dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:49.332: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:49.338: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:49.356: INFO: Unable to read jessie_udp@dns-test-service from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:49.359: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:49.362: INFO: Unable to read jessie_udp@dns-test-service.dns-667 from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:49.365: INFO: Unable to read jessie_tcp@dns-test-service.dns-667 from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:49.368: INFO: Unable to read jessie_udp@dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 
01:57:49.371: INFO: Unable to read jessie_tcp@dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:49.374: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:49.377: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:49.396: INFO: Lookups using dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-667 wheezy_tcp@dns-test-service.dns-667 wheezy_udp@dns-test-service.dns-667.svc wheezy_tcp@dns-test-service.dns-667.svc wheezy_udp@_http._tcp.dns-test-service.dns-667.svc wheezy_tcp@_http._tcp.dns-test-service.dns-667.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-667 jessie_tcp@dns-test-service.dns-667 jessie_udp@dns-test-service.dns-667.svc jessie_tcp@dns-test-service.dns-667.svc jessie_udp@_http._tcp.dns-test-service.dns-667.svc jessie_tcp@_http._tcp.dns-test-service.dns-667.svc] Jan 31 01:57:54.311: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:54.315: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:54.319: INFO: Unable to read wheezy_udp@dns-test-service.dns-667 from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:54.322: INFO: Unable to read wheezy_tcp@dns-test-service.dns-667 from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:54.326: INFO: Unable to read wheezy_udp@dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:54.329: INFO: Unable to read wheezy_tcp@dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:54.332: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:54.336: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:54.359: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:54.362: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:54.366: INFO: Unable to read jessie_udp@dns-test-service.dns-667 from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:54.369: INFO: Unable to read jessie_tcp@dns-test-service.dns-667 from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:54.371: INFO: Unable to read jessie_udp@dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:54.374: INFO: Unable to read jessie_tcp@dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:54.376: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:54.379: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:54.397: INFO: Lookups using dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-667 wheezy_tcp@dns-test-service.dns-667 wheezy_udp@dns-test-service.dns-667.svc wheezy_tcp@dns-test-service.dns-667.svc wheezy_udp@_http._tcp.dns-test-service.dns-667.svc wheezy_tcp@_http._tcp.dns-test-service.dns-667.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-667 jessie_tcp@dns-test-service.dns-667 jessie_udp@dns-test-service.dns-667.svc jessie_tcp@dns-test-service.dns-667.svc jessie_udp@_http._tcp.dns-test-service.dns-667.svc jessie_tcp@_http._tcp.dns-test-service.dns-667.svc] Jan 31 01:57:59.311: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:59.316: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:59.319: INFO: Unable to read wheezy_udp@dns-test-service.dns-667 from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:59.326: INFO: Unable to read wheezy_tcp@dns-test-service.dns-667 from pod 
dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:59.329: INFO: Unable to read wheezy_udp@dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:59.332: INFO: Unable to read wheezy_tcp@dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:59.335: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:59.338: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:59.365: INFO: Unable to read jessie_udp@dns-test-service from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:59.368: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:59.371: INFO: Unable to read jessie_udp@dns-test-service.dns-667 from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:59.374: INFO: Unable to read jessie_tcp@dns-test-service.dns-667 from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:59.377: INFO: Unable to read jessie_udp@dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:59.381: INFO: Unable to read jessie_tcp@dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:59.384: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:59.386: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-667.svc from pod dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b: the server could not find the requested resource (get pods dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b) Jan 31 01:57:59.403: INFO: Lookups using dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-667 wheezy_tcp@dns-test-service.dns-667 wheezy_udp@dns-test-service.dns-667.svc wheezy_tcp@dns-test-service.dns-667.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-667.svc wheezy_tcp@_http._tcp.dns-test-service.dns-667.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-667 jessie_tcp@dns-test-service.dns-667 jessie_udp@dns-test-service.dns-667.svc jessie_tcp@dns-test-service.dns-667.svc jessie_udp@_http._tcp.dns-test-service.dns-667.svc jessie_tcp@_http._tcp.dns-test-service.dns-667.svc] Jan 31 01:58:04.415: INFO: DNS probes using dns-667/dns-test-eb457df0-16b4-41fe-bc78-aad8ab24198b succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:58:05.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-667" for this suite. • [SLOW TEST:39.226 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":311,"completed":282,"skipped":5025,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:58:05.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Jan 31 01:58:05.340: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:58:05.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7068" for this suite. 
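------------------------------
The getting/updating/patching calls exercised in the test above all target the CRD's /status sub-resource, so only .status changes are persisted. Below is a minimal client-go sketch of the same three operations; the CRD name and status values are hypothetical, and this is not the test's own code:

    package main

    import (
        "context"
        "fmt"

        apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := apiextclient.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        crds := cs.ApiextensionsV1().CustomResourceDefinitions()

        // GET: the CRD object carries .status alongside .spec.
        crd, err := crds.Get(context.TODO(), "foos.example.com", metav1.GetOptions{}) // hypothetical CRD
        if err != nil {
            panic(err)
        }

        // UPDATE through the status endpoint: spec edits made here are ignored.
        crd.Status.StoredVersions = append(crd.Status.StoredVersions, "v2") // illustrative value
        if _, err := crds.UpdateStatus(context.TODO(), crd, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }

        // PATCH addressed explicitly at the "status" sub-resource.
        patch := []byte(`{"status":{"storedVersions":["v1","v2"]}}`)
        if _, err := crds.Patch(context.TODO(), crd.Name, types.MergePatchType,
            patch, metav1.PatchOptions{}, "status"); err != nil {
            panic(err)
        }
        fmt.Println("status sub-resource round-trip complete")
    }
------------------------------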
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":311,"completed":283,"skipped":5026,"failed":0} SS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:58:05.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:58:12.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-5406" for this suite. STEP: Destroying namespace "nsdeletetest-4718" for this suite. Jan 31 01:58:12.498: INFO: Namespace nsdeletetest-4718 was already deleted STEP: Destroying namespace "nsdeletetest-5168" for this suite. 
• [SLOW TEST:6.548 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":311,"completed":284,"skipped":5028,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:58:12.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:58:12.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9546" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":311,"completed":285,"skipped":5036,"failed":0} SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:58:12.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating configMap with name projected-configmap-test-volume-map-1d8ab67b-f33f-4d8d-a10f-cf35279305fb STEP: Creating a pod to test consume configMaps Jan 31 01:58:12.754: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cda85064-c572-431d-abe5-82847edac445" in namespace "projected-1340" to be "Succeeded or Failed" Jan 31 01:58:12.760: INFO: Pod "pod-projected-configmaps-cda85064-c572-431d-abe5-82847edac445": Phase="Pending", Reason="", readiness=false. Elapsed: 6.464486ms Jan 31 01:58:14.765: INFO: Pod "pod-projected-configmaps-cda85064-c572-431d-abe5-82847edac445": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011115922s Jan 31 01:58:16.769: INFO: Pod "pod-projected-configmaps-cda85064-c572-431d-abe5-82847edac445": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.015658491s STEP: Saw pod success Jan 31 01:58:16.769: INFO: Pod "pod-projected-configmaps-cda85064-c572-431d-abe5-82847edac445" satisfied condition "Succeeded or Failed" Jan 31 01:58:16.772: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-cda85064-c572-431d-abe5-82847edac445 container agnhost-container: STEP: delete the pod Jan 31 01:58:16.813: INFO: Waiting for pod pod-projected-configmaps-cda85064-c572-431d-abe5-82847edac445 to disappear Jan 31 01:58:16.826: INFO: Pod pod-projected-configmaps-cda85064-c572-431d-abe5-82847edac445 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:58:16.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1340" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":311,"completed":286,"skipped":5041,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:58:16.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating pod pod-subpath-test-secret-86lk STEP: Creating a pod to test atomic-volume-subpath Jan 31 01:58:16.936: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-86lk" in namespace "subpath-5204" to be "Succeeded or Failed" Jan 31 01:58:16.955: INFO: Pod "pod-subpath-test-secret-86lk": Phase="Pending", Reason="", readiness=false. Elapsed: 19.125325ms Jan 31 01:58:18.959: INFO: Pod "pod-subpath-test-secret-86lk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023094324s Jan 31 01:58:20.964: INFO: Pod "pod-subpath-test-secret-86lk": Phase="Running", Reason="", readiness=true. Elapsed: 4.027983219s Jan 31 01:58:22.969: INFO: Pod "pod-subpath-test-secret-86lk": Phase="Running", Reason="", readiness=true. Elapsed: 6.032722985s Jan 31 01:58:24.974: INFO: Pod "pod-subpath-test-secret-86lk": Phase="Running", Reason="", readiness=true. Elapsed: 8.037567437s Jan 31 01:58:26.978: INFO: Pod "pod-subpath-test-secret-86lk": Phase="Running", Reason="", readiness=true. Elapsed: 10.041873185s Jan 31 01:58:28.983: INFO: Pod "pod-subpath-test-secret-86lk": Phase="Running", Reason="", readiness=true. Elapsed: 12.046713711s Jan 31 01:58:30.987: INFO: Pod "pod-subpath-test-secret-86lk": Phase="Running", Reason="", readiness=true. Elapsed: 14.051444226s Jan 31 01:58:32.990: INFO: Pod "pod-subpath-test-secret-86lk": Phase="Running", Reason="", readiness=true. 
Elapsed: 16.054492724s Jan 31 01:58:34.995: INFO: Pod "pod-subpath-test-secret-86lk": Phase="Running", Reason="", readiness=true. Elapsed: 18.059310189s Jan 31 01:58:37.000: INFO: Pod "pod-subpath-test-secret-86lk": Phase="Running", Reason="", readiness=true. Elapsed: 20.064063975s Jan 31 01:58:39.005: INFO: Pod "pod-subpath-test-secret-86lk": Phase="Running", Reason="", readiness=true. Elapsed: 22.069181562s Jan 31 01:58:41.010: INFO: Pod "pod-subpath-test-secret-86lk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.073684072s STEP: Saw pod success Jan 31 01:58:41.010: INFO: Pod "pod-subpath-test-secret-86lk" satisfied condition "Succeeded or Failed" Jan 31 01:58:41.013: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-secret-86lk container test-container-subpath-secret-86lk: STEP: delete the pod Jan 31 01:58:41.070: INFO: Waiting for pod pod-subpath-test-secret-86lk to disappear Jan 31 01:58:41.078: INFO: Pod pod-subpath-test-secret-86lk no longer exists STEP: Deleting pod pod-subpath-test-secret-86lk Jan 31 01:58:41.078: INFO: Deleting pod "pod-subpath-test-secret-86lk" in namespace "subpath-5204" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:58:41.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5204" for this suite. • [SLOW TEST:24.252 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":311,"completed":287,"skipped":5051,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:58:41.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Jan 31 01:58:41.273: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 31 01:58:41.306: INFO: Waiting for terminating namespaces to be deleted... 
Jan 31 01:58:41.309: INFO: Logging pods the apiserver thinks are on node latest-worker before test Jan 31 01:58:41.315: INFO: chaos-controller-manager-69c479c674-tdrls from default started at 2021-01-26 11:59:56 +0000 UTC (1 container status recorded) Jan 31 01:58:41.315: INFO: Container chaos-mesh ready: true, restart count 0 Jan 31 01:58:41.315: INFO: chaos-daemon-vkxzr from default started at 2021-01-26 11:59:56 +0000 UTC (1 container status recorded) Jan 31 01:58:41.315: INFO: Container chaos-daemon ready: true, restart count 0 Jan 31 01:58:41.315: INFO: kindnet-5bf5g from kube-system started at 2021-01-26 08:08:41 +0000 UTC (1 container status recorded) Jan 31 01:58:41.315: INFO: Container kindnet-cni ready: true, restart count 0 Jan 31 01:58:41.315: INFO: kube-proxy-f59c8 from kube-system started at 2021-01-26 08:08:41 +0000 UTC (1 container status recorded) Jan 31 01:58:41.315: INFO: Container kube-proxy ready: true, restart count 0 Jan 31 01:58:41.315: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test Jan 31 01:58:41.319: INFO: chaos-daemon-g67vf from default started at 2021-01-26 11:59:56 +0000 UTC (1 container status recorded) Jan 31 01:58:41.319: INFO: Container chaos-daemon ready: true, restart count 0 Jan 31 01:58:41.319: INFO: coredns-74ff55c5b-ngxdm from kube-system started at 2021-01-27 12:43:36 +0000 UTC (1 container status recorded) Jan 31 01:58:41.319: INFO: Container coredns ready: true, restart count 0 Jan 31 01:58:41.319: INFO: coredns-74ff55c5b-ntztq from kube-system started at 2021-01-27 12:43:35 +0000 UTC (1 container status recorded) Jan 31 01:58:41.319: INFO: Container coredns ready: true, restart count 0 Jan 31 01:58:41.319: INFO: kindnet-98jtw from kube-system started at 2021-01-26 08:08:41 +0000 UTC (1 container status recorded) Jan 31 01:58:41.319: INFO: Container kindnet-cni ready: true, restart count 0 Jan 31 01:58:41.319: INFO: kube-proxy-skm7x from kube-system started at 2021-01-26 08:08:41 +0000 UTC (1 container status recorded) Jan 31 01:58:41.319: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-62c3608d-fa7b-4828-887b-aa334b1530c4 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-62c3608d-fa7b-4828-887b-aa334b1530c4 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-62c3608d-fa7b-4828-887b-aa334b1530c4 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:58:49.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5186" for this suite.
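------------------------------
The predicate validated above is plain nodeSelector matching: the test labels one node with a random key/value, then relaunches the pod with a matching selector. A minimal sketch of such a pod spec (image assumed; the label is copied from the run above; assumes corev1 "k8s.io/api/core/v1" and metav1 imports):

    func podWithNodeSelector() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "with-labels",
                    Image: "k8s.gcr.io/pause:3.2", // any small image works; tag assumed
                }},
                // The scheduler only considers nodes carrying this exact
                // label key/value, i.e. latest-worker after the test labeled it.
                NodeSelector: map[string]string{
                    "kubernetes.io/e2e-62c3608d-fa7b-4828-887b-aa334b1530c4": "42",
                },
            },
        }
    }
------------------------------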
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 • [SLOW TEST:8.445 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":311,"completed":288,"skipped":5064,"failed":0} [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:58:49.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Jan 31 01:58:54.157: INFO: Successfully updated pod "adopt-release-gj7h2" STEP: Checking that the Job readopts the Pod Jan 31 01:58:54.157: INFO: Waiting up to 15m0s for pod "adopt-release-gj7h2" in namespace "job-5391" to be "adopted" Jan 31 01:58:54.209: INFO: Pod "adopt-release-gj7h2": Phase="Running", Reason="", readiness=true. Elapsed: 52.265635ms Jan 31 01:58:56.213: INFO: Pod "adopt-release-gj7h2": Phase="Running", Reason="", readiness=true. Elapsed: 2.055986111s Jan 31 01:58:56.213: INFO: Pod "adopt-release-gj7h2" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Jan 31 01:58:56.743: INFO: Successfully updated pod "adopt-release-gj7h2" STEP: Checking that the Job releases the Pod Jan 31 01:58:56.743: INFO: Waiting up to 15m0s for pod "adopt-release-gj7h2" in namespace "job-5391" to be "released" Jan 31 01:58:56.781: INFO: Pod "adopt-release-gj7h2": Phase="Running", Reason="", readiness=true. Elapsed: 37.312843ms Jan 31 01:58:58.798: INFO: Pod "adopt-release-gj7h2": Phase="Running", Reason="", readiness=true. Elapsed: 2.054886572s Jan 31 01:58:58.798: INFO: Pod "adopt-release-gj7h2" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:58:58.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-5391" for this suite. 
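------------------------------
"Adopted" and "released" in the Job test above are statements about the pod's controller ownerReference: orphaning the pod clears the reference and the Job controller re-adds it because the pod's labels still match the Job's selector; stripping those labels makes the controller release the pod again. A small helper of the kind the poll loop needs (assumes corev1 "k8s.io/api/core/v1", batchv1 "k8s.io/api/batch/v1", and metav1 imports):

    // adoptedBy reports whether pod is currently controlled by job.
    func adoptedBy(pod *corev1.Pod, job *batchv1.Job) bool {
        ref := metav1.GetControllerOf(pod) // the controller ownerReference, if any
        return ref != nil && ref.Kind == "Job" && ref.UID == job.UID
    }
------------------------------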
• [SLOW TEST:9.398 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":311,"completed":289,"skipped":5064,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:58:58.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Jan 31 01:58:59.353: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jan 31 01:59:02.907: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3167 create -f -' Jan 31 01:59:07.119: INFO: stderr: "" Jan 31 01:59:07.119: INFO: stdout: "e2e-test-crd-publish-openapi-9598-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jan 31 01:59:07.119: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3167 delete e2e-test-crd-publish-openapi-9598-crds test-cr' Jan 31 01:59:07.216: INFO: stderr: "" Jan 31 01:59:07.216: INFO: stdout: "e2e-test-crd-publish-openapi-9598-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Jan 31 01:59:07.216: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3167 apply -f -' Jan 31 01:59:07.497: INFO: stderr: "" Jan 31 01:59:07.497: INFO: stdout: "e2e-test-crd-publish-openapi-9598-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jan 31 01:59:07.498: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3167 delete e2e-test-crd-publish-openapi-9598-crds test-cr' Jan 31 01:59:07.611: INFO: stderr: "" Jan 31 01:59:07.611: INFO: stdout: "e2e-test-crd-publish-openapi-9598-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jan 31 01:59:07.611: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config
--namespace=crd-publish-openapi-3167 explain e2e-test-crd-publish-openapi-9598-crds' Jan 31 01:59:07.876: INFO: stderr: "" Jan 31 01:59:07.876: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9598-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:59:09.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3167" for this suite. • [SLOW TEST:11.010 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":311,"completed":290,"skipped":5122,"failed":0} [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:59:09.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 31 01:59:14.135: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach]
[k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:59:14.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1457" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":311,"completed":291,"skipped":5122,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:59:14.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should run through a ConfigMap lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating a ConfigMap STEP: fetching the ConfigMap STEP: patching the ConfigMap STEP: listing all ConfigMaps in all namespaces with a label selector STEP: deleting the ConfigMap by collection with a label selector STEP: listing all ConfigMaps in test namespace [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:59:14.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3957" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":311,"completed":292,"skipped":5134,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:59:14.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating projection with secret that has name projected-secret-test-cb1c5d2d-1976-4569-adb6-7d9ad5c20a98 STEP: Creating a pod to test consume secrets Jan 31 01:59:14.752: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-07d35a8d-02d5-4f95-b651-187194a2d207" in namespace "projected-6831" to be "Succeeded or Failed" Jan 31 01:59:14.782: INFO: Pod "pod-projected-secrets-07d35a8d-02d5-4f95-b651-187194a2d207": Phase="Pending", Reason="", readiness=false. 
Elapsed: 30.19813ms Jan 31 01:59:16.826: INFO: Pod "pod-projected-secrets-07d35a8d-02d5-4f95-b651-187194a2d207": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074286847s Jan 31 01:59:18.830: INFO: Pod "pod-projected-secrets-07d35a8d-02d5-4f95-b651-187194a2d207": Phase="Running", Reason="", readiness=true. Elapsed: 4.078168346s Jan 31 01:59:20.835: INFO: Pod "pod-projected-secrets-07d35a8d-02d5-4f95-b651-187194a2d207": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.082893232s STEP: Saw pod success Jan 31 01:59:20.835: INFO: Pod "pod-projected-secrets-07d35a8d-02d5-4f95-b651-187194a2d207" satisfied condition "Succeeded or Failed" Jan 31 01:59:20.838: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-07d35a8d-02d5-4f95-b651-187194a2d207 container projected-secret-volume-test: STEP: delete the pod Jan 31 01:59:20.875: INFO: Waiting for pod pod-projected-secrets-07d35a8d-02d5-4f95-b651-187194a2d207 to disappear Jan 31 01:59:20.882: INFO: Pod pod-projected-secrets-07d35a8d-02d5-4f95-b651-187194a2d207 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:59:20.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6831" for this suite. • [SLOW TEST:6.234 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":293,"skipped":5143,"failed":0} SSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:59:20.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: getting the auto-created API token STEP: reading a file in the container Jan 31 01:59:25.499: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6632 pod-service-account-1de17963-ef2e-4212-a3fd-5d6cf461de01 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Jan 31 01:59:25.754: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6632 pod-service-account-1de17963-ef2e-4212-a3fd-5d6cf461de01 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Jan 31 01:59:25.959: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6632 
pod-service-account-1de17963-ef2e-4212-a3fd-5d6cf461de01 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:59:26.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-6632" for this suite. • [SLOW TEST:5.272 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":311,"completed":294,"skipped":5153,"failed":0} SSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:59:26.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Jan 31 01:59:26.329: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-4e0d43ce-ad71-4aa9-a0ea-8f1ba1baeeb0" in namespace "security-context-test-3498" to be "Succeeded or Failed" Jan 31 01:59:26.331: INFO: Pod "busybox-privileged-false-4e0d43ce-ad71-4aa9-a0ea-8f1ba1baeeb0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062299ms Jan 31 01:59:28.336: INFO: Pod "busybox-privileged-false-4e0d43ce-ad71-4aa9-a0ea-8f1ba1baeeb0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006896993s Jan 31 01:59:30.342: INFO: Pod "busybox-privileged-false-4e0d43ce-ad71-4aa9-a0ea-8f1ba1baeeb0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012359485s Jan 31 01:59:30.342: INFO: Pod "busybox-privileged-false-4e0d43ce-ad71-4aa9-a0ea-8f1ba1baeeb0" satisfied condition "Succeeded or Failed" Jan 31 01:59:30.367: INFO: Got logs for pod "busybox-privileged-false-4e0d43ce-ad71-4aa9-a0ea-8f1ba1baeeb0": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:59:30.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3498" for this suite. 
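------------------------------
The "RTNETLINK answers: Operation not permitted" log line above is the expected result: with privileged set to false the container lacks CAP_NET_ADMIN and cannot add network interfaces. A sketch of the pod shape (image and command assumed; the "|| true" keeps the pod's phase at Succeeded despite the denied operation; corev1/metav1 imports):

    func unprivilegedBusyboxPod() *corev1.Pod {
        privileged := false
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "busybox-privileged-false"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:  "busybox",
                    Image: "busybox",
                    // Creating an interface fails without privilege; swallow
                    // the exit code so the pod still completes.
                    Command:         []string{"sh", "-c", "ip link add dummy0 type dummy || true"},
                    SecurityContext: &corev1.SecurityContext{Privileged: &privileged},
                }},
            },
        }
    }
------------------------------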
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":295,"skipped":5158,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:59:30.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test emptydir volume type on node default medium Jan 31 01:59:30.537: INFO: Waiting up to 5m0s for pod "pod-81dfa721-1689-4cbe-a8b6-358b847e76be" in namespace "emptydir-2342" to be "Succeeded or Failed" Jan 31 01:59:30.553: INFO: Pod "pod-81dfa721-1689-4cbe-a8b6-358b847e76be": Phase="Pending", Reason="", readiness=false. Elapsed: 15.36618ms Jan 31 01:59:32.761: INFO: Pod "pod-81dfa721-1689-4cbe-a8b6-358b847e76be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223889474s Jan 31 01:59:34.766: INFO: Pod "pod-81dfa721-1689-4cbe-a8b6-358b847e76be": Phase="Running", Reason="", readiness=true. Elapsed: 4.228716338s Jan 31 01:59:36.771: INFO: Pod "pod-81dfa721-1689-4cbe-a8b6-358b847e76be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.233528614s STEP: Saw pod success Jan 31 01:59:36.771: INFO: Pod "pod-81dfa721-1689-4cbe-a8b6-358b847e76be" satisfied condition "Succeeded or Failed" Jan 31 01:59:36.773: INFO: Trying to get logs from node latest-worker2 pod pod-81dfa721-1689-4cbe-a8b6-358b847e76be container test-container: STEP: delete the pod Jan 31 01:59:36.788: INFO: Waiting for pod pod-81dfa721-1689-4cbe-a8b6-358b847e76be to disappear Jan 31 01:59:36.793: INFO: Pod pod-81dfa721-1689-4cbe-a8b6-358b847e76be no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 01:59:36.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2342" for this suite. 
• [SLOW TEST:6.441 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":296,"skipped":5160,"failed":0} SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 01:59:36.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jan 31 01:59:45.065: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 31 01:59:45.068: INFO: Pod pod-with-poststart-exec-hook still exists Jan 31 01:59:47.069: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 31 01:59:47.104: INFO: Pod pod-with-poststart-exec-hook still exists Jan 31 01:59:49.069: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 31 01:59:49.073: INFO: Pod pod-with-poststart-exec-hook still exists Jan 31 01:59:51.069: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 31 01:59:51.073: INFO: Pod pod-with-poststart-exec-hook still exists Jan 31 01:59:53.069: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 31 01:59:53.073: INFO: Pod pod-with-poststart-exec-hook still exists Jan 31 01:59:55.069: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 31 01:59:55.073: INFO: Pod pod-with-poststart-exec-hook still exists Jan 31 01:59:57.069: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 31 01:59:57.075: INFO: Pod pod-with-poststart-exec-hook still exists Jan 31 01:59:59.069: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 31 01:59:59.079: INFO: Pod pod-with-poststart-exec-hook still exists Jan 31 02:00:01.069: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 31 02:00:01.074: INFO: Pod pod-with-poststart-exec-hook still exists Jan 31 02:00:03.069: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 31 02:00:03.073: INFO: Pod pod-with-poststart-exec-hook still exists Jan 31 02:00:05.069: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 31 02:00:05.074: INFO: Pod 
pod-with-poststart-exec-hook still exists Jan 31 02:00:07.069: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 31 02:00:07.074: INFO: Pod pod-with-poststart-exec-hook still exists Jan 31 02:00:09.069: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 31 02:00:09.074: INFO: Pod pod-with-poststart-exec-hook still exists Jan 31 02:00:11.069: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 31 02:00:11.074: INFO: Pod pod-with-poststart-exec-hook still exists Jan 31 02:00:13.069: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 31 02:00:13.073: INFO: Pod pod-with-poststart-exec-hook still exists Jan 31 02:00:15.069: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 31 02:00:15.074: INFO: Pod pod-with-poststart-exec-hook still exists Jan 31 02:00:17.069: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 31 02:00:17.074: INFO: Pod pod-with-poststart-exec-hook still exists Jan 31 02:00:19.069: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 31 02:00:19.073: INFO: Pod pod-with-poststart-exec-hook still exists Jan 31 02:00:21.069: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 31 02:00:21.074: INFO: Pod pod-with-poststart-exec-hook still exists Jan 31 02:00:23.069: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 31 02:00:23.073: INFO: Pod pod-with-poststart-exec-hook still exists Jan 31 02:00:25.069: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 31 02:00:25.075: INFO: Pod pod-with-poststart-exec-hook still exists Jan 31 02:00:27.069: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 31 02:00:27.074: INFO: Pod pod-with-poststart-exec-hook still exists Jan 31 02:00:29.069: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 31 02:00:29.074: INFO: Pod pod-with-poststart-exec-hook still exists Jan 31 02:00:31.069: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 31 02:00:31.073: INFO: Pod pod-with-poststart-exec-hook still exists Jan 31 02:00:33.069: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 31 02:00:33.073: INFO: Pod pod-with-poststart-exec-hook still exists Jan 31 02:00:35.069: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 31 02:00:35.074: INFO: Pod pod-with-poststart-exec-hook still exists Jan 31 02:00:37.069: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 31 02:00:37.074: INFO: Pod pod-with-poststart-exec-hook still exists Jan 31 02:00:39.069: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 31 02:00:39.073: INFO: Pod pod-with-poststart-exec-hook still exists Jan 31 02:00:41.069: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 31 02:00:41.072: INFO: Pod pod-with-poststart-exec-hook still exists Jan 31 02:00:43.069: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 31 02:00:43.084: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 02:00:43.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1738" for this suite. 
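The postStart mechanism polled above can be sketched with a self-contained pod. This is not the suite's exact hook (which calls back into a helper pod), just the same lifecycle field, with illustrative names:

  cat <<'EOF' | kubectl create -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: poststart-demo
  spec:
    containers:
    - name: main
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      lifecycle:
        postStart:
          exec:
            # runs inside the container right after it starts; a failure
            # here would get the container killed and restarted
            command: ["sh", "-c", "echo poststart-ran > /tmp/poststart"]
  EOF
  kubectl exec poststart-demo -- cat /tmp/poststart   # expect: poststart-ran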
• [SLOW TEST:66.276 seconds] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:43 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":311,"completed":297,"skipped":5163,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 02:00:43.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:299 [It] should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating a replication controller Jan 31 02:00:43.162: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-5768 create -f -' Jan 31 02:00:43.541: INFO: stderr: "" Jan 31 02:00:43.542: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 31 02:00:43.542: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-5768 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 31 02:00:43.708: INFO: stderr: "" Jan 31 02:00:43.708: INFO: stdout: "update-demo-nautilus-bcxhd update-demo-nautilus-cxvt2 " Jan 31 02:00:43.708: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-5768 get pods update-demo-nautilus-bcxhd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 31 02:00:43.821: INFO: stderr: "" Jan 31 02:00:43.821: INFO: stdout: "" Jan 31 02:00:43.821: INFO: update-demo-nautilus-bcxhd is created but not running Jan 31 02:00:48.821: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-5768 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 31 02:00:48.925: INFO: stderr: "" Jan 31 02:00:48.925: INFO: stdout: "update-demo-nautilus-bcxhd update-demo-nautilus-cxvt2 " Jan 31 02:00:48.925: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-5768 get pods update-demo-nautilus-bcxhd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 31 02:00:49.022: INFO: stderr: "" Jan 31 02:00:49.022: INFO: stdout: "true" Jan 31 02:00:49.022: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-5768 get pods update-demo-nautilus-bcxhd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 31 02:00:49.109: INFO: stderr: "" Jan 31 02:00:49.109: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 31 02:00:49.109: INFO: validating pod update-demo-nautilus-bcxhd Jan 31 02:00:49.173: INFO: got data: { "image": "nautilus.jpg" } Jan 31 02:00:49.173: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 31 02:00:49.173: INFO: update-demo-nautilus-bcxhd is verified up and running Jan 31 02:00:49.173: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-5768 get pods update-demo-nautilus-cxvt2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 31 02:00:49.266: INFO: stderr: "" Jan 31 02:00:49.266: INFO: stdout: "true" Jan 31 02:00:49.267: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-5768 get pods update-demo-nautilus-cxvt2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 31 02:00:49.367: INFO: stderr: "" Jan 31 02:00:49.367: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 31 02:00:49.367: INFO: validating pod update-demo-nautilus-cxvt2 Jan 31 02:00:49.370: INFO: got data: { "image": "nautilus.jpg" } Jan 31 02:00:49.370: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 31 02:00:49.370: INFO: update-demo-nautilus-cxvt2 is verified up and running STEP: using delete to clean up resources Jan 31 02:00:49.370: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-5768 delete --grace-period=0 --force -f -' Jan 31 02:00:49.469: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 31 02:00:49.469: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 31 02:00:49.469: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-5768 get rc,svc -l name=update-demo --no-headers' Jan 31 02:00:49.579: INFO: stderr: "No resources found in kubectl-5768 namespace.\n" Jan 31 02:00:49.579: INFO: stdout: "" Jan 31 02:00:49.579: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-5768 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 31 02:00:49.705: INFO: stderr: "" Jan 31 02:00:49.705: INFO: stdout: "update-demo-nautilus-bcxhd\nupdate-demo-nautilus-cxvt2\n" Jan 31 02:00:50.206: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-5768 get rc,svc -l name=update-demo --no-headers' Jan 31 02:00:50.307: INFO: stderr: "No resources found in kubectl-5768 namespace.\n" Jan 31 02:00:50.307: INFO: stdout: "" Jan 31 02:00:50.307: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-5768 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 31 02:00:50.419: INFO: stderr: "" Jan 31 02:00:50.419: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 02:00:50.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5768" for this suite. 
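Condensed, the create-and-stop sequence above is (manifest path illustrative; the template and delete flags are the ones the suite itself invokes):

  kubectl create -f update-demo-nautilus-rc.yaml
  kubectl get pods -l name=update-demo -o template \
    --template='{{range .items}}{{.metadata.name}} {{end}}'
  kubectl delete rc update-demo-nautilus --grace-period=0 --force
  kubectl get rc,svc -l name=update-demo --no-headers   # eventually: No resources found

Force deletion returns before the pods are actually gone, which is why the suite polls twice before the pod list comes back empty.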
• [SLOW TEST:7.333 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:297 should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":311,"completed":298,"skipped":5182,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 02:00:50.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test emptydir 0666 on tmpfs Jan 31 02:00:50.764: INFO: Waiting up to 5m0s for pod "pod-2ff34513-074c-442a-b9e3-96c8b50dd9e7" in namespace "emptydir-2998" to be "Succeeded or Failed" Jan 31 02:00:50.819: INFO: Pod "pod-2ff34513-074c-442a-b9e3-96c8b50dd9e7": Phase="Pending", Reason="", readiness=false. Elapsed: 54.602033ms Jan 31 02:00:52.844: INFO: Pod "pod-2ff34513-074c-442a-b9e3-96c8b50dd9e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080299615s Jan 31 02:00:54.848: INFO: Pod "pod-2ff34513-074c-442a-b9e3-96c8b50dd9e7": Phase="Running", Reason="", readiness=true. Elapsed: 4.084053968s Jan 31 02:00:56.853: INFO: Pod "pod-2ff34513-074c-442a-b9e3-96c8b50dd9e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.088686464s STEP: Saw pod success Jan 31 02:00:56.853: INFO: Pod "pod-2ff34513-074c-442a-b9e3-96c8b50dd9e7" satisfied condition "Succeeded or Failed" Jan 31 02:00:56.856: INFO: Trying to get logs from node latest-worker pod pod-2ff34513-074c-442a-b9e3-96c8b50dd9e7 container test-container: STEP: delete the pod Jan 31 02:00:56.879: INFO: Waiting for pod pod-2ff34513-074c-442a-b9e3-96c8b50dd9e7 to disappear Jan 31 02:00:56.896: INFO: Pod pod-2ff34513-074c-442a-b9e3-96c8b50dd9e7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 02:00:56.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2998" for this suite. 
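A hand-rolled equivalent of this tmpfs/permissions check, assuming a non-root UID that can write the volume (emptyDir mounts are created world-writable); names and UID are illustrative:

  cat <<'EOF' | kubectl create -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-0666-tmpfs-demo
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1001        # non-root
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "echo hi > /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir:
        medium: Memory       # backs the volume with tmpfs
  EOF
  kubectl logs emptydir-0666-tmpfs-demo   # expect mode -rw-rw-rw- on /test-volume/f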
• [SLOW TEST:6.476 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":299,"skipped":5201,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 02:00:56.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating secret secrets-4133/secret-test-bb3f6837-d66c-49ec-ba86-c31b5e563933 STEP: Creating a pod to test consume secrets Jan 31 02:00:57.023: INFO: Waiting up to 5m0s for pod "pod-configmaps-57e18612-8401-41ee-aa03-ceaa9fc4fd74" in namespace "secrets-4133" to be "Succeeded or Failed" Jan 31 02:00:57.028: INFO: Pod "pod-configmaps-57e18612-8401-41ee-aa03-ceaa9fc4fd74": Phase="Pending", Reason="", readiness=false. Elapsed: 4.397294ms Jan 31 02:00:59.031: INFO: Pod "pod-configmaps-57e18612-8401-41ee-aa03-ceaa9fc4fd74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007989144s Jan 31 02:01:01.035: INFO: Pod "pod-configmaps-57e18612-8401-41ee-aa03-ceaa9fc4fd74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011937595s STEP: Saw pod success Jan 31 02:01:01.035: INFO: Pod "pod-configmaps-57e18612-8401-41ee-aa03-ceaa9fc4fd74" satisfied condition "Succeeded or Failed" Jan 31 02:01:01.038: INFO: Trying to get logs from node latest-worker pod pod-configmaps-57e18612-8401-41ee-aa03-ceaa9fc4fd74 container env-test: STEP: delete the pod Jan 31 02:01:01.054: INFO: Waiting for pod pod-configmaps-57e18612-8401-41ee-aa03-ceaa9fc4fd74 to disappear Jan 31 02:01:01.072: INFO: Pod pod-configmaps-57e18612-8401-41ee-aa03-ceaa9fc4fd74 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 02:01:01.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4133" for this suite. 
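The secret-to-environment plumbing tested here is, in sketch form (secret, key, and pod names illustrative):

  kubectl create secret generic secret-test --from-literal=data-1=value-1
  cat <<'EOF' | kubectl create -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: env-from-secret-demo
  spec:
    restartPolicy: Never
    containers:
    - name: env-test
      image: busybox
      command: ["sh", "-c", "echo $SECRET_DATA"]
      env:
      - name: SECRET_DATA
        valueFrom:
          secretKeyRef:
            name: secret-test
            key: data-1
  EOF
  kubectl logs env-from-secret-demo   # expect: value-1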
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":311,"completed":300,"skipped":5216,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should run through the lifecycle of Pods and PodStatus [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 02:01:01.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should run through the lifecycle of Pods and PodStatus [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating a Pod with a static label STEP: watching for Pod to be ready Jan 31 02:01:01.266: INFO: observed Pod pod-test in namespace pods-7983 in phase Pending with labels: map[test-pod-static:true] & conditions [] Jan 31 02:01:01.274: INFO: observed Pod pod-test in namespace pods-7983 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 02:01:01 +0000 UTC }] Jan 31 02:01:01.295: INFO: observed Pod pod-test in namespace pods-7983 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 02:01:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 02:01:01 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-31 02:01:01 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 02:01:01 +0000 UTC }] Jan 31 02:01:04.389: INFO: Found Pod pod-test in namespace pods-7983 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 02:01:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 02:01:04 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 02:01:04 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-31 02:01:01 +0000 UTC }] STEP: patching the Pod with a new Label and updated data Jan 31 02:01:04.419: INFO: observed event type ADDED STEP: getting the Pod and ensuring that it's patched STEP: getting the PodStatus STEP: replacing the Pod's status Ready condition to False STEP: check the Pod again to ensure its Ready conditions are False STEP: deleting the Pod via a Collection with a LabelSelector STEP: watching for the Pod to be deleted Jan 31 02:01:04.521: INFO: observed event type ADDED Jan 31 02:01:04.521: INFO: observed event type MODIFIED Jan 31 02:01:04.521: INFO: observed event type MODIFIED Jan 31 02:01:04.522: INFO: observed event type MODIFIED Jan 31 02:01:04.522: INFO: observed event type MODIFIED Jan 31 02:01:04.522: INFO: observed event type MODIFIED Jan 31 02:01:04.522: INFO: observed event type MODIFIED [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 
Jan 31 02:01:04.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7983" for this suite. •{"msg":"PASSED [k8s.io] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":311,"completed":301,"skipped":5248,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 02:01:04.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating configMap with name cm-test-opt-del-ce105f19-2d5e-4031-ad28-6f6a26ae8ec8 STEP: Creating configMap with name cm-test-opt-upd-51b7bc7d-61f0-4ba9-b97d-d882fc67aa63 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-ce105f19-2d5e-4031-ad28-6f6a26ae8ec8 STEP: Updating configmap cm-test-opt-upd-51b7bc7d-61f0-4ba9-b97d-d882fc67aa63 STEP: Creating configMap with name cm-test-opt-create-7a447e4b-72e9-41e6-9ba0-3eab4e4e3a6b STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 02:02:43.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7202" for this suite. • [SLOW TEST:99.002 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":311,"completed":302,"skipped":5261,"failed":0} SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 02:02:43.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Jan 31 02:02:43.648: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 31 02:02:43.659: INFO: Waiting for terminating namespaces to be deleted... 
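Returning to the ConfigMap optional-updates spec that just passed: the behaviour it waits roughly 99 seconds for can be sketched as follows (names illustrative; optional: true lets the pod start even if the ConfigMap is absent, and the kubelet refreshes mounted data on its sync period, so the final read may lag by up to a minute or so):

  kubectl create configmap cm-test-opt-upd --from-literal=data-1=value-1
  cat <<'EOF' | kubectl create -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: cm-watch-demo
  spec:
    containers:
    - name: watch
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
      - name: cfg
        mountPath: /etc/cfg
    volumes:
    - name: cfg
      configMap:
        name: cm-test-opt-upd
        optional: true
  EOF
  # update the ConfigMap in place and watch the mounted file follow
  kubectl create configmap cm-test-opt-upd --from-literal=data-1=value-2 \
    --dry-run=client -o yaml | kubectl apply -f -
  kubectl exec cm-watch-demo -- cat /etc/cfg/data-1   # flips to value-2 after the sync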
Jan 31 02:02:43.661: INFO: Logging pods the apiserver thinks are on node latest-worker before test Jan 31 02:02:43.666: INFO: pod-configmaps-2af5302a-d1b8-42ea-a0a3-ff8c78b44b66 from configmap-7202 started at 2021-01-31 02:01:05 +0000 UTC (3 container statuses recorded) Jan 31 02:02:43.666: INFO: Container createcm-volume-test ready: true, restart count 0 Jan 31 02:02:43.666: INFO: Container delcm-volume-test ready: true, restart count 0 Jan 31 02:02:43.666: INFO: Container updcm-volume-test ready: true, restart count 0 Jan 31 02:02:43.666: INFO: chaos-controller-manager-69c479c674-tdrls from default started at 2021-01-26 11:59:56 +0000 UTC (1 container status recorded) Jan 31 02:02:43.666: INFO: Container chaos-mesh ready: true, restart count 0 Jan 31 02:02:43.666: INFO: chaos-daemon-vkxzr from default started at 2021-01-26 11:59:56 +0000 UTC (1 container status recorded) Jan 31 02:02:43.666: INFO: Container chaos-daemon ready: true, restart count 0 Jan 31 02:02:43.666: INFO: kindnet-5bf5g from kube-system started at 2021-01-26 08:08:41 +0000 UTC (1 container status recorded) Jan 31 02:02:43.667: INFO: Container kindnet-cni ready: true, restart count 0 Jan 31 02:02:43.667: INFO: kube-proxy-f59c8 from kube-system started at 2021-01-26 08:08:41 +0000 UTC (1 container status recorded) Jan 31 02:02:43.667: INFO: Container kube-proxy ready: true, restart count 0 Jan 31 02:02:43.667: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test Jan 31 02:02:43.673: INFO: chaos-daemon-g67vf from default started at 2021-01-26 11:59:56 +0000 UTC (1 container status recorded) Jan 31 02:02:43.673: INFO: Container chaos-daemon ready: true, restart count 0 Jan 31 02:02:43.673: INFO: coredns-74ff55c5b-ngxdm from kube-system started at 2021-01-27 12:43:36 +0000 UTC (1 container status recorded) Jan 31 02:02:43.673: INFO: Container coredns ready: true, restart count 0 Jan 31 02:02:43.673: INFO: coredns-74ff55c5b-ntztq from kube-system started at 2021-01-27 12:43:35 +0000 UTC (1 container status recorded) Jan 31 02:02:43.673: INFO: Container coredns ready: true, restart count 0 Jan 31 02:02:43.673: INFO: kindnet-98jtw from kube-system started at 2021-01-26 08:08:41 +0000 UTC (1 container status recorded) Jan 31 02:02:43.673: INFO: Container kindnet-cni ready: true, restart count 0 Jan 31 02:02:43.674: INFO: kube-proxy-skm7x from kube-system started at 2021-01-26 08:08:41 +0000 UTC (1 container status recorded) Jan 31 02:02:43.674: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.165f2eb8fdf71fd9], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match Pod's node affinity.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 02:02:44.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2486" for this suite.
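The FailedScheduling event above comes from a pod whose nodeSelector matches no node; a minimal sketch (label, pod name, and image are illustrative):

  cat <<'EOF' | kubectl create -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: restricted-pod-demo
  spec:
    nodeSelector:
      nonexistent-label: nonempty   # no node carries this label
    containers:
    - name: pause
      image: k8s.gcr.io/pause:3.2
  EOF
  kubectl get events --field-selector reason=FailedScheduling
  # expect a Warning explaining that no node matched the pod's node affinity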
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":311,"completed":303,"skipped":5266,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 02:02:44.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:299 [It] should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating a replication controller Jan 31 02:02:44.862: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-2107 create -f -' Jan 31 02:02:45.135: INFO: stderr: "" Jan 31 02:02:45.135: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 31 02:02:45.135: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-2107 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 31 02:02:45.246: INFO: stderr: "" Jan 31 02:02:45.246: INFO: stdout: "update-demo-nautilus-jw2r7 update-demo-nautilus-zwlt7 " Jan 31 02:02:45.246: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-2107 get pods update-demo-nautilus-jw2r7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 31 02:02:45.358: INFO: stderr: "" Jan 31 02:02:45.358: INFO: stdout: "" Jan 31 02:02:45.358: INFO: update-demo-nautilus-jw2r7 is created but not running Jan 31 02:02:50.358: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-2107 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 31 02:02:50.467: INFO: stderr: "" Jan 31 02:02:50.467: INFO: stdout: "update-demo-nautilus-jw2r7 update-demo-nautilus-zwlt7 " Jan 31 02:02:50.468: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-2107 get pods update-demo-nautilus-jw2r7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 31 02:02:50.560: INFO: stderr: "" Jan 31 02:02:50.560: INFO: stdout: "true" Jan 31 02:02:50.560: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-2107 get pods update-demo-nautilus-jw2r7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 31 02:02:50.660: INFO: stderr: "" Jan 31 02:02:50.660: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 31 02:02:50.660: INFO: validating pod update-demo-nautilus-jw2r7 Jan 31 02:02:50.692: INFO: got data: { "image": "nautilus.jpg" } Jan 31 02:02:50.692: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 31 02:02:50.692: INFO: update-demo-nautilus-jw2r7 is verified up and running Jan 31 02:02:50.692: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-2107 get pods update-demo-nautilus-zwlt7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 31 02:02:50.793: INFO: stderr: "" Jan 31 02:02:50.793: INFO: stdout: "true" Jan 31 02:02:50.793: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-2107 get pods update-demo-nautilus-zwlt7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 31 02:02:50.915: INFO: stderr: "" Jan 31 02:02:50.915: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 31 02:02:50.915: INFO: validating pod update-demo-nautilus-zwlt7 Jan 31 02:02:50.919: INFO: got data: { "image": "nautilus.jpg" } Jan 31 02:02:50.919: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 31 02:02:50.919: INFO: update-demo-nautilus-zwlt7 is verified up and running STEP: scaling down the replication controller Jan 31 02:02:50.922: INFO: scanned /root for discovery docs: Jan 31 02:02:50.922: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-2107 scale rc update-demo-nautilus --replicas=1 --timeout=5m' Jan 31 02:02:52.051: INFO: stderr: "" Jan 31 02:02:52.051: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jan 31 02:02:52.051: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-2107 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 31 02:02:52.155: INFO: stderr: "" Jan 31 02:02:52.155: INFO: stdout: "update-demo-nautilus-jw2r7 update-demo-nautilus-zwlt7 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 31 02:02:57.155: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-2107 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 31 02:02:57.261: INFO: stderr: "" Jan 31 02:02:57.261: INFO: stdout: "update-demo-nautilus-jw2r7 update-demo-nautilus-zwlt7 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 31 02:03:02.262: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-2107 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 31 02:03:02.361: INFO: stderr: "" Jan 31 02:03:02.361: INFO: stdout: "update-demo-nautilus-jw2r7 " Jan 31 02:03:02.362: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-2107 get pods update-demo-nautilus-jw2r7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 31 02:03:02.466: INFO: stderr: "" Jan 31 02:03:02.466: INFO: stdout: "true" Jan 31 02:03:02.466: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-2107 get pods update-demo-nautilus-jw2r7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 31 02:03:02.567: INFO: stderr: "" Jan 31 02:03:02.568: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 31 02:03:02.568: INFO: validating pod update-demo-nautilus-jw2r7 Jan 31 02:03:02.571: INFO: got data: { "image": "nautilus.jpg" } Jan 31 02:03:02.571: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 31 02:03:02.571: INFO: update-demo-nautilus-jw2r7 is verified up and running STEP: scaling up the replication controller Jan 31 02:03:02.574: INFO: scanned /root for discovery docs: Jan 31 02:03:02.574: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-2107 scale rc update-demo-nautilus --replicas=2 --timeout=5m' Jan 31 02:03:03.750: INFO: stderr: "" Jan 31 02:03:03.750: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jan 31 02:03:03.750: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-2107 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 31 02:03:03.852: INFO: stderr: "" Jan 31 02:03:03.852: INFO: stdout: "update-demo-nautilus-jw2r7 update-demo-nautilus-ljjg5 " Jan 31 02:03:03.852: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-2107 get pods update-demo-nautilus-jw2r7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 31 02:03:03.956: INFO: stderr: "" Jan 31 02:03:03.956: INFO: stdout: "true" Jan 31 02:03:03.956: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-2107 get pods update-demo-nautilus-jw2r7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 31 02:03:04.051: INFO: stderr: "" Jan 31 02:03:04.051: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 31 02:03:04.051: INFO: validating pod update-demo-nautilus-jw2r7 Jan 31 02:03:04.054: INFO: got data: { "image": "nautilus.jpg" } Jan 31 02:03:04.054: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 31 02:03:04.054: INFO: update-demo-nautilus-jw2r7 is verified up and running Jan 31 02:03:04.054: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-2107 get pods update-demo-nautilus-ljjg5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 31 02:03:04.162: INFO: stderr: "" Jan 31 02:03:04.162: INFO: stdout: "" Jan 31 02:03:04.162: INFO: update-demo-nautilus-ljjg5 is created but not running Jan 31 02:03:09.162: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-2107 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 31 02:03:09.280: INFO: stderr: "" Jan 31 02:03:09.280: INFO: stdout: "update-demo-nautilus-jw2r7 update-demo-nautilus-ljjg5 " Jan 31 02:03:09.280: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-2107 get pods update-demo-nautilus-jw2r7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 31 02:03:09.392: INFO: stderr: "" Jan 31 02:03:09.392: INFO: stdout: "true" Jan 31 02:03:09.392: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-2107 get pods update-demo-nautilus-jw2r7 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 31 02:03:09.500: INFO: stderr: "" Jan 31 02:03:09.500: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 31 02:03:09.500: INFO: validating pod update-demo-nautilus-jw2r7 Jan 31 02:03:09.503: INFO: got data: { "image": "nautilus.jpg" } Jan 31 02:03:09.503: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 31 02:03:09.503: INFO: update-demo-nautilus-jw2r7 is verified up and running Jan 31 02:03:09.503: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-2107 get pods update-demo-nautilus-ljjg5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 31 02:03:09.590: INFO: stderr: "" Jan 31 02:03:09.590: INFO: stdout: "true" Jan 31 02:03:09.591: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-2107 get pods update-demo-nautilus-ljjg5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 31 02:03:09.690: INFO: stderr: "" Jan 31 02:03:09.690: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 31 02:03:09.690: INFO: validating pod update-demo-nautilus-ljjg5 Jan 31 02:03:09.694: INFO: got data: { "image": "nautilus.jpg" } Jan 31 02:03:09.694: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 31 02:03:09.694: INFO: update-demo-nautilus-ljjg5 is verified up and running STEP: using delete to clean up resources Jan 31 02:03:09.694: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-2107 delete --grace-period=0 --force -f -' Jan 31 02:03:09.801: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 31 02:03:09.801: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 31 02:03:09.801: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-2107 get rc,svc -l name=update-demo --no-headers' Jan 31 02:03:09.902: INFO: stderr: "No resources found in kubectl-2107 namespace.\n" Jan 31 02:03:09.902: INFO: stdout: "" Jan 31 02:03:09.902: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-2107 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 31 02:03:10.012: INFO: stderr: "" Jan 31 02:03:10.012: INFO: stdout: "update-demo-nautilus-jw2r7\nupdate-demo-nautilus-ljjg5\n" Jan 31 02:03:10.514: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-2107 get rc,svc -l name=update-demo --no-headers' Jan 31 02:03:10.819: INFO: stderr: "No resources found in kubectl-2107 namespace.\n" Jan 31 02:03:10.819: INFO: stdout: "" Jan 31 02:03:10.819: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-2107 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 31 02:03:10.949: INFO: stderr: "" Jan 31 02:03:10.949: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 02:03:10.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2107" for this suite. 
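Stripped of the polling, the scale exercise above reduces to (the flags are exactly the ones the suite runs):

  kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m
  kubectl get pods -l name=update-demo -o template \
    --template='{{range .items}}{{.metadata.name}} {{end}}'   # poll until one name remains
  kubectl scale rc update-demo-nautilus --replicas=2 --timeout=5m

Note the scale-down is not instantaneous: the pod list above still showed two names for several seconds before settling at one.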
• [SLOW TEST:26.252 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:297 should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":311,"completed":304,"skipped":5270,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 02:03:10.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 02:03:16.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6884" for this suite. 
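Adoption can be reproduced by creating the orphan pod first and the controller second; a sketch with illustrative names:

  # an orphan pod carrying the label the controller will select on
  kubectl run pod-adoption --image=busybox --labels=name=pod-adoption -- sh -c 'sleep 3600'
  cat <<'EOF' | kubectl create -f -
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: pod-adoption
  spec:
    replicas: 1
    selector:
      name: pod-adoption
    template:
      metadata:
        labels:
          name: pod-adoption
      spec:
        containers:
        - name: pod-adoption
          image: busybox
          command: ["sh", "-c", "sleep 3600"]
  EOF
  kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].name}'
  # expect: pod-adoption (the RC adopted the orphan instead of creating a new pod)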
• [SLOW TEST:5.303 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":311,"completed":305,"skipped":5280,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 02:03:16.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-5078 [It] Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-5078 STEP: Creating statefulset with conflicting port in namespace statefulset-5078 STEP: Waiting until pod test-pod starts running in namespace statefulset-5078 STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-5078 Jan 31 02:03:22.483: INFO: Observed stateful pod in namespace: statefulset-5078, name: ss-0, uid: 34194e56-1f71-40c0-881b-a7307bdb6ca4, status phase: Pending. Waiting for statefulset controller to delete. Jan 31 02:03:22.603: INFO: Observed stateful pod in namespace: statefulset-5078, name: ss-0, uid: 34194e56-1f71-40c0-881b-a7307bdb6ca4, status phase: Failed. Waiting for statefulset controller to delete. Jan 31 02:03:22.614: INFO: Observed stateful pod in namespace: statefulset-5078, name: ss-0, uid: 34194e56-1f71-40c0-881b-a7307bdb6ca4, status phase: Failed. Waiting for statefulset controller to delete.
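In kubectl terms, the eviction-and-recover flow this spec is observing looks roughly like this (pod and set names are the ones logged; the namespace is the ephemeral test namespace):

  kubectl -n statefulset-5078 delete pod test-pod        # free the conflicting host port
  kubectl -n statefulset-5078 get pod ss-0 -w            # watch Failed -> deleted -> Pending -> Running
  kubectl -n statefulset-5078 scale statefulset ss --replicas=0   # teardown, as the suite does below
  kubectl -n statefulset-5078 delete statefulset ss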
Jan 31 02:03:22.681: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-5078 STEP: Removing pod with conflicting port in namespace statefulset-5078 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-5078 and reaches the running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jan 31 02:03:28.834: INFO: Deleting all statefulsets in ns statefulset-5078 Jan 31 02:03:28.836: INFO: Scaling statefulset ss to 0 Jan 31 02:03:48.869: INFO: Waiting for statefulset status.replicas updated to 0 Jan 31 02:03:48.871: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 02:03:48.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5078" for this suite. • [SLOW TEST:32.634 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":311,"completed":306,"skipped":5298,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 02:03:48.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Jan 31 02:03:49.010: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-1718 6e59095d-1831-4970-90b1-9039cf69ec0c 1140870 0 2021-01-31 02:03:49 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2021-01-31 02:03:48 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hlpjm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hlpjm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hlpjm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 31 02:03:49.032: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Jan 31 02:03:51.036: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Jan 31 02:03:53.035: INFO: The status of Pod test-dns-nameservers is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... Jan 31 02:03:53.036: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-1718 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 31 02:03:53.036: INFO: >>> kubeConfig: /root/.kube/config I0131 02:03:53.071622 7 log.go:181] (0xc00705a790) (0xc0042d20a0) Create stream I0131 02:03:53.071649 7 log.go:181] (0xc00705a790) (0xc0042d20a0) Stream added, broadcasting: 1 I0131 02:03:53.073375 7 log.go:181] (0xc00705a790) Reply frame received for 1 I0131 02:03:53.073408 7 log.go:181] (0xc00705a790) (0xc001c0d540) Create stream I0131 02:03:53.073420 7 log.go:181] (0xc00705a790) (0xc001c0d540) Stream added, broadcasting: 3 I0131 02:03:53.074341 7 log.go:181] (0xc00705a790) Reply frame received for 3 I0131 02:03:53.074382 7 log.go:181] (0xc00705a790) (0xc001c0d5e0) Create stream I0131 02:03:53.074390 7 log.go:181] (0xc00705a790) (0xc001c0d5e0) Stream added, broadcasting: 5 I0131 02:03:53.075237 7 log.go:181] (0xc00705a790) Reply frame received for 5 I0131 02:03:53.174072 7 log.go:181] (0xc00705a790) Data frame received for 3 I0131 02:03:53.174099 7 log.go:181] (0xc001c0d540) (3) Data frame handling I0131 02:03:53.174113 7 log.go:181] (0xc001c0d540) (3) Data frame sent I0131 02:03:53.177539 7 log.go:181] (0xc00705a790) Data frame received for 5 I0131 02:03:53.177588 7 log.go:181] (0xc001c0d5e0) (5) Data frame handling I0131 02:03:53.177663 7 log.go:181] (0xc00705a790) Data frame received for 3 I0131 02:03:53.177687 7 log.go:181] (0xc001c0d540) (3) Data frame handling I0131 02:03:53.179589 7 log.go:181] (0xc00705a790) Data frame received for 1 I0131 02:03:53.179609 7 log.go:181] (0xc0042d20a0) (1) Data frame handling I0131 02:03:53.179623 7 log.go:181] (0xc0042d20a0) (1) Data frame sent I0131 02:03:53.179634 7 log.go:181] (0xc00705a790) (0xc0042d20a0) Stream removed, broadcasting: 1 I0131 02:03:53.179645 7 log.go:181] (0xc00705a790) Go away received I0131 02:03:53.179711 7 log.go:181] (0xc00705a790) (0xc0042d20a0) Stream removed, broadcasting: 1 I0131 02:03:53.179739 7 log.go:181] (0xc00705a790) (0xc001c0d540) Stream removed, broadcasting: 3 I0131 02:03:53.179757 7 log.go:181] (0xc00705a790) (0xc001c0d5e0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod...
Jan 31 02:03:53.179: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-1718 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 31 02:03:53.179: INFO: >>> kubeConfig: /root/.kube/config I0131 02:03:53.208170 7 log.go:181] (0xc002fa8a50) (0xc002441ae0) Create stream I0131 02:03:53.208210 7 log.go:181] (0xc002fa8a50) (0xc002441ae0) Stream added, broadcasting: 1 I0131 02:03:53.209972 7 log.go:181] (0xc002fa8a50) Reply frame received for 1 I0131 02:03:53.210036 7 log.go:181] (0xc002fa8a50) (0xc003994000) Create stream I0131 02:03:53.210060 7 log.go:181] (0xc002fa8a50) (0xc003994000) Stream added, broadcasting: 3 I0131 02:03:53.210879 7 log.go:181] (0xc002fa8a50) Reply frame received for 3 I0131 02:03:53.210910 7 log.go:181] (0xc002fa8a50) (0xc003d099a0) Create stream I0131 02:03:53.210919 7 log.go:181] (0xc002fa8a50) (0xc003d099a0) Stream added, broadcasting: 5 I0131 02:03:53.211672 7 log.go:181] (0xc002fa8a50) Reply frame received for 5 I0131 02:03:53.283621 7 log.go:181] (0xc002fa8a50) Data frame received for 3 I0131 02:03:53.283654 7 log.go:181] (0xc003994000) (3) Data frame handling I0131 02:03:53.283673 7 log.go:181] (0xc003994000) (3) Data frame sent I0131 02:03:53.285453 7 log.go:181] (0xc002fa8a50) Data frame received for 3 I0131 02:03:53.285487 7 log.go:181] (0xc003994000) (3) Data frame handling I0131 02:03:53.285996 7 log.go:181] (0xc002fa8a50) Data frame received for 5 I0131 02:03:53.286026 7 log.go:181] (0xc003d099a0) (5) Data frame handling I0131 02:03:53.287742 7 log.go:181] (0xc002fa8a50) Data frame received for 1 I0131 02:03:53.287774 7 log.go:181] (0xc002441ae0) (1) Data frame handling I0131 02:03:53.287791 7 log.go:181] (0xc002441ae0) (1) Data frame sent I0131 02:03:53.287806 7 log.go:181] (0xc002fa8a50) (0xc002441ae0) Stream removed, broadcasting: 1 I0131 02:03:53.287848 7 log.go:181] (0xc002fa8a50) Go away received I0131 02:03:53.287972 7 log.go:181] (0xc002fa8a50) (0xc002441ae0) Stream removed, broadcasting: 1 I0131 02:03:53.288011 7 log.go:181] (0xc002fa8a50) (0xc003994000) Stream removed, broadcasting: 3 I0131 02:03:53.288035 7 log.go:181] (0xc002fa8a50) (0xc003d099a0) Stream removed, broadcasting: 5 Jan 31 02:03:53.288: INFO: Deleting pod test-dns-nameservers... [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 02:03:53.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1718" for this suite. 
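The custom-DNS behavior this spec verifies comes down to one small pod spec. A minimal client-go sketch of that pod follows; the pod/container names, the agnhost image, nameserver 1.1.1.1 and search domain resolv.conf.local are taken from the spec dump above, while the kubeconfig wiring is an assumption for a standalone example:

    package main

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumed wiring: the same kubeconfig path the suite logs.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "test-dns-nameservers"},
    		Spec: corev1.PodSpec{
    			Containers: []corev1.Container{{
    				Name:  "agnhost-container",
    				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.21",
    				Args:  []string{"pause"},
    			}},
    			// DNSPolicy "None" makes the kubelet ignore cluster DNS and
    			// build the pod's resolv.conf purely from DNSConfig below,
    			// which is what the /agnhost dns-suffix and dns-server-list
    			// probes then read back.
    			DNSPolicy: corev1.DNSNone,
    			DNSConfig: &corev1.PodDNSConfig{
    				Nameservers: []string{"1.1.1.1"},
    				Searches:    []string{"resolv.conf.local"},
    			},
    		},
    	}
    	if _, err := client.CoreV1().Pods("dns-1718").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
    		panic(err)
    	}
    }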
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":311,"completed":307,"skipped":5310,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 02:03:53.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test downward API volume plugin Jan 31 02:03:53.778: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cf2977ae-9b2c-4140-a336-d8cf779f11fe" in namespace "projected-7558" to be "Succeeded or Failed" Jan 31 02:03:53.924: INFO: Pod "downwardapi-volume-cf2977ae-9b2c-4140-a336-d8cf779f11fe": Phase="Pending", Reason="", readiness=false. Elapsed: 146.164401ms Jan 31 02:03:55.927: INFO: Pod "downwardapi-volume-cf2977ae-9b2c-4140-a336-d8cf779f11fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.149090571s Jan 31 02:03:57.930: INFO: Pod "downwardapi-volume-cf2977ae-9b2c-4140-a336-d8cf779f11fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.152584316s STEP: Saw pod success Jan 31 02:03:57.930: INFO: Pod "downwardapi-volume-cf2977ae-9b2c-4140-a336-d8cf779f11fe" satisfied condition "Succeeded or Failed" Jan 31 02:03:57.933: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-cf2977ae-9b2c-4140-a336-d8cf779f11fe container client-container: STEP: delete the pod Jan 31 02:03:58.174: INFO: Waiting for pod downwardapi-volume-cf2977ae-9b2c-4140-a336-d8cf779f11fe to disappear Jan 31 02:03:58.374: INFO: Pod downwardapi-volume-cf2977ae-9b2c-4140-a336-d8cf779f11fe no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 02:03:58.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7558" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":311,"completed":308,"skipped":5336,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 02:03:58.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating secret with name projected-secret-test-00f44d0a-f3f6-4077-9b70-8abef191876e STEP: Creating a pod to test consume secrets Jan 31 02:03:58.495: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-249f026a-6ca4-46d5-ac82-e5cd687c0058" in namespace "projected-3382" to be "Succeeded or Failed" Jan 31 02:03:58.578: INFO: Pod "pod-projected-secrets-249f026a-6ca4-46d5-ac82-e5cd687c0058": Phase="Pending", Reason="", readiness=false. Elapsed: 83.759821ms Jan 31 02:04:00.583: INFO: Pod "pod-projected-secrets-249f026a-6ca4-46d5-ac82-e5cd687c0058": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0878809s Jan 31 02:04:02.588: INFO: Pod "pod-projected-secrets-249f026a-6ca4-46d5-ac82-e5cd687c0058": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.09296447s STEP: Saw pod success Jan 31 02:04:02.588: INFO: Pod "pod-projected-secrets-249f026a-6ca4-46d5-ac82-e5cd687c0058" satisfied condition "Succeeded or Failed" Jan 31 02:04:02.591: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-249f026a-6ca4-46d5-ac82-e5cd687c0058 container secret-volume-test: STEP: delete the pod Jan 31 02:04:02.636: INFO: Waiting for pod pod-projected-secrets-249f026a-6ca4-46d5-ac82-e5cd687c0058 to disappear Jan 31 02:04:02.649: INFO: Pod pod-projected-secrets-249f026a-6ca4-46d5-ac82-e5cd687c0058 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 02:04:02.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3382" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":311,"completed":309,"skipped":5350,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 02:04:02.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test emptydir 0777 on tmpfs Jan 31 02:04:02.755: INFO: Waiting up to 5m0s for pod "pod-37aea816-aabe-4442-a574-7856a3edf7e4" in namespace "emptydir-4240" to be "Succeeded or Failed" Jan 31 02:04:02.791: INFO: Pod "pod-37aea816-aabe-4442-a574-7856a3edf7e4": Phase="Pending", Reason="", readiness=false. Elapsed: 35.552957ms Jan 31 02:04:04.864: INFO: Pod "pod-37aea816-aabe-4442-a574-7856a3edf7e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108736405s Jan 31 02:04:06.912: INFO: Pod "pod-37aea816-aabe-4442-a574-7856a3edf7e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.15695783s STEP: Saw pod success Jan 31 02:04:06.912: INFO: Pod "pod-37aea816-aabe-4442-a574-7856a3edf7e4" satisfied condition "Succeeded or Failed" Jan 31 02:04:06.916: INFO: Trying to get logs from node latest-worker pod pod-37aea816-aabe-4442-a574-7856a3edf7e4 container test-container: STEP: delete the pod Jan 31 02:04:06.938: INFO: Waiting for pod pod-37aea816-aabe-4442-a574-7856a3edf7e4 to disappear Jan 31 02:04:06.942: INFO: Pod pod-37aea816-aabe-4442-a574-7856a3edf7e4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 02:04:06.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4240" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":310,"skipped":5375,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 31 02:04:06.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Jan 31 02:04:07.409: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4612 66712b2b-da86-4bf0-93ea-317f677df41b 1141028 0 2021-01-31 02:04:07 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-01-31 02:04:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jan 31 02:04:07.410: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4612 66712b2b-da86-4bf0-93ea-317f677df41b 1141029 0 2021-01-31 02:04:07 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-01-31 02:04:07 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 31 02:04:07.410: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4612 66712b2b-da86-4bf0-93ea-317f677df41b 1141030 0 2021-01-31 02:04:07 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-01-31 02:04:07 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Jan 31 02:04:17.468: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4612 66712b2b-da86-4bf0-93ea-317f677df41b 1141073 0 2021-01-31 02:04:07 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-01-31 02:04:07 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 31 02:04:17.468: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4612 66712b2b-da86-4bf0-93ea-317f677df41b 1141074 0 2021-01-31 02:04:07 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-01-31 02:04:07 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 31 02:04:17.468: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4612 66712b2b-da86-4bf0-93ea-317f677df41b 1141075 0 2021-01-31 02:04:07 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-01-31 02:04:07 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 31 02:04:17.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4612" for this suite. • [SLOW TEST:10.528 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":311,"completed":311,"skipped":5382,"failed":0} SSSSSSSSSSJan 31 02:04:17.478: INFO: Running AfterSuite actions on all nodes Jan 31 02:04:17.478: INFO: Running AfterSuite actions on node 1 Jan 31 02:04:17.478: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml {"msg":"Test Suite completed","total":311,"completed":311,"skipped":5392,"failed":0} Ran 311 of 5703 Specs in 7492.244 seconds SUCCESS! -- 311 Passed | 0 Failed | 0 Pending | 5392 Skipped PASS