I0112 22:31:33.121071 7 test_context.go:457] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0112 22:31:33.121337 7 e2e.go:129] Starting e2e run "a764d39e-9fd6-4322-bfe7-985107111e58" on Ginkgo node 1
{"msg":"Test Suite starting","total":309,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1610490691 - Will randomize all specs
Will run 309 of 5667 specs
Jan 12 22:31:33.194: INFO: >>> kubeConfig: /root/.kube/config
Jan 12 22:31:33.198: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 12 22:31:33.218: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 12 22:31:33.250: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 12 22:31:33.250: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 12 22:31:33.250: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 12 22:31:33.257: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jan 12 22:31:33.257: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 12 22:31:33.257: INFO: e2e test version: v1.20.1
Jan 12 22:31:33.258: INFO: kube-apiserver version: v1.20.0
Jan 12 22:31:33.259: INFO: >>> kubeConfig: /root/.kube/config
Jan 12 22:31:33.264: INFO: Cluster IP family: ipv4
SSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 12 22:31:33.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
Jan 12 22:31:33.343: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name projected-secret-test-b379f53e-9fa2-4c9d-8cd8-d79fbfacf41a
STEP: Creating a pod to test consume secrets
Jan 12 22:31:33.359: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-413a9b4d-1423-408e-a330-f58917b70a70" in namespace "projected-3225" to be "Succeeded or Failed"
Jan 12 22:31:33.363: INFO: Pod "pod-projected-secrets-413a9b4d-1423-408e-a330-f58917b70a70": Phase="Pending", Reason="", readiness=false. Elapsed: 4.226336ms
Jan 12 22:31:35.575: INFO: Pod "pod-projected-secrets-413a9b4d-1423-408e-a330-f58917b70a70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215641616s
Jan 12 22:31:37.579: INFO: Pod "pod-projected-secrets-413a9b4d-1423-408e-a330-f58917b70a70": Phase="Running", Reason="", readiness=true. Elapsed: 4.219616345s
Jan 12 22:31:39.582: INFO: Pod "pod-projected-secrets-413a9b4d-1423-408e-a330-f58917b70a70": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.222615886s
STEP: Saw pod success
Jan 12 22:31:39.582: INFO: Pod "pod-projected-secrets-413a9b4d-1423-408e-a330-f58917b70a70" satisfied condition "Succeeded or Failed"
Jan 12 22:31:39.584: INFO: Trying to get logs from node leguer-worker2 pod pod-projected-secrets-413a9b4d-1423-408e-a330-f58917b70a70 container secret-volume-test:
STEP: delete the pod
Jan 12 22:31:39.696: INFO: Waiting for pod pod-projected-secrets-413a9b4d-1423-408e-a330-f58917b70a70 to disappear
Jan 12 22:31:39.712: INFO: Pod pod-projected-secrets-413a9b4d-1423-408e-a330-f58917b70a70 no longer exists
[AfterEach] [sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 12 22:31:39.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3225" for this suite.
• [SLOW TEST:6.456 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":309,"completed":1,"skipped":8,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 12 22:31:39.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating the pod with failed condition
STEP: updating the pod
Jan 12 22:33:40.368: INFO: Successfully updated pod "var-expansion-788e1828-375f-41aa-86a2-6f60162ac6aa"
STEP: waiting for pod running
STEP: deleting the pod gracefully
Jan 12 22:33:42.382: INFO: Deleting pod "var-expansion-788e1828-375f-41aa-86a2-6f60162ac6aa" in namespace "var-expansion-3423"
Jan 12 22:33:42.387: INFO: Wait up to 5m0s for pod "var-expansion-788e1828-375f-41aa-86a2-6f60162ac6aa" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 12 22:34:20.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3423" for this suite.
• [SLOW TEST:160.695 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":309,"completed":2,"skipped":30,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:34:20.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod pod-subpath-test-secret-2zlq STEP: Creating a pod to test atomic-volume-subpath Jan 12 22:34:20.603: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-2zlq" in namespace "subpath-7796" to be "Succeeded or Failed" Jan 12 22:34:20.606: INFO: Pod "pod-subpath-test-secret-2zlq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.618182ms Jan 12 22:34:22.611: INFO: Pod "pod-subpath-test-secret-2zlq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007324666s Jan 12 22:34:24.616: INFO: Pod "pod-subpath-test-secret-2zlq": Phase="Running", Reason="", readiness=true. Elapsed: 4.012585834s Jan 12 22:34:26.621: INFO: Pod "pod-subpath-test-secret-2zlq": Phase="Running", Reason="", readiness=true. Elapsed: 6.017330212s Jan 12 22:34:28.625: INFO: Pod "pod-subpath-test-secret-2zlq": Phase="Running", Reason="", readiness=true. Elapsed: 8.022134116s Jan 12 22:34:30.630: INFO: Pod "pod-subpath-test-secret-2zlq": Phase="Running", Reason="", readiness=true. Elapsed: 10.026803024s Jan 12 22:34:32.636: INFO: Pod "pod-subpath-test-secret-2zlq": Phase="Running", Reason="", readiness=true. Elapsed: 12.032726118s Jan 12 22:34:34.642: INFO: Pod "pod-subpath-test-secret-2zlq": Phase="Running", Reason="", readiness=true. Elapsed: 14.038731724s Jan 12 22:34:36.646: INFO: Pod "pod-subpath-test-secret-2zlq": Phase="Running", Reason="", readiness=true. Elapsed: 16.042494629s Jan 12 22:34:38.651: INFO: Pod "pod-subpath-test-secret-2zlq": Phase="Running", Reason="", readiness=true. Elapsed: 18.047880953s Jan 12 22:34:40.656: INFO: Pod "pod-subpath-test-secret-2zlq": Phase="Running", Reason="", readiness=true. Elapsed: 20.052543146s Jan 12 22:34:42.662: INFO: Pod "pod-subpath-test-secret-2zlq": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.058323823s Jan 12 22:34:44.666: INFO: Pod "pod-subpath-test-secret-2zlq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.062674395s STEP: Saw pod success Jan 12 22:34:44.666: INFO: Pod "pod-subpath-test-secret-2zlq" satisfied condition "Succeeded or Failed" Jan 12 22:34:44.669: INFO: Trying to get logs from node leguer-worker2 pod pod-subpath-test-secret-2zlq container test-container-subpath-secret-2zlq: STEP: delete the pod Jan 12 22:34:44.905: INFO: Waiting for pod pod-subpath-test-secret-2zlq to disappear Jan 12 22:34:45.016: INFO: Pod pod-subpath-test-secret-2zlq no longer exists STEP: Deleting pod pod-subpath-test-secret-2zlq Jan 12 22:34:45.016: INFO: Deleting pod "pod-subpath-test-secret-2zlq" in namespace "subpath-7796" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:34:45.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7796" for this suite. • [SLOW TEST:24.610 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":309,"completed":3,"skipped":59,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:34:45.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 12 22:34:45.084: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:34:49.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9864" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":309,"completed":4,"skipped":117,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:34:49.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 12 22:34:49.351: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jan 12 22:34:52.946: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4505 --namespace=crd-publish-openapi-4505 create -f -' Jan 12 22:34:56.605: INFO: stderr: "" Jan 12 22:34:56.605: INFO: stdout: "e2e-test-crd-publish-openapi-6074-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jan 12 22:34:56.606: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4505 --namespace=crd-publish-openapi-4505 delete e2e-test-crd-publish-openapi-6074-crds test-cr' Jan 12 22:34:56.714: INFO: stderr: "" Jan 12 22:34:56.714: INFO: stdout: "e2e-test-crd-publish-openapi-6074-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Jan 12 22:34:56.715: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4505 --namespace=crd-publish-openapi-4505 apply -f -' Jan 12 22:34:57.016: INFO: stderr: "" Jan 12 22:34:57.016: INFO: stdout: "e2e-test-crd-publish-openapi-6074-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jan 12 22:34:57.016: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4505 --namespace=crd-publish-openapi-4505 delete e2e-test-crd-publish-openapi-6074-crds test-cr' Jan 12 22:34:57.114: INFO: stderr: "" Jan 12 22:34:57.114: INFO: stdout: "e2e-test-crd-publish-openapi-6074-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jan 12 22:34:57.114: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4505 explain e2e-test-crd-publish-openapi-6074-crds' Jan 12 22:34:57.401: INFO: stderr: "" Jan 12 22:34:57.401: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6074-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:35:00.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4505" for this suite. • [SLOW TEST:11.776 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":309,"completed":5,"skipped":156,"failed":0} [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:35:00.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:35:01.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5804" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":309,"completed":6,"skipped":156,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:35:01.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Performing setup for networking test in namespace pod-network-test-2500 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 12 22:35:01.243: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jan 12 22:35:01.306: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 12 22:35:03.636: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 12 22:35:05.357: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 12 22:35:07.321: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 12 22:35:09.312: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 12 22:35:11.311: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 12 22:35:13.310: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 12 22:35:15.311: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 12 22:35:17.321: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 12 22:35:19.314: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 12 22:35:21.310: INFO: The status of Pod netserver-0 is Running (Ready = true) Jan 12 22:35:21.317: INFO: The status of Pod netserver-1 is Running (Ready = false) Jan 12 22:35:23.323: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jan 12 22:35:29.407: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Jan 12 22:35:29.407: INFO: Going to poll 10.244.2.162 on port 8081 at least 0 times, with a maximum of 34 tries before failing Jan 12 22:35:29.410: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.162 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2500 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 12 22:35:29.410: INFO: >>> kubeConfig: /root/.kube/config I0112 22:35:29.450736 7 log.go:181] (0xc0063a24d0) (0xc000cba460) Create stream I0112 22:35:29.450767 7 log.go:181] (0xc0063a24d0) (0xc000cba460) Stream added, broadcasting: 1 I0112 22:35:29.452755 7 log.go:181] (0xc0063a24d0) Reply frame received for 1 I0112 22:35:29.452791 7 log.go:181] (0xc0063a24d0) 
(0xc002ba4fa0) Create stream I0112 22:35:29.452806 7 log.go:181] (0xc0063a24d0) (0xc002ba4fa0) Stream added, broadcasting: 3 I0112 22:35:29.453824 7 log.go:181] (0xc0063a24d0) Reply frame received for 3 I0112 22:35:29.453853 7 log.go:181] (0xc0063a24d0) (0xc000cba5a0) Create stream I0112 22:35:29.453864 7 log.go:181] (0xc0063a24d0) (0xc000cba5a0) Stream added, broadcasting: 5 I0112 22:35:29.454894 7 log.go:181] (0xc0063a24d0) Reply frame received for 5 I0112 22:35:30.565713 7 log.go:181] (0xc0063a24d0) Data frame received for 3 I0112 22:35:30.565837 7 log.go:181] (0xc002ba4fa0) (3) Data frame handling I0112 22:35:30.565886 7 log.go:181] (0xc002ba4fa0) (3) Data frame sent I0112 22:35:30.565920 7 log.go:181] (0xc0063a24d0) Data frame received for 3 I0112 22:35:30.565952 7 log.go:181] (0xc002ba4fa0) (3) Data frame handling I0112 22:35:30.566053 7 log.go:181] (0xc0063a24d0) Data frame received for 5 I0112 22:35:30.566097 7 log.go:181] (0xc000cba5a0) (5) Data frame handling I0112 22:35:30.567987 7 log.go:181] (0xc0063a24d0) Data frame received for 1 I0112 22:35:30.568025 7 log.go:181] (0xc000cba460) (1) Data frame handling I0112 22:35:30.568058 7 log.go:181] (0xc000cba460) (1) Data frame sent I0112 22:35:30.568080 7 log.go:181] (0xc0063a24d0) (0xc000cba460) Stream removed, broadcasting: 1 I0112 22:35:30.568101 7 log.go:181] (0xc0063a24d0) Go away received I0112 22:35:30.568583 7 log.go:181] (0xc0063a24d0) (0xc000cba460) Stream removed, broadcasting: 1 I0112 22:35:30.568607 7 log.go:181] (0xc0063a24d0) (0xc002ba4fa0) Stream removed, broadcasting: 3 I0112 22:35:30.568618 7 log.go:181] (0xc0063a24d0) (0xc000cba5a0) Stream removed, broadcasting: 5 Jan 12 22:35:30.568: INFO: Found all 1 expected endpoints: [netserver-0] Jan 12 22:35:30.568: INFO: Going to poll 10.244.1.196 on port 8081 at least 0 times, with a maximum of 34 tries before failing Jan 12 22:35:30.573: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.196 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2500 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 12 22:35:30.573: INFO: >>> kubeConfig: /root/.kube/config I0112 22:35:30.612592 7 log.go:181] (0xc0070ba420) (0xc0005dc460) Create stream I0112 22:35:30.612634 7 log.go:181] (0xc0070ba420) (0xc0005dc460) Stream added, broadcasting: 1 I0112 22:35:30.617937 7 log.go:181] (0xc0070ba420) Reply frame received for 1 I0112 22:35:30.617990 7 log.go:181] (0xc0070ba420) (0xc0005dc500) Create stream I0112 22:35:30.618008 7 log.go:181] (0xc0070ba420) (0xc0005dc500) Stream added, broadcasting: 3 I0112 22:35:30.619403 7 log.go:181] (0xc0070ba420) Reply frame received for 3 I0112 22:35:30.619442 7 log.go:181] (0xc0070ba420) (0xc000cba640) Create stream I0112 22:35:30.619466 7 log.go:181] (0xc0070ba420) (0xc000cba640) Stream added, broadcasting: 5 I0112 22:35:30.620382 7 log.go:181] (0xc0070ba420) Reply frame received for 5 I0112 22:35:31.718184 7 log.go:181] (0xc0070ba420) Data frame received for 5 I0112 22:35:31.718263 7 log.go:181] (0xc000cba640) (5) Data frame handling I0112 22:35:31.718319 7 log.go:181] (0xc0070ba420) Data frame received for 3 I0112 22:35:31.718404 7 log.go:181] (0xc0005dc500) (3) Data frame handling I0112 22:35:31.718469 7 log.go:181] (0xc0005dc500) (3) Data frame sent I0112 22:35:31.718502 7 log.go:181] (0xc0070ba420) Data frame received for 3 I0112 22:35:31.718525 7 log.go:181] (0xc0005dc500) (3) Data frame handling I0112 22:35:31.720362 7 
log.go:181] (0xc0070ba420) Data frame received for 1 I0112 22:35:31.720402 7 log.go:181] (0xc0005dc460) (1) Data frame handling I0112 22:35:31.720436 7 log.go:181] (0xc0005dc460) (1) Data frame sent I0112 22:35:31.720461 7 log.go:181] (0xc0070ba420) (0xc0005dc460) Stream removed, broadcasting: 1 I0112 22:35:31.720490 7 log.go:181] (0xc0070ba420) Go away received I0112 22:35:31.720588 7 log.go:181] (0xc0070ba420) (0xc0005dc460) Stream removed, broadcasting: 1 I0112 22:35:31.720630 7 log.go:181] (0xc0070ba420) (0xc0005dc500) Stream removed, broadcasting: 3 I0112 22:35:31.720659 7 log.go:181] (0xc0070ba420) (0xc000cba640) Stream removed, broadcasting: 5 Jan 12 22:35:31.720: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:35:31.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2500" for this suite. • [SLOW TEST:30.627 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:27 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:30 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":7,"skipped":209,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:35:31.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 12 22:35:32.365: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 12 22:35:34.374: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746087732, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746087732, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", 
Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746087732, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746087732, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 12 22:35:36.379: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746087732, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746087732, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746087732, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746087732, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 12 22:35:40.652: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Jan 12 22:35:40.833: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:35:40.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1242" for this suite. STEP: Destroying namespace "webhook-1242-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:9.829 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":309,"completed":8,"skipped":236,"failed":0} SSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:35:41.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:35:41.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9685" for this suite. 
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":309,"completed":9,"skipped":240,"failed":0} SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:35:41.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name secret-test-1aa35292-f283-460b-87e1-49d0c001fd10 STEP: Creating a pod to test consume secrets Jan 12 22:35:42.272: INFO: Waiting up to 5m0s for pod "pod-secrets-bc1f1232-9030-41c3-803c-50d4dda080ab" in namespace "secrets-2701" to be "Succeeded or Failed" Jan 12 22:35:42.351: INFO: Pod "pod-secrets-bc1f1232-9030-41c3-803c-50d4dda080ab": Phase="Pending", Reason="", readiness=false. Elapsed: 79.57919ms Jan 12 22:35:44.355: INFO: Pod "pod-secrets-bc1f1232-9030-41c3-803c-50d4dda080ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083643477s Jan 12 22:35:46.394: INFO: Pod "pod-secrets-bc1f1232-9030-41c3-803c-50d4dda080ab": Phase="Running", Reason="", readiness=true. Elapsed: 4.12196632s Jan 12 22:35:48.399: INFO: Pod "pod-secrets-bc1f1232-9030-41c3-803c-50d4dda080ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.127497014s STEP: Saw pod success Jan 12 22:35:48.399: INFO: Pod "pod-secrets-bc1f1232-9030-41c3-803c-50d4dda080ab" satisfied condition "Succeeded or Failed" Jan 12 22:35:48.402: INFO: Trying to get logs from node leguer-worker pod pod-secrets-bc1f1232-9030-41c3-803c-50d4dda080ab container secret-volume-test: STEP: delete the pod Jan 12 22:35:48.462: INFO: Waiting for pod pod-secrets-bc1f1232-9030-41c3-803c-50d4dda080ab to disappear Jan 12 22:35:48.561: INFO: Pod pod-secrets-bc1f1232-9030-41c3-803c-50d4dda080ab no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:35:48.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2701" for this suite. 
• [SLOW TEST:6.724 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
should be consumable from pods in volume [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":309,"completed":10,"skipped":247,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 12 22:35:48.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[It] should create and stop a working application [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating all guestbook components
Jan 12 22:35:48.800: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-replica
  labels:
    app: agnhost
    role: replica
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: replica
    tier: backend

Jan 12 22:35:48.800: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8786 create -f -'
Jan 12 22:35:49.436: INFO: stderr: ""
Jan 12 22:35:49.436: INFO: stdout: "service/agnhost-replica created\n"
Jan 12 22:35:49.436: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-primary
  labels:
    app: agnhost
    role: primary
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: primary
    tier: backend

Jan 12 22:35:49.437: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8786 create -f -'
Jan 12 22:35:49.871: INFO: stderr: ""
Jan 12 22:35:49.871: INFO: stdout: "service/agnhost-primary created\n"
Jan 12 22:35:49.872: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan 12 22:35:49.872: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8786 create -f -'
Jan 12 22:35:50.187: INFO: stderr: ""
Jan 12 22:35:50.187: INFO: stdout: "service/frontend created\n"
Jan 12 22:35:50.187: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: k8s.gcr.io/e2e-test-images/agnhost:2.21
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Jan 12 22:35:50.187: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8786 create -f -'
Jan 12 22:35:50.485: INFO: stderr: ""
Jan 12 22:35:50.485: INFO: stdout: "deployment.apps/frontend created\n"
Jan 12 22:35:50.485: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-primary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: primary
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: primary
        tier: backend
    spec:
      containers:
      - name: primary
        image: k8s.gcr.io/e2e-test-images/agnhost:2.21
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan 12 22:35:50.485: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8786 create -f -'
Jan 12 22:35:50.825: INFO: stderr: ""
Jan 12 22:35:50.825: INFO: stdout: "deployment.apps/agnhost-primary created\n"
Jan 12 22:35:50.825: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-replica
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: replica
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: replica
        tier: backend
    spec:
      containers:
      - name: replica
        image: k8s.gcr.io/e2e-test-images/agnhost:2.21
        args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan 12 22:35:50.825: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8786 create -f -'
Jan 12 22:35:51.154: INFO: stderr: ""
Jan 12 22:35:51.154: INFO: stdout: "deployment.apps/agnhost-replica created\n"
STEP: validating guestbook app
Jan 12 22:35:51.154: INFO: Waiting for all frontend pods to be Running.
Jan 12 22:36:01.205: INFO: Waiting for frontend to serve content.
Jan 12 22:36:01.215: INFO: Trying to add a new entry to the guestbook.
Jan 12 22:36:01.225: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jan 12 22:36:01.231: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8786 delete --grace-period=0 --force -f -'
Jan 12 22:36:01.387: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 12 22:36:01.387: INFO: stdout: "service \"agnhost-replica\" force deleted\n"
STEP: using delete to clean up resources
Jan 12 22:36:01.388: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8786 delete --grace-period=0 --force -f -'
Jan 12 22:36:01.642: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 12 22:36:01.642: INFO: stdout: "service \"agnhost-primary\" force deleted\n"
STEP: using delete to clean up resources
Jan 12 22:36:01.642: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8786 delete --grace-period=0 --force -f -'
Jan 12 22:36:01.805: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 12 22:36:01.805: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 12 22:36:01.806: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8786 delete --grace-period=0 --force -f -'
Jan 12 22:36:01.903: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 12 22:36:01.903: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 12 22:36:01.903: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8786 delete --grace-period=0 --force -f -'
Jan 12 22:36:02.530: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 12 22:36:02.530: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n"
STEP: using delete to clean up resources
Jan 12 22:36:02.530: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8786 delete --grace-period=0 --force -f -'
Jan 12 22:36:02.650: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 12 22:36:02.651: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 12 22:36:02.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8786" for this suite.
• [SLOW TEST:14.171 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:342 should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":309,"completed":11,"skipped":266,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:36:02.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a service externalname-service with the type=ExternalName in namespace services-1432 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-1432 I0112 22:36:03.718898 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-1432, replica count: 2 I0112 22:36:06.769361 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0112 22:36:09.769646 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 12 22:36:09.769: INFO: Creating new exec pod Jan 12 22:36:14.785: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-1432 exec execpodljt2z -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Jan 12 22:36:15.037: INFO: stderr: "I0112 22:36:14.929819 331 log.go:181] (0xc00003a420) (0xc000656000) Create stream\nI0112 22:36:14.929899 331 log.go:181] (0xc00003a420) (0xc000656000) Stream added, broadcasting: 1\nI0112 22:36:14.932476 331 log.go:181] (0xc00003a420) Reply frame received for 1\nI0112 22:36:14.932533 331 log.go:181] (0xc00003a420) (0xc000317720) Create stream\nI0112 22:36:14.932550 331 log.go:181] (0xc00003a420) (0xc000317720) Stream added, broadcasting: 3\nI0112 22:36:14.933808 331 log.go:181] (0xc00003a420) Reply frame received for 3\nI0112 22:36:14.933851 331 log.go:181] (0xc00003a420) (0xc000317b80) Create stream\nI0112 22:36:14.933865 331 log.go:181] (0xc00003a420) (0xc000317b80) Stream added, broadcasting: 5\nI0112 22:36:14.935014 331 log.go:181] (0xc00003a420) Reply frame received for 5\nI0112 22:36:15.015799 331 
log.go:181] (0xc00003a420) Data frame received for 5\nI0112 22:36:15.015854 331 log.go:181] (0xc000317b80) (5) Data frame handling\nI0112 22:36:15.015894 331 log.go:181] (0xc000317b80) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0112 22:36:15.027575 331 log.go:181] (0xc00003a420) Data frame received for 5\nI0112 22:36:15.027610 331 log.go:181] (0xc000317b80) (5) Data frame handling\nI0112 22:36:15.027635 331 log.go:181] (0xc000317b80) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0112 22:36:15.027840 331 log.go:181] (0xc00003a420) Data frame received for 3\nI0112 22:36:15.027873 331 log.go:181] (0xc000317720) (3) Data frame handling\nI0112 22:36:15.028054 331 log.go:181] (0xc00003a420) Data frame received for 5\nI0112 22:36:15.028078 331 log.go:181] (0xc000317b80) (5) Data frame handling\nI0112 22:36:15.030073 331 log.go:181] (0xc00003a420) Data frame received for 1\nI0112 22:36:15.030092 331 log.go:181] (0xc000656000) (1) Data frame handling\nI0112 22:36:15.030103 331 log.go:181] (0xc000656000) (1) Data frame sent\nI0112 22:36:15.030114 331 log.go:181] (0xc00003a420) (0xc000656000) Stream removed, broadcasting: 1\nI0112 22:36:15.030130 331 log.go:181] (0xc00003a420) Go away received\nI0112 22:36:15.030717 331 log.go:181] (0xc00003a420) (0xc000656000) Stream removed, broadcasting: 1\nI0112 22:36:15.030757 331 log.go:181] (0xc00003a420) (0xc000317720) Stream removed, broadcasting: 3\nI0112 22:36:15.030780 331 log.go:181] (0xc00003a420) (0xc000317b80) Stream removed, broadcasting: 5\n" Jan 12 22:36:15.037: INFO: stdout: "" Jan 12 22:36:15.037: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-1432 exec execpodljt2z -- /bin/sh -x -c nc -zv -t -w 2 10.96.40.254 80' Jan 12 22:36:15.257: INFO: stderr: "I0112 22:36:15.186201 349 log.go:181] (0xc00003a420) (0xc000f06000) Create stream\nI0112 22:36:15.186274 349 log.go:181] (0xc00003a420) (0xc000f06000) Stream added, broadcasting: 1\nI0112 22:36:15.188395 349 log.go:181] (0xc00003a420) Reply frame received for 1\nI0112 22:36:15.188434 349 log.go:181] (0xc00003a420) (0xc0003b4640) Create stream\nI0112 22:36:15.188447 349 log.go:181] (0xc00003a420) (0xc0003b4640) Stream added, broadcasting: 3\nI0112 22:36:15.189308 349 log.go:181] (0xc00003a420) Reply frame received for 3\nI0112 22:36:15.189345 349 log.go:181] (0xc00003a420) (0xc0003b54a0) Create stream\nI0112 22:36:15.189357 349 log.go:181] (0xc00003a420) (0xc0003b54a0) Stream added, broadcasting: 5\nI0112 22:36:15.190155 349 log.go:181] (0xc00003a420) Reply frame received for 5\nI0112 22:36:15.249095 349 log.go:181] (0xc00003a420) Data frame received for 3\nI0112 22:36:15.249141 349 log.go:181] (0xc0003b4640) (3) Data frame handling\nI0112 22:36:15.249194 349 log.go:181] (0xc00003a420) Data frame received for 5\nI0112 22:36:15.249228 349 log.go:181] (0xc0003b54a0) (5) Data frame handling\nI0112 22:36:15.249251 349 log.go:181] (0xc0003b54a0) (5) Data frame sent\nI0112 22:36:15.249273 349 log.go:181] (0xc00003a420) Data frame received for 5\nI0112 22:36:15.249290 349 log.go:181] (0xc0003b54a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.40.254 80\nConnection to 10.96.40.254 80 port [tcp/http] succeeded!\nI0112 22:36:15.250600 349 log.go:181] (0xc00003a420) Data frame received for 1\nI0112 22:36:15.250631 349 log.go:181] (0xc000f06000) (1) Data frame handling\nI0112 22:36:15.250645 349 log.go:181] (0xc000f06000) (1) Data frame sent\nI0112 22:36:15.250660 349 
log.go:181] (0xc00003a420) (0xc000f06000) Stream removed, broadcasting: 1\nI0112 22:36:15.250691 349 log.go:181] (0xc00003a420) Go away received\nI0112 22:36:15.251113 349 log.go:181] (0xc00003a420) (0xc000f06000) Stream removed, broadcasting: 1\nI0112 22:36:15.251143 349 log.go:181] (0xc00003a420) (0xc0003b4640) Stream removed, broadcasting: 3\nI0112 22:36:15.251154 349 log.go:181] (0xc00003a420) (0xc0003b54a0) Stream removed, broadcasting: 5\n" Jan 12 22:36:15.257: INFO: stdout: "" Jan 12 22:36:15.257: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:36:15.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1432" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:12.629 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":309,"completed":12,"skipped":293,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:36:15.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 12 22:36:15.438: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b47cf463-8793-496c-8925-81c6d48debfc" in namespace "downward-api-8544" to be "Succeeded or Failed" Jan 12 22:36:15.454: INFO: Pod "downwardapi-volume-b47cf463-8793-496c-8925-81c6d48debfc": Phase="Pending", Reason="", readiness=false. Elapsed: 16.390818ms Jan 12 22:36:17.458: INFO: Pod "downwardapi-volume-b47cf463-8793-496c-8925-81c6d48debfc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020522957s Jan 12 22:36:19.463: INFO: Pod "downwardapi-volume-b47cf463-8793-496c-8925-81c6d48debfc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025318445s STEP: Saw pod success Jan 12 22:36:19.463: INFO: Pod "downwardapi-volume-b47cf463-8793-496c-8925-81c6d48debfc" satisfied condition "Succeeded or Failed" Jan 12 22:36:19.467: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-b47cf463-8793-496c-8925-81c6d48debfc container client-container: STEP: delete the pod Jan 12 22:36:19.582: INFO: Waiting for pod downwardapi-volume-b47cf463-8793-496c-8925-81c6d48debfc to disappear Jan 12 22:36:19.587: INFO: Pod downwardapi-volume-b47cf463-8793-496c-8925-81c6d48debfc no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:36:19.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8544" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":309,"completed":13,"skipped":299,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:36:19.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-test-upd-b6bb7321-a20f-453f-aa63-714703ab98a8 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-b6bb7321-a20f-453f-aa63-714703ab98a8 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:37:42.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3302" for this suite. 
• [SLOW TEST:82.760 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":309,"completed":14,"skipped":309,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:37:42.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:38:42.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7727" for this suite. 
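The probing spec above only asserts that the pod is never reported Ready and never restarted, so the log shows nothing between the BeforeEach and AfterEach while the test watches pod status for a minute. A hand-written sketch of the kind of container it exercises, assuming the v1.20-era k8s.io/api types (where Probe embeds Handler) and an illustrative image and command:

package sketch

import corev1 "k8s.io/api/core/v1"

// neverReadyContainer returns a container whose readiness probe always fails,
// so the pod stays Running but never becomes Ready and is never restarted
// (readiness failures, unlike liveness failures, do not trigger restarts).
func neverReadyContainer() corev1.Container {
	return corev1.Container{
		Name:    "test-webserver", // illustrative name
		Image:   "busybox",        // illustrative image choice
		Command: []string{"sleep", "3600"},
		ReadinessProbe: &corev1.Probe{
			Handler: corev1.Handler{
				Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
			},
			InitialDelaySeconds: 5,
			PeriodSeconds:       5,
			FailureThreshold:    3,
		},
	}
}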
• [SLOW TEST:60.131 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":309,"completed":15,"skipped":350,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:38:42.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-996, will wait for the garbage collector to delete the pods Jan 12 22:38:46.670: INFO: Deleting Job.batch foo took: 5.969484ms Jan 12 22:38:49.270: INFO: Terminating Job.batch foo pods took: 2.600274127s STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:40:20.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-996" for this suite. 
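As logged above, the framework deletes Job.batch foo and then waits for the garbage collector to remove its pods. One way to reproduce that shape of deletion with plain client-go (v0.20 signatures, namespace and name taken from the log) is a background-propagation delete, sketched below:

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteJobAndLetGCReap deletes the Job and leaves its pods to the garbage
// collector, which removes them asynchronously via their ownerReferences.
func deleteJobAndLetGCReap(ctx context.Context, client kubernetes.Interface) error {
	policy := metav1.DeletePropagationBackground
	return client.BatchV1().Jobs("job-996").Delete(ctx, "foo",
		metav1.DeleteOptions{PropagationPolicy: &policy})
}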
• [SLOW TEST:98.015 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":309,"completed":16,"skipped":359,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:40:20.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap that has name configmap-test-emptyKey-863063d1-6e03-4671-a2a7-9ee990ab0458 [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:40:20.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3959" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":309,"completed":17,"skipped":394,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:40:20.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 12 22:40:20.705: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-4274 create -f -' Jan 12 22:40:20.982: INFO: stderr: "" Jan 12 22:40:20.982: INFO: stdout: "replicationcontroller/agnhost-primary created\n" Jan 12 22:40:20.982: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-4274 create -f -' Jan 12 22:40:21.343: INFO: stderr: "" Jan 12 22:40:21.343: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. 
Jan 12 22:40:22.397: INFO: Selector matched 1 pods for map[app:agnhost] Jan 12 22:40:22.397: INFO: Found 0 / 1 Jan 12 22:40:23.347: INFO: Selector matched 1 pods for map[app:agnhost] Jan 12 22:40:23.347: INFO: Found 0 / 1 Jan 12 22:40:24.348: INFO: Selector matched 1 pods for map[app:agnhost] Jan 12 22:40:24.348: INFO: Found 1 / 1 Jan 12 22:40:24.348: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 12 22:40:24.352: INFO: Selector matched 1 pods for map[app:agnhost] Jan 12 22:40:24.352: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jan 12 22:40:24.352: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-4274 describe pod agnhost-primary-l6pf5' Jan 12 22:40:24.466: INFO: stderr: "" Jan 12 22:40:24.466: INFO: stdout: "Name: agnhost-primary-l6pf5\nNamespace: kubectl-4274\nPriority: 0\nNode: leguer-worker/172.18.0.13\nStart Time: Tue, 12 Jan 2021 22:40:21 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: \nStatus: Running\nIP: 10.244.2.177\nIPs:\n IP: 10.244.2.177\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://f8ff5c3e2f5b4776508f2c87868652579f138cd4f508e50f5159a63e7f7075bb\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.21\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 12 Jan 2021 22:40:23 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-f2b27 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-f2b27:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-f2b27\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-4274/agnhost-primary-l6pf5 to leguer-worker\n Normal Pulled 2s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.21\" already present on machine\n Normal Created 1s kubelet Created container agnhost-primary\n Normal Started 1s kubelet Started container agnhost-primary\n" Jan 12 22:40:24.466: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-4274 describe rc agnhost-primary' Jan 12 22:40:24.585: INFO: stderr: "" Jan 12 22:40:24.585: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-4274\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.21\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: agnhost-primary-l6pf5\n" Jan 12 22:40:24.586: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-4274 describe service 
agnhost-primary' Jan 12 22:40:24.698: INFO: stderr: "" Jan 12 22:40:24.698: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-4274\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Families: \nIP: 10.96.199.27\nIPs: 10.96.199.27\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.177:6379\nSession Affinity: None\nEvents: \n" Jan 12 22:40:24.702: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-4274 describe node leguer-control-plane' Jan 12 22:40:24.854: INFO: stderr: "" Jan 12 22:40:24.854: INFO: stdout: "Name: leguer-control-plane\nRoles: control-plane,master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=leguer-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/control-plane=\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 10 Jan 2021 17:37:43 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: leguer-control-plane\n AcquireTime: \n RenewTime: Tue, 12 Jan 2021 22:40:20 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Tue, 12 Jan 2021 22:36:59 +0000 Sun, 10 Jan 2021 17:37:43 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 12 Jan 2021 22:36:59 +0000 Sun, 10 Jan 2021 17:37:43 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 12 Jan 2021 22:36:59 +0000 Sun, 10 Jan 2021 17:37:43 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 12 Jan 2021 22:36:59 +0000 Sun, 10 Jan 2021 17:38:11 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.17\n Hostname: leguer-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nSystem Info:\n Machine ID: 5f1cb3b1931a44e6bb33804f4b6ca7e5\n System UUID: c2287e83-2c9f-458f-8294-12965d8d5e30\n Boot ID: b267d78b-f69b-4338-80e8-3f4944338e5d\n Kernel Version: 4.15.0-118-generic\n OS Image: Ubuntu Groovy Gorilla (development branch)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.4.0\n Kubelet Version: v1.20.0\n Kube-Proxy Version: v1.20.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nProviderID: kind://docker/leguer/leguer-control-plane\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-74ff55c5b-flmf7 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 2d5h\n kube-system coredns-74ff55c5b-whxn7 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 2d5h\n kube-system etcd-leguer-control-plane 100m (0%) 0 (0%) 100Mi (0%) 0 (0%) 2d5h\n kube-system kindnet-rjz52 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 2d5h\n kube-system kube-apiserver-leguer-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 2d5h\n kube-system 
kube-controller-manager-leguer-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 2d5h\n kube-system kube-proxy-chqjl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d5h\n kube-system kube-scheduler-leguer-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 2d5h\n local-path-storage local-path-provisioner-78776bfc44-45fhs 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d5h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 950m (5%) 100m (0%)\n memory 290Mi (0%) 390Mi (0%)\n ephemeral-storage 100Mi (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Jan 12 22:40:24.855: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-4274 describe namespace kubectl-4274' Jan 12 22:40:24.963: INFO: stderr: "" Jan 12 22:40:24.963: INFO: stdout: "Name: kubectl-4274\nLabels: e2e-framework=kubectl\n e2e-run=a764d39e-9fd6-4322-bfe7-985107111e58\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:40:24.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4274" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":309,"completed":18,"skipped":396,"failed":0} ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:40:24.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod pod-subpath-test-configmap-hdjc STEP: Creating a pod to test atomic-volume-subpath Jan 12 22:40:25.086: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-hdjc" in namespace "subpath-3440" to be "Succeeded or Failed" Jan 12 22:40:25.121: INFO: Pod "pod-subpath-test-configmap-hdjc": Phase="Pending", Reason="", readiness=false. Elapsed: 35.285233ms Jan 12 22:40:27.127: INFO: Pod "pod-subpath-test-configmap-hdjc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040663561s Jan 12 22:40:29.131: INFO: Pod "pod-subpath-test-configmap-hdjc": Phase="Running", Reason="", readiness=true. Elapsed: 4.045108485s Jan 12 22:40:31.136: INFO: Pod "pod-subpath-test-configmap-hdjc": Phase="Running", Reason="", readiness=true. Elapsed: 6.04977307s Jan 12 22:40:33.140: INFO: Pod "pod-subpath-test-configmap-hdjc": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.053958593s Jan 12 22:40:35.143: INFO: Pod "pod-subpath-test-configmap-hdjc": Phase="Running", Reason="", readiness=true. Elapsed: 10.057285432s Jan 12 22:40:37.148: INFO: Pod "pod-subpath-test-configmap-hdjc": Phase="Running", Reason="", readiness=true. Elapsed: 12.062280903s Jan 12 22:40:39.153: INFO: Pod "pod-subpath-test-configmap-hdjc": Phase="Running", Reason="", readiness=true. Elapsed: 14.067069634s Jan 12 22:40:41.158: INFO: Pod "pod-subpath-test-configmap-hdjc": Phase="Running", Reason="", readiness=true. Elapsed: 16.071672743s Jan 12 22:40:43.163: INFO: Pod "pod-subpath-test-configmap-hdjc": Phase="Running", Reason="", readiness=true. Elapsed: 18.076422951s Jan 12 22:40:45.168: INFO: Pod "pod-subpath-test-configmap-hdjc": Phase="Running", Reason="", readiness=true. Elapsed: 20.081941473s Jan 12 22:40:47.173: INFO: Pod "pod-subpath-test-configmap-hdjc": Phase="Running", Reason="", readiness=true. Elapsed: 22.086421169s Jan 12 22:40:49.176: INFO: Pod "pod-subpath-test-configmap-hdjc": Phase="Running", Reason="", readiness=true. Elapsed: 24.090030032s Jan 12 22:40:51.180: INFO: Pod "pod-subpath-test-configmap-hdjc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.093517698s STEP: Saw pod success Jan 12 22:40:51.180: INFO: Pod "pod-subpath-test-configmap-hdjc" satisfied condition "Succeeded or Failed" Jan 12 22:40:51.182: INFO: Trying to get logs from node leguer-worker2 pod pod-subpath-test-configmap-hdjc container test-container-subpath-configmap-hdjc: STEP: delete the pod Jan 12 22:40:51.260: INFO: Waiting for pod pod-subpath-test-configmap-hdjc to disappear Jan 12 22:40:51.264: INFO: Pod pod-subpath-test-configmap-hdjc no longer exists STEP: Deleting pod pod-subpath-test-configmap-hdjc Jan 12 22:40:51.264: INFO: Deleting pod "pod-subpath-test-configmap-hdjc" in namespace "subpath-3440" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:40:51.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3440" for this suite. 
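The subpath pod above mounts a ConfigMap volume and reads a single file through a subPath while the atomic writer rewrites it, which accounts for the ~26 s of Running polls. A minimal sketch of a pod spec that mounts one ConfigMap key via subPath, assuming v1.20-era k8s.io/api types and illustrative names:

package sketch

import corev1 "k8s.io/api/core/v1"

// configMapSubpathPodSpec mounts a single file from a ConfigMap volume via
// subPath, which is the mechanism the atomic-writer subpath tests exercise.
func configMapSubpathPodSpec() corev1.PodSpec {
	return corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Volumes: []corev1.Volume{{
			Name: "config",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "my-config"}, // illustrative
				},
			},
		}},
		Containers: []corev1.Container{{
			Name:    "test-container-subpath-configmap",
			Image:   "busybox", // illustrative image
			Command: []string{"sh", "-c", "cat /test-volume/conf.txt && sleep 20"},
			VolumeMounts: []corev1.VolumeMount{{
				Name:      "config",
				MountPath: "/test-volume/conf.txt",
				SubPath:   "conf.txt", // illustrative key inside the ConfigMap
			}},
		}},
	}
}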
• [SLOW TEST:26.299 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":309,"completed":19,"skipped":396,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:40:51.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jan 12 22:40:51.416: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5941 83840ab4-1b9e-4759-b378-7de8bbe7f406 413325 0 2021-01-12 22:40:51 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-01-12 22:40:51 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 12 22:40:51.417: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5941 83840ab4-1b9e-4759-b378-7de8bbe7f406 413326 0 2021-01-12 22:40:51 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-01-12 22:40:51 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:40:51.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5941" for this suite. 
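The two notifications above (MODIFIED at resourceVersion 413325 and DELETED at 413326) come from a watch opened at the resource version returned by the first update. A rough client-go sketch of opening such a watch, assuming v0.20 signatures and that the starting resourceVersion is passed in by the caller:

package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchConfigMapFrom opens a watch that replays every change to the named
// ConfigMap that happened after the given resourceVersion.
func watchConfigMapFrom(ctx context.Context, client kubernetes.Interface, rv string) error {
	w, err := client.CoreV1().ConfigMaps("watch-5941").Watch(ctx, metav1.ListOptions{
		FieldSelector:   "metadata.name=e2e-watch-test-resource-version",
		ResourceVersion: rv, // e.g. the version returned by the first update
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
	return nil
}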
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":309,"completed":20,"skipped":415,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] PodTemplates should delete a collection of pod templates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:40:51.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of pod templates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Create set of pod templates Jan 12 22:40:51.485: INFO: created test-podtemplate-1 Jan 12 22:40:51.528: INFO: created test-podtemplate-2 Jan 12 22:40:51.531: INFO: created test-podtemplate-3 STEP: get a list of pod templates with a label in the current namespace STEP: delete collection of pod templates Jan 12 22:40:51.539: INFO: requesting DeleteCollection of pod templates STEP: check that the list of pod templates matches the requested quantity Jan 12 22:40:51.559: INFO: requesting list of pod templates to confirm quantity [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:40:51.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-7585" for this suite. 
•{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":309,"completed":21,"skipped":463,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:40:51.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating Pod STEP: Reading file content from the nginx-container Jan 12 22:40:57.738: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-7956 PodName:pod-sharedvolume-8d8d11e5-8714-4cba-b55c-b25ac0502f3c ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 12 22:40:57.738: INFO: >>> kubeConfig: /root/.kube/config I0112 22:40:57.781864 7 log.go:181] (0xc002dfe630) (0xc002f71e00) Create stream I0112 22:40:57.781889 7 log.go:181] (0xc002dfe630) (0xc002f71e00) Stream added, broadcasting: 1 I0112 22:40:57.783987 7 log.go:181] (0xc002dfe630) Reply frame received for 1 I0112 22:40:57.784056 7 log.go:181] (0xc002dfe630) (0xc0020de320) Create stream I0112 22:40:57.784074 7 log.go:181] (0xc002dfe630) (0xc0020de320) Stream added, broadcasting: 3 I0112 22:40:57.785083 7 log.go:181] (0xc002dfe630) Reply frame received for 3 I0112 22:40:57.785122 7 log.go:181] (0xc002dfe630) (0xc001384140) Create stream I0112 22:40:57.785137 7 log.go:181] (0xc002dfe630) (0xc001384140) Stream added, broadcasting: 5 I0112 22:40:57.785919 7 log.go:181] (0xc002dfe630) Reply frame received for 5 I0112 22:40:57.850059 7 log.go:181] (0xc002dfe630) Data frame received for 5 I0112 22:40:57.850099 7 log.go:181] (0xc001384140) (5) Data frame handling I0112 22:40:57.850128 7 log.go:181] (0xc002dfe630) Data frame received for 3 I0112 22:40:57.850141 7 log.go:181] (0xc0020de320) (3) Data frame handling I0112 22:40:57.850158 7 log.go:181] (0xc0020de320) (3) Data frame sent I0112 22:40:57.850171 7 log.go:181] (0xc002dfe630) Data frame received for 3 I0112 22:40:57.850184 7 log.go:181] (0xc0020de320) (3) Data frame handling I0112 22:40:57.851831 7 log.go:181] (0xc002dfe630) Data frame received for 1 I0112 22:40:57.851867 7 log.go:181] (0xc002f71e00) (1) Data frame handling I0112 22:40:57.851899 7 log.go:181] (0xc002f71e00) (1) Data frame sent I0112 22:40:57.852037 7 log.go:181] (0xc002dfe630) (0xc002f71e00) Stream removed, broadcasting: 1 I0112 22:40:57.852127 7 log.go:181] (0xc002dfe630) Go away received I0112 22:40:57.852165 7 log.go:181] (0xc002dfe630) (0xc002f71e00) Stream removed, broadcasting: 1 I0112 22:40:57.852184 7 log.go:181] (0xc002dfe630) (0xc0020de320) Stream removed, broadcasting: 3 I0112 22:40:57.852198 7 log.go:181] (0xc002dfe630) (0xc001384140) Stream removed, broadcasting: 5 Jan 12 22:40:57.852: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:40:57.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7956" for this suite. • [SLOW TEST:6.291 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45 pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":309,"completed":22,"skipped":473,"failed":0} [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:40:57.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Jan 12 22:40:57.989: INFO: Waiting up to 1m0s for all nodes to be ready Jan 12 22:41:58.013: INFO: Waiting for terminating namespaces to be deleted... [It] validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Create pods that use 2/3 of node resources. Jan 12 22:41:58.035: INFO: Created pod: pod0-sched-preemption-low-priority Jan 12 22:41:58.075: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a high priority pod that has same requirements as that of lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:42:44.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-7057" for this suite. 
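The preemption spec above fills roughly 2/3 of node resources with low- and medium-priority pods, then schedules a high-priority pod with the same requests and expects the scheduler to evict a lower-priority victim. A rough sketch of the ingredients, assuming v0.20-era APIs with illustrative names, image, and request sizes:

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createHighPriorityPod registers a PriorityClass and creates a pod that uses
// it; if the node is already full of lower-priority pods with similar
// requests, the scheduler preempts one of them to make room.
func createHighPriorityPod(ctx context.Context, client kubernetes.Interface) error {
	pc := &schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "sched-preemption-high-priority"}, // illustrative
		Value:      1000,
	}
	if _, err := client.SchedulingV1().PriorityClasses().Create(ctx, pc, metav1.CreateOptions{}); err != nil {
		return err
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "preemptor-pod", Namespace: "sched-preemption-7057"},
		Spec: corev1.PodSpec{
			PriorityClassName: pc.Name,
			Containers: []corev1.Container{{
				Name:  "preemptor",
				Image: "k8s.gcr.io/pause:3.2", // illustrative image
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("40Gi")}, // illustrative size
				},
			}},
		},
	}
	_, err := client.CoreV1().Pods(pod.Namespace).Create(ctx, pod, metav1.CreateOptions{})
	return err
}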
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:106.395 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":309,"completed":23,"skipped":473,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:42:44.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Jan 12 22:42:44.453: INFO: >>> kubeConfig: /root/.kube/config Jan 12 22:42:47.582: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:43:01.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3651" for this suite. 
• [SLOW TEST:17.179 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":309,"completed":24,"skipped":476,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:43:01.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jan 12 22:43:01.540: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6995 1bcb6092-29c4-4040-afab-8b7acfe15e47 413875 0 2021-01-12 22:43:01 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-01-12 22:43:01 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jan 12 22:43:01.540: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6995 1bcb6092-29c4-4040-afab-8b7acfe15e47 413876 0 2021-01-12 22:43:01 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-01-12 22:43:01 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Jan 12 22:43:01.569: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6995 1bcb6092-29c4-4040-afab-8b7acfe15e47 413877 0 2021-01-12 22:43:01 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-01-12 22:43:01 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 12 22:43:01.569: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6995 
1bcb6092-29c4-4040-afab-8b7acfe15e47 413878 0 2021-01-12 22:43:01 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-01-12 22:43:01 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:43:01.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6995" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":309,"completed":25,"skipped":490,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:43:01.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0112 22:43:02.797044 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Jan 12 22:44:04.896: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:44:04.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4470" for this suite. 
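In the garbage-collector spec above, the deployment is deleted with PropagationPolicy=Orphan, and the test then waits about a minute to confirm the garbage collector did not mistakenly remove the ReplicaSet. A minimal client-go sketch of that delete, with an illustrative deployment name (the log does not record it):

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteDeploymentOrphaningRS deletes a Deployment while orphaning its
// dependents: the owned ReplicaSet (and its pods) keep running, and the
// garbage collector only strips their ownerReferences.
func deleteDeploymentOrphaningRS(ctx context.Context, client kubernetes.Interface) error {
	policy := metav1.DeletePropagationOrphan
	return client.AppsV1().Deployments("gc-4470").Delete(ctx,
		"simpletest.deployment", // illustrative name
		metav1.DeleteOptions{PropagationPolicy: &policy})
}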
• [SLOW TEST:63.327 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":309,"completed":26,"skipped":574,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:44:04.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0112 22:44:06.630794 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Jan 12 22:45:08.676: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:45:08.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7752" for this suite. 
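The non-orphaning variant above deletes the deployment without an Orphan policy and waits for the ReplicaSet and pods to be garbage-collected, which is why the log briefly still observes 1 rs and 2 pods before they disappear. A matching sketch using foreground propagation, which holds the delete open until all dependents are gone (illustrative name again, and one of several propagation modes that cascade):

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteDeploymentCascading deletes a Deployment and lets the garbage
// collector remove the owned ReplicaSet and pods. Foreground propagation keeps
// the Deployment object around (with a deletion timestamp) until they are gone.
func deleteDeploymentCascading(ctx context.Context, client kubernetes.Interface) error {
	policy := metav1.DeletePropagationForeground
	return client.AppsV1().Deployments("gc-7752").Delete(ctx,
		"simpletest.deployment", // illustrative name
		metav1.DeleteOptions{PropagationPolicy: &policy})
}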
• [SLOW TEST:63.788 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":309,"completed":27,"skipped":586,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:45:08.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name secret-test-map-3e5aa3ac-2319-4a45-9cf5-2d26975f63b2 STEP: Creating a pod to test consume secrets Jan 12 22:45:08.803: INFO: Waiting up to 5m0s for pod "pod-secrets-56349dcc-4c61-4317-b8cc-3f7c375f9932" in namespace "secrets-92" to be "Succeeded or Failed" Jan 12 22:45:08.829: INFO: Pod "pod-secrets-56349dcc-4c61-4317-b8cc-3f7c375f9932": Phase="Pending", Reason="", readiness=false. Elapsed: 26.421468ms Jan 12 22:45:10.833: INFO: Pod "pod-secrets-56349dcc-4c61-4317-b8cc-3f7c375f9932": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030597869s Jan 12 22:45:12.837: INFO: Pod "pod-secrets-56349dcc-4c61-4317-b8cc-3f7c375f9932": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034136774s STEP: Saw pod success Jan 12 22:45:12.837: INFO: Pod "pod-secrets-56349dcc-4c61-4317-b8cc-3f7c375f9932" satisfied condition "Succeeded or Failed" Jan 12 22:45:12.840: INFO: Trying to get logs from node leguer-worker2 pod pod-secrets-56349dcc-4c61-4317-b8cc-3f7c375f9932 container secret-volume-test: STEP: delete the pod Jan 12 22:45:13.402: INFO: Waiting for pod pod-secrets-56349dcc-4c61-4317-b8cc-3f7c375f9932 to disappear Jan 12 22:45:13.431: INFO: Pod pod-secrets-56349dcc-4c61-4317-b8cc-3f7c375f9932 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:45:13.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-92" for this suite. 
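The secret-volume spec above remaps one key of the secret to a new path and sets an explicit file mode on it ("mappings and Item Mode set"). A minimal sketch of the volume definition being exercised, assuming v1.20-era k8s.io/api types and illustrative key and path names:

package sketch

import corev1 "k8s.io/api/core/v1"

// secretVolumeWithItemMode projects a single key of a Secret to a chosen
// path inside the volume and gives that file mode 0400.
func secretVolumeWithItemMode() corev1.Volume {
	mode := int32(0400)
	return corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: "secret-test-map-3e5aa3ac-2319-4a45-9cf5-2d26975f63b2",
				Items: []corev1.KeyToPath{{
					Key:  "data-1",          // illustrative key
					Path: "new-path-data-1", // illustrative target path
					Mode: &mode,
				}},
			},
		},
	}
}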
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":28,"skipped":609,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:45:13.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name projected-configmap-test-volume-map-0c043df2-066a-41b8-bca7-5162d07b405e STEP: Creating a pod to test consume configMaps Jan 12 22:45:13.645: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-013060a5-94b6-41fb-8442-c41ed7de665e" in namespace "projected-1204" to be "Succeeded or Failed" Jan 12 22:45:13.648: INFO: Pod "pod-projected-configmaps-013060a5-94b6-41fb-8442-c41ed7de665e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.010275ms Jan 12 22:45:15.735: INFO: Pod "pod-projected-configmaps-013060a5-94b6-41fb-8442-c41ed7de665e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089710394s Jan 12 22:45:18.004: INFO: Pod "pod-projected-configmaps-013060a5-94b6-41fb-8442-c41ed7de665e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.359320282s STEP: Saw pod success Jan 12 22:45:18.004: INFO: Pod "pod-projected-configmaps-013060a5-94b6-41fb-8442-c41ed7de665e" satisfied condition "Succeeded or Failed" Jan 12 22:45:18.007: INFO: Trying to get logs from node leguer-worker pod pod-projected-configmaps-013060a5-94b6-41fb-8442-c41ed7de665e container agnhost-container: STEP: delete the pod Jan 12 22:45:18.083: INFO: Waiting for pod pod-projected-configmaps-013060a5-94b6-41fb-8442-c41ed7de665e to disappear Jan 12 22:45:18.092: INFO: Pod pod-projected-configmaps-013060a5-94b6-41fb-8442-c41ed7de665e no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:45:18.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1204" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":29,"skipped":616,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:45:18.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 12 22:45:18.627: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 12 22:45:20.639: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746088318, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746088318, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746088318, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746088318, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 12 22:45:22.643: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746088318, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746088318, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746088318, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746088318, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 12 22:45:25.674: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a 
validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:45:25.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5714" for this suite. STEP: Destroying namespace "webhook-5714-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:7.923 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":309,"completed":30,"skipped":624,"failed":0} SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:45:26.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jan 12 22:45:34.166: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 12 22:45:34.274: INFO: Pod pod-with-prestop-exec-hook still exists Jan 12 22:45:36.274: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 12 22:45:36.280: INFO: Pod pod-with-prestop-exec-hook still exists Jan 12 22:45:38.274: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 12 22:45:38.286: INFO: Pod pod-with-prestop-exec-hook still exists Jan 12 22:45:40.274: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 12 22:45:40.280: INFO: Pod pod-with-prestop-exec-hook still exists Jan 12 22:45:42.274: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 12 22:45:42.278: INFO: Pod pod-with-prestop-exec-hook still exists Jan 12 22:45:44.274: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 12 22:45:44.279: INFO: Pod pod-with-prestop-exec-hook still exists Jan 12 22:45:46.274: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 12 22:45:46.279: INFO: Pod pod-with-prestop-exec-hook still exists Jan 12 22:45:48.274: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 12 22:45:48.279: INFO: Pod pod-with-prestop-exec-hook still exists Jan 12 22:45:50.274: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 12 22:45:50.286: INFO: Pod pod-with-prestop-exec-hook still exists Jan 12 22:45:52.274: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 12 22:45:52.282: INFO: Pod pod-with-prestop-exec-hook still exists Jan 12 22:45:54.274: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 12 22:45:54.278: INFO: Pod pod-with-prestop-exec-hook still exists Jan 12 22:45:56.274: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 12 22:45:56.280: INFO: Pod pod-with-prestop-exec-hook still exists Jan 12 22:45:58.274: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 12 22:45:58.279: INFO: Pod pod-with-prestop-exec-hook still exists Jan 12 22:46:00.274: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 12 22:46:00.279: INFO: Pod pod-with-prestop-exec-hook still exists Jan 12 22:46:02.274: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 12 22:46:02.281: INFO: Pod pod-with-prestop-exec-hook still exists Jan 12 22:46:04.274: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 12 22:46:04.280: INFO: Pod pod-with-prestop-exec-hook still exists Jan 12 22:46:06.274: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 12 22:46:06.278: INFO: Pod pod-with-prestop-exec-hook still exists Jan 12 22:46:08.274: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 12 22:46:08.586: INFO: Pod pod-with-prestop-exec-hook still exists Jan 12 22:46:10.274: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 12 22:46:10.279: INFO: Pod pod-with-prestop-exec-hook still exists Jan 12 22:46:12.274: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 12 22:46:12.279: INFO: Pod pod-with-prestop-exec-hook still exists Jan 12 22:46:14.274: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 12 22:46:14.279: INFO: Pod pod-with-prestop-exec-hook still exists Jan 12 22:46:16.274: INFO: Waiting for pod 
pod-with-prestop-exec-hook to disappear Jan 12 22:46:16.279: INFO: Pod pod-with-prestop-exec-hook still exists Jan 12 22:46:18.274: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 12 22:46:18.278: INFO: Pod pod-with-prestop-exec-hook still exists Jan 12 22:46:20.274: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 12 22:46:20.278: INFO: Pod pod-with-prestop-exec-hook still exists Jan 12 22:46:22.274: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 12 22:46:22.278: INFO: Pod pod-with-prestop-exec-hook still exists Jan 12 22:46:24.274: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 12 22:46:24.284: INFO: Pod pod-with-prestop-exec-hook still exists Jan 12 22:46:26.274: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 12 22:46:26.279: INFO: Pod pod-with-prestop-exec-hook still exists Jan 12 22:46:28.274: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 12 22:46:28.279: INFO: Pod pod-with-prestop-exec-hook still exists Jan 12 22:46:30.274: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 12 22:46:30.286: INFO: Pod pod-with-prestop-exec-hook still exists Jan 12 22:46:32.274: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 12 22:46:32.418: INFO: Pod pod-with-prestop-exec-hook still exists Jan 12 22:46:34.274: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 12 22:46:34.278: INFO: Pod pod-with-prestop-exec-hook still exists Jan 12 22:46:36.274: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 12 22:46:36.292: INFO: Pod pod-with-prestop-exec-hook still exists Jan 12 22:46:38.274: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 12 22:46:38.280: INFO: Pod pod-with-prestop-exec-hook still exists Jan 12 22:46:40.274: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 12 22:46:40.299: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:46:40.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7725" for this suite. 
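A preStop exec hook, exercised above, is a command the kubelet runs inside the container before it is terminated; the repeated "still exists" polling is the test waiting for the pod to finish terminating after the hook has fired. A minimal sketch of such a pod in Go, with a placeholder image and hook command rather than the suite's exact spec:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// preStopPod builds an illustrative pod whose container runs a command via a
// preStop exec hook before shutdown. Image and command are placeholders.
func preStopPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-prestop-exec-hook",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.21",
				Lifecycle: &corev1.Lifecycle{
					// The handler type is named Handler in the client-go
					// release matching this run (v1.20); newer releases
					// call it LifecycleHandler.
					PreStop: &corev1.Handler{
						Exec: &corev1.ExecAction{
							Command: []string{"sh", "-c", "echo prestop"},
						},
					},
				},
			}},
		},
	}
}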
• [SLOW TEST:74.305 seconds] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:43 should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":309,"completed":31,"skipped":627,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:46:40.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward api env vars Jan 12 22:46:40.434: INFO: Waiting up to 5m0s for pod "downward-api-54c55af1-f8a7-444a-8f71-319eef991b6c" in namespace "downward-api-2478" to be "Succeeded or Failed" Jan 12 22:46:40.436: INFO: Pod "downward-api-54c55af1-f8a7-444a-8f71-319eef991b6c": Phase="Pending", Reason="", readiness=false. Elapsed: 1.947873ms Jan 12 22:46:42.441: INFO: Pod "downward-api-54c55af1-f8a7-444a-8f71-319eef991b6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006937259s Jan 12 22:46:44.446: INFO: Pod "downward-api-54c55af1-f8a7-444a-8f71-319eef991b6c": Phase="Running", Reason="", readiness=true. Elapsed: 4.011568413s Jan 12 22:46:46.490: INFO: Pod "downward-api-54c55af1-f8a7-444a-8f71-319eef991b6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.056005003s STEP: Saw pod success Jan 12 22:46:46.490: INFO: Pod "downward-api-54c55af1-f8a7-444a-8f71-319eef991b6c" satisfied condition "Succeeded or Failed" Jan 12 22:46:46.493: INFO: Trying to get logs from node leguer-worker pod downward-api-54c55af1-f8a7-444a-8f71-319eef991b6c container dapi-container: STEP: delete the pod Jan 12 22:46:46.731: INFO: Waiting for pod downward-api-54c55af1-f8a7-444a-8f71-319eef991b6c to disappear Jan 12 22:46:46.781: INFO: Pod downward-api-54c55af1-f8a7-444a-8f71-319eef991b6c no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:46:46.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2478" for this suite. 
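The Downward API entry above injects the pod's own UID into its container environment via a fieldRef. A minimal sketch of the pattern in Go; the env var name, image and command are placeholders, not the suite's exact values:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIUIDPod exposes the pod's UID to its container as an env var.
func downwardAPIUIDPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-uid"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "dapi-container",
				Image: "docker.io/library/busybox:1.29",
				// Print the injected value and exit so the pod reaches Succeeded.
				Command: []string{"sh", "-c", "echo POD_UID=$POD_UID"},
				Env: []corev1.EnvVar{{
					Name: "POD_UID",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
					},
				}},
			}},
		},
	}
}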
• [SLOW TEST:6.461 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":309,"completed":32,"skipped":652,"failed":0} [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:46:46.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating the pod Jan 12 22:46:51.952: INFO: Successfully updated pod "labelsupdatee4a81d49-4fb1-44da-b262-14817d4e1cbf" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:46:56.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7266" for this suite. • [SLOW TEST:9.226 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":309,"completed":33,"skipped":652,"failed":0} SSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:46:56.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
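The object dump that follows is hard to scan; reduced to the DNS-relevant fields (values taken from the dump, all other fields omitted), the pod being created here looks like this:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// dnsNameserversPod mirrors the DNS-relevant fields of the dumped pod:
// dnsPolicy None disables cluster DNS injection, and dnsConfig supplies the
// nameserver and search path that the test later reads back from the pod.
func dnsNameserversPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-dns-nameservers"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "agnhost-container",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.21",
				Args:  []string{"pause"},
			}},
			DNSPolicy: corev1.DNSNone,
			DNSConfig: &corev1.PodDNSConfig{
				Nameservers: []string{"1.1.1.1"},
				Searches:    []string{"resolv.conf.local"},
			},
		},
	}
}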
Jan 12 22:46:56.119: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-5284 bd64240e-1cc5-4c15-ba0e-868ab7d108d5 414985 0 2021-01-12 22:46:56 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2021-01-12 22:46:56 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b74zr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b74zr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b74zr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHos
tnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 12 22:46:56.134: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Jan 12 22:46:58.140: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Jan 12 22:47:00.137: INFO: The status of Pod test-dns-nameservers is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... Jan 12 22:47:00.137: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-5284 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 12 22:47:00.137: INFO: >>> kubeConfig: /root/.kube/config I0112 22:47:00.173594 7 log.go:181] (0xc003f10370) (0xc002d84780) Create stream I0112 22:47:00.173628 7 log.go:181] (0xc003f10370) (0xc002d84780) Stream added, broadcasting: 1 I0112 22:47:00.176726 7 log.go:181] (0xc003f10370) Reply frame received for 1 I0112 22:47:00.176765 7 log.go:181] (0xc003f10370) (0xc002a3cbe0) Create stream I0112 22:47:00.176779 7 log.go:181] (0xc003f10370) (0xc002a3cbe0) Stream added, broadcasting: 3 I0112 22:47:00.177841 7 log.go:181] (0xc003f10370) Reply frame received for 3 I0112 22:47:00.177889 7 log.go:181] (0xc003f10370) (0xc00275e1e0) Create stream I0112 22:47:00.177913 7 log.go:181] (0xc003f10370) (0xc00275e1e0) Stream added, broadcasting: 5 I0112 22:47:00.178859 7 log.go:181] (0xc003f10370) Reply frame received for 5 I0112 22:47:00.294222 7 log.go:181] (0xc003f10370) Data frame received for 3 I0112 22:47:00.294250 7 log.go:181] (0xc002a3cbe0) (3) Data frame handling I0112 22:47:00.294268 7 log.go:181] (0xc002a3cbe0) (3) Data frame sent I0112 22:47:00.295156 7 log.go:181] (0xc003f10370) Data frame received for 3 I0112 22:47:00.295198 7 log.go:181] (0xc003f10370) Data frame received for 5 I0112 22:47:00.295235 7 log.go:181] (0xc00275e1e0) (5) Data frame handling I0112 22:47:00.295267 7 log.go:181] (0xc002a3cbe0) (3) Data frame handling I0112 22:47:00.297392 7 log.go:181] (0xc003f10370) Data frame received for 1 I0112 22:47:00.297408 7 log.go:181] (0xc002d84780) (1) Data frame handling I0112 22:47:00.297418 7 log.go:181] (0xc002d84780) (1) Data frame sent I0112 22:47:00.297426 7 log.go:181] (0xc003f10370) (0xc002d84780) Stream removed, broadcasting: 1 I0112 22:47:00.297434 7 log.go:181] (0xc003f10370) Go away received I0112 22:47:00.297565 7 log.go:181] (0xc003f10370) (0xc002d84780) Stream removed, broadcasting: 1 I0112 22:47:00.297585 7 log.go:181] (0xc003f10370) (0xc002a3cbe0) Stream removed, broadcasting: 3 I0112 22:47:00.297594 7 log.go:181] (0xc003f10370) (0xc00275e1e0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
Jan 12 22:47:00.297: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-5284 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 12 22:47:00.297: INFO: >>> kubeConfig: /root/.kube/config I0112 22:47:00.332736 7 log.go:181] (0xc002dfe6e0) (0xc001a34960) Create stream I0112 22:47:00.332776 7 log.go:181] (0xc002dfe6e0) (0xc001a34960) Stream added, broadcasting: 1 I0112 22:47:00.335109 7 log.go:181] (0xc002dfe6e0) Reply frame received for 1 I0112 22:47:00.335142 7 log.go:181] (0xc002dfe6e0) (0xc00275e280) Create stream I0112 22:47:00.335156 7 log.go:181] (0xc002dfe6e0) (0xc00275e280) Stream added, broadcasting: 3 I0112 22:47:00.336108 7 log.go:181] (0xc002dfe6e0) Reply frame received for 3 I0112 22:47:00.336141 7 log.go:181] (0xc002dfe6e0) (0xc002d84820) Create stream I0112 22:47:00.336153 7 log.go:181] (0xc002dfe6e0) (0xc002d84820) Stream added, broadcasting: 5 I0112 22:47:00.337370 7 log.go:181] (0xc002dfe6e0) Reply frame received for 5 I0112 22:47:00.410568 7 log.go:181] (0xc002dfe6e0) Data frame received for 3 I0112 22:47:00.410595 7 log.go:181] (0xc00275e280) (3) Data frame handling I0112 22:47:00.410611 7 log.go:181] (0xc00275e280) (3) Data frame sent I0112 22:47:00.412549 7 log.go:181] (0xc002dfe6e0) Data frame received for 5 I0112 22:47:00.412575 7 log.go:181] (0xc002d84820) (5) Data frame handling I0112 22:47:00.412691 7 log.go:181] (0xc002dfe6e0) Data frame received for 3 I0112 22:47:00.412737 7 log.go:181] (0xc00275e280) (3) Data frame handling I0112 22:47:00.414000 7 log.go:181] (0xc002dfe6e0) Data frame received for 1 I0112 22:47:00.414024 7 log.go:181] (0xc001a34960) (1) Data frame handling I0112 22:47:00.414045 7 log.go:181] (0xc001a34960) (1) Data frame sent I0112 22:47:00.414112 7 log.go:181] (0xc002dfe6e0) (0xc001a34960) Stream removed, broadcasting: 1 I0112 22:47:00.414176 7 log.go:181] (0xc002dfe6e0) (0xc001a34960) Stream removed, broadcasting: 1 I0112 22:47:00.414194 7 log.go:181] (0xc002dfe6e0) (0xc00275e280) Stream removed, broadcasting: 3 I0112 22:47:00.414206 7 log.go:181] (0xc002dfe6e0) (0xc002d84820) Stream removed, broadcasting: 5 Jan 12 22:47:00.414: INFO: Deleting pod test-dns-nameservers... I0112 22:47:00.414238 7 log.go:181] (0xc002dfe6e0) Go away received [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:47:00.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5284" for this suite. 
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":309,"completed":34,"skipped":660,"failed":0} ------------------------------ [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:47:00.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service in namespace services-4786 STEP: creating service affinity-clusterip in namespace services-4786 STEP: creating replication controller affinity-clusterip in namespace services-4786 I0112 22:47:00.950496 7 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-4786, replica count: 3 I0112 22:47:04.001033 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0112 22:47:07.001318 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 12 22:47:07.006: INFO: Creating new exec pod Jan 12 22:47:12.023: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-4786 exec execpod-affinityss4rb -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' Jan 12 22:47:15.861: INFO: stderr: "I0112 22:47:15.769630 495 log.go:181] (0xc000afcdc0) (0xc000ec2320) Create stream\nI0112 22:47:15.769707 495 log.go:181] (0xc000afcdc0) (0xc000ec2320) Stream added, broadcasting: 1\nI0112 22:47:15.773836 495 log.go:181] (0xc000afcdc0) Reply frame received for 1\nI0112 22:47:15.773939 495 log.go:181] (0xc000afcdc0) (0xc000b06000) Create stream\nI0112 22:47:15.773986 495 log.go:181] (0xc000afcdc0) (0xc000b06000) Stream added, broadcasting: 3\nI0112 22:47:15.776751 495 log.go:181] (0xc000afcdc0) Reply frame received for 3\nI0112 22:47:15.776803 495 log.go:181] (0xc000afcdc0) (0xc0006b8140) Create stream\nI0112 22:47:15.776817 495 log.go:181] (0xc000afcdc0) (0xc0006b8140) Stream added, broadcasting: 5\nI0112 22:47:15.777712 495 log.go:181] (0xc000afcdc0) Reply frame received for 5\nI0112 22:47:15.841397 495 log.go:181] (0xc000afcdc0) Data frame received for 5\nI0112 22:47:15.841444 495 log.go:181] (0xc0006b8140) (5) Data frame handling\nI0112 22:47:15.841489 495 log.go:181] (0xc0006b8140) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip 80\nI0112 22:47:15.853810 495 log.go:181] (0xc000afcdc0) Data frame received for 5\nI0112 22:47:15.853852 495 log.go:181] (0xc0006b8140) (5) Data frame handling\nI0112 22:47:15.853881 495 log.go:181] (0xc0006b8140) (5) Data frame sent\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI0112 22:47:15.854472 495 
log.go:181] (0xc000afcdc0) Data frame received for 5\nI0112 22:47:15.854503 495 log.go:181] (0xc0006b8140) (5) Data frame handling\nI0112 22:47:15.854587 495 log.go:181] (0xc000afcdc0) Data frame received for 3\nI0112 22:47:15.854610 495 log.go:181] (0xc000b06000) (3) Data frame handling\nI0112 22:47:15.856289 495 log.go:181] (0xc000afcdc0) Data frame received for 1\nI0112 22:47:15.856312 495 log.go:181] (0xc000ec2320) (1) Data frame handling\nI0112 22:47:15.856325 495 log.go:181] (0xc000ec2320) (1) Data frame sent\nI0112 22:47:15.856335 495 log.go:181] (0xc000afcdc0) (0xc000ec2320) Stream removed, broadcasting: 1\nI0112 22:47:15.856349 495 log.go:181] (0xc000afcdc0) Go away received\nI0112 22:47:15.856653 495 log.go:181] (0xc000afcdc0) (0xc000ec2320) Stream removed, broadcasting: 1\nI0112 22:47:15.856666 495 log.go:181] (0xc000afcdc0) (0xc000b06000) Stream removed, broadcasting: 3\nI0112 22:47:15.856671 495 log.go:181] (0xc000afcdc0) (0xc0006b8140) Stream removed, broadcasting: 5\n" Jan 12 22:47:15.862: INFO: stdout: "" Jan 12 22:47:15.862: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-4786 exec execpod-affinityss4rb -- /bin/sh -x -c nc -zv -t -w 2 10.96.148.176 80' Jan 12 22:47:16.152: INFO: stderr: "I0112 22:47:16.054580 512 log.go:181] (0xc0000e8000) (0xc00014a140) Create stream\nI0112 22:47:16.054657 512 log.go:181] (0xc0000e8000) (0xc00014a140) Stream added, broadcasting: 1\nI0112 22:47:16.057067 512 log.go:181] (0xc0000e8000) Reply frame received for 1\nI0112 22:47:16.057138 512 log.go:181] (0xc0000e8000) (0xc0003bae60) Create stream\nI0112 22:47:16.057161 512 log.go:181] (0xc0000e8000) (0xc0003bae60) Stream added, broadcasting: 3\nI0112 22:47:16.058301 512 log.go:181] (0xc0000e8000) Reply frame received for 3\nI0112 22:47:16.058353 512 log.go:181] (0xc0000e8000) (0xc00041adc0) Create stream\nI0112 22:47:16.058372 512 log.go:181] (0xc0000e8000) (0xc00041adc0) Stream added, broadcasting: 5\nI0112 22:47:16.059184 512 log.go:181] (0xc0000e8000) Reply frame received for 5\nI0112 22:47:16.145035 512 log.go:181] (0xc0000e8000) Data frame received for 3\nI0112 22:47:16.145058 512 log.go:181] (0xc0003bae60) (3) Data frame handling\nI0112 22:47:16.145091 512 log.go:181] (0xc0000e8000) Data frame received for 5\nI0112 22:47:16.145113 512 log.go:181] (0xc00041adc0) (5) Data frame handling\nI0112 22:47:16.145136 512 log.go:181] (0xc00041adc0) (5) Data frame sent\nI0112 22:47:16.145152 512 log.go:181] (0xc0000e8000) Data frame received for 5\nI0112 22:47:16.145163 512 log.go:181] (0xc00041adc0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.148.176 80\nConnection to 10.96.148.176 80 port [tcp/http] succeeded!\nI0112 22:47:16.146708 512 log.go:181] (0xc0000e8000) Data frame received for 1\nI0112 22:47:16.146721 512 log.go:181] (0xc00014a140) (1) Data frame handling\nI0112 22:47:16.146731 512 log.go:181] (0xc00014a140) (1) Data frame sent\nI0112 22:47:16.146742 512 log.go:181] (0xc0000e8000) (0xc00014a140) Stream removed, broadcasting: 1\nI0112 22:47:16.146752 512 log.go:181] (0xc0000e8000) Go away received\nI0112 22:47:16.147027 512 log.go:181] (0xc0000e8000) (0xc00014a140) Stream removed, broadcasting: 1\nI0112 22:47:16.147040 512 log.go:181] (0xc0000e8000) (0xc0003bae60) Stream removed, broadcasting: 3\nI0112 22:47:16.147046 512 log.go:181] (0xc0000e8000) (0xc00041adc0) Stream removed, broadcasting: 5\n" Jan 12 22:47:16.152: INFO: stdout: "" Jan 12 22:47:16.152: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-4786 exec execpod-affinityss4rb -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.148.176:80/ ; done' Jan 12 22:47:16.566: INFO: stderr: "I0112 22:47:16.394549 530 log.go:181] (0xc00001d080) (0xc000b9c1e0) Create stream\nI0112 22:47:16.394628 530 log.go:181] (0xc00001d080) (0xc000b9c1e0) Stream added, broadcasting: 1\nI0112 22:47:16.397286 530 log.go:181] (0xc00001d080) Reply frame received for 1\nI0112 22:47:16.397328 530 log.go:181] (0xc00001d080) (0xc0002740a0) Create stream\nI0112 22:47:16.397342 530 log.go:181] (0xc00001d080) (0xc0002740a0) Stream added, broadcasting: 3\nI0112 22:47:16.398239 530 log.go:181] (0xc00001d080) Reply frame received for 3\nI0112 22:47:16.398279 530 log.go:181] (0xc00001d080) (0xc00019fd60) Create stream\nI0112 22:47:16.398293 530 log.go:181] (0xc00001d080) (0xc00019fd60) Stream added, broadcasting: 5\nI0112 22:47:16.399110 530 log.go:181] (0xc00001d080) Reply frame received for 5\nI0112 22:47:16.472533 530 log.go:181] (0xc00001d080) Data frame received for 5\nI0112 22:47:16.472567 530 log.go:181] (0xc00019fd60) (5) Data frame handling\nI0112 22:47:16.472579 530 log.go:181] (0xc00019fd60) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.148.176:80/\nI0112 22:47:16.472601 530 log.go:181] (0xc00001d080) Data frame received for 3\nI0112 22:47:16.472613 530 log.go:181] (0xc0002740a0) (3) Data frame handling\nI0112 22:47:16.472623 530 log.go:181] (0xc0002740a0) (3) Data frame sent\nI0112 22:47:16.475914 530 log.go:181] (0xc00001d080) Data frame received for 3\nI0112 22:47:16.475945 530 log.go:181] (0xc0002740a0) (3) Data frame handling\nI0112 22:47:16.475967 530 log.go:181] (0xc0002740a0) (3) Data frame sent\nI0112 22:47:16.476442 530 log.go:181] (0xc00001d080) Data frame received for 5\nI0112 22:47:16.476463 530 log.go:181] (0xc00019fd60) (5) Data frame handling\nI0112 22:47:16.476481 530 log.go:181] (0xc00019fd60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.148.176:80/\nI0112 22:47:16.476492 530 log.go:181] (0xc00001d080) Data frame received for 3\nI0112 22:47:16.476501 530 log.go:181] (0xc0002740a0) (3) Data frame handling\nI0112 22:47:16.476513 530 log.go:181] (0xc0002740a0) (3) Data frame sent\nI0112 22:47:16.485661 530 log.go:181] (0xc00001d080) Data frame received for 3\nI0112 22:47:16.485674 530 log.go:181] (0xc0002740a0) (3) Data frame handling\nI0112 22:47:16.485681 530 log.go:181] (0xc0002740a0) (3) Data frame sent\nI0112 22:47:16.486354 530 log.go:181] (0xc00001d080) Data frame received for 5\nI0112 22:47:16.486382 530 log.go:181] (0xc00019fd60) (5) Data frame handling\nI0112 22:47:16.486403 530 log.go:181] (0xc00019fd60) (5) Data frame sent\n+ echo\n+ curlI0112 22:47:16.486425 530 log.go:181] (0xc00001d080) Data frame received for 5\nI0112 22:47:16.486440 530 log.go:181] (0xc00019fd60) (5) Data frame handling\nI0112 22:47:16.486450 530 log.go:181] (0xc00019fd60) (5) Data frame sent\n -q -s --connect-timeout 2 http://10.96.148.176:80/\nI0112 22:47:16.486458 530 log.go:181] (0xc00001d080) Data frame received for 3\nI0112 22:47:16.486463 530 log.go:181] (0xc0002740a0) (3) Data frame handling\nI0112 22:47:16.486468 530 log.go:181] (0xc0002740a0) (3) Data frame sent\nI0112 22:47:16.489819 530 log.go:181] (0xc00001d080) Data frame received for 3\nI0112 22:47:16.489835 530 log.go:181] (0xc0002740a0) (3) Data frame handling\nI0112 22:47:16.489845 530 log.go:181] 
(0xc0002740a0) (3) Data frame sent\nI0112 22:47:16.490674 530 log.go:181] (0xc00001d080) Data frame received for 5\nI0112 22:47:16.490705 530 log.go:181] (0xc00001d080) Data frame received for 3\nI0112 22:47:16.490795 530 log.go:181] (0xc0002740a0) (3) Data frame handling\nI0112 22:47:16.490850 530 log.go:181] (0xc0002740a0) (3) Data frame sent\nI0112 22:47:16.490879 530 log.go:181] (0xc00019fd60) (5) Data frame handling\nI0112 22:47:16.490892 530 log.go:181] (0xc00019fd60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.148.176:80/\nI0112 22:47:16.493993 530 log.go:181] (0xc00001d080) Data frame received for 3\nI0112 22:47:16.494005 530 log.go:181] (0xc0002740a0) (3) Data frame handling\nI0112 22:47:16.494016 530 log.go:181] (0xc0002740a0) (3) Data frame sent\nI0112 22:47:16.494595 530 log.go:181] (0xc00001d080) Data frame received for 3\nI0112 22:47:16.494624 530 log.go:181] (0xc0002740a0) (3) Data frame handling\nI0112 22:47:16.494632 530 log.go:181] (0xc0002740a0) (3) Data frame sent\nI0112 22:47:16.494649 530 log.go:181] (0xc00001d080) Data frame received for 5\nI0112 22:47:16.494665 530 log.go:181] (0xc00019fd60) (5) Data frame handling\nI0112 22:47:16.494686 530 log.go:181] (0xc00019fd60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.148.176:80/\nI0112 22:47:16.498716 530 log.go:181] (0xc00001d080) Data frame received for 3\nI0112 22:47:16.498733 530 log.go:181] (0xc0002740a0) (3) Data frame handling\nI0112 22:47:16.498741 530 log.go:181] (0xc0002740a0) (3) Data frame sent\nI0112 22:47:16.499066 530 log.go:181] (0xc00001d080) Data frame received for 5\nI0112 22:47:16.499087 530 log.go:181] (0xc00019fd60) (5) Data frame handling\nI0112 22:47:16.499100 530 log.go:181] (0xc00019fd60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.148.176:80/\nI0112 22:47:16.499113 530 log.go:181] (0xc00001d080) Data frame received for 3\nI0112 22:47:16.499123 530 log.go:181] (0xc0002740a0) (3) Data frame handling\nI0112 22:47:16.499131 530 log.go:181] (0xc0002740a0) (3) Data frame sent\nI0112 22:47:16.503205 530 log.go:181] (0xc00001d080) Data frame received for 3\nI0112 22:47:16.503217 530 log.go:181] (0xc0002740a0) (3) Data frame handling\nI0112 22:47:16.503224 530 log.go:181] (0xc0002740a0) (3) Data frame sent\nI0112 22:47:16.503841 530 log.go:181] (0xc00001d080) Data frame received for 3\nI0112 22:47:16.503859 530 log.go:181] (0xc00001d080) Data frame received for 5\nI0112 22:47:16.503880 530 log.go:181] (0xc00019fd60) (5) Data frame handling\nI0112 22:47:16.503891 530 log.go:181] (0xc00019fd60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.148.176:80/\nI0112 22:47:16.503905 530 log.go:181] (0xc0002740a0) (3) Data frame handling\nI0112 22:47:16.503920 530 log.go:181] (0xc0002740a0) (3) Data frame sent\nI0112 22:47:16.507637 530 log.go:181] (0xc00001d080) Data frame received for 3\nI0112 22:47:16.507653 530 log.go:181] (0xc0002740a0) (3) Data frame handling\nI0112 22:47:16.507671 530 log.go:181] (0xc0002740a0) (3) Data frame sent\nI0112 22:47:16.508129 530 log.go:181] (0xc00001d080) Data frame received for 3\nI0112 22:47:16.508144 530 log.go:181] (0xc0002740a0) (3) Data frame handling\nI0112 22:47:16.508163 530 log.go:181] (0xc00001d080) Data frame received for 5\nI0112 22:47:16.508188 530 log.go:181] (0xc00019fd60) (5) Data frame handling\nI0112 22:47:16.508201 530 log.go:181] (0xc00019fd60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.148.176:80/\nI0112 
22:47:16.508227 530 log.go:181] (0xc0002740a0) (3) Data frame sent\nI0112 22:47:16.511913 530 log.go:181] (0xc00001d080) Data frame received for 3\nI0112 22:47:16.511925 530 log.go:181] (0xc0002740a0) (3) Data frame handling\nI0112 22:47:16.511932 530 log.go:181] (0xc0002740a0) (3) Data frame sent\nI0112 22:47:16.512646 530 log.go:181] (0xc00001d080) Data frame received for 3\nI0112 22:47:16.512670 530 log.go:181] (0xc0002740a0) (3) Data frame handling\nI0112 22:47:16.512683 530 log.go:181] (0xc0002740a0) (3) Data frame sent\nI0112 22:47:16.512718 530 log.go:181] (0xc00001d080) Data frame received for 5\nI0112 22:47:16.512747 530 log.go:181] (0xc00019fd60) (5) Data frame handling\nI0112 22:47:16.512775 530 log.go:181] (0xc00019fd60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.148.176:80/\nI0112 22:47:16.516663 530 log.go:181] (0xc00001d080) Data frame received for 3\nI0112 22:47:16.516675 530 log.go:181] (0xc0002740a0) (3) Data frame handling\nI0112 22:47:16.516685 530 log.go:181] (0xc0002740a0) (3) Data frame sent\nI0112 22:47:16.517577 530 log.go:181] (0xc00001d080) Data frame received for 3\nI0112 22:47:16.517605 530 log.go:181] (0xc0002740a0) (3) Data frame handling\nI0112 22:47:16.517620 530 log.go:181] (0xc0002740a0) (3) Data frame sent\nI0112 22:47:16.517640 530 log.go:181] (0xc00001d080) Data frame received for 5\nI0112 22:47:16.517657 530 log.go:181] (0xc00019fd60) (5) Data frame handling\nI0112 22:47:16.517669 530 log.go:181] (0xc00019fd60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.148.176:80/\nI0112 22:47:16.520929 530 log.go:181] (0xc00001d080) Data frame received for 3\nI0112 22:47:16.520958 530 log.go:181] (0xc0002740a0) (3) Data frame handling\nI0112 22:47:16.520985 530 log.go:181] (0xc0002740a0) (3) Data frame sent\nI0112 22:47:16.521720 530 log.go:181] (0xc00001d080) Data frame received for 5\nI0112 22:47:16.521748 530 log.go:181] (0xc00019fd60) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.148.176:80/\nI0112 22:47:16.521777 530 log.go:181] (0xc00001d080) Data frame received for 3\nI0112 22:47:16.521806 530 log.go:181] (0xc0002740a0) (3) Data frame handling\nI0112 22:47:16.521822 530 log.go:181] (0xc0002740a0) (3) Data frame sent\nI0112 22:47:16.521842 530 log.go:181] (0xc00019fd60) (5) Data frame sent\nI0112 22:47:16.525595 530 log.go:181] (0xc00001d080) Data frame received for 3\nI0112 22:47:16.525632 530 log.go:181] (0xc0002740a0) (3) Data frame handling\nI0112 22:47:16.525682 530 log.go:181] (0xc0002740a0) (3) Data frame sent\nI0112 22:47:16.526104 530 log.go:181] (0xc00001d080) Data frame received for 3\nI0112 22:47:16.526135 530 log.go:181] (0xc0002740a0) (3) Data frame handling\nI0112 22:47:16.526169 530 log.go:181] (0xc0002740a0) (3) Data frame sent\nI0112 22:47:16.526194 530 log.go:181] (0xc00001d080) Data frame received for 5\nI0112 22:47:16.526213 530 log.go:181] (0xc00019fd60) (5) Data frame handling\nI0112 22:47:16.526232 530 log.go:181] (0xc00019fd60) (5) Data frame sent\nI0112 22:47:16.526244 530 log.go:181] (0xc00001d080) Data frame received for 5\nI0112 22:47:16.526255 530 log.go:181] (0xc00019fd60) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.148.176:80/\nI0112 22:47:16.526280 530 log.go:181] (0xc00019fd60) (5) Data frame sent\nI0112 22:47:16.531953 530 log.go:181] (0xc00001d080) Data frame received for 3\nI0112 22:47:16.531974 530 log.go:181] (0xc0002740a0) (3) Data frame handling\nI0112 22:47:16.531991 530 log.go:181] 
(0xc0002740a0) (3) Data frame sent\nI0112 22:47:16.532925 530 log.go:181] (0xc00001d080) Data frame received for 5\nI0112 22:47:16.532940 530 log.go:181] (0xc00019fd60) (5) Data frame handling\nI0112 22:47:16.532948 530 log.go:181] (0xc00019fd60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0112 22:47:16.532978 530 log.go:181] (0xc00001d080) Data frame received for 5\nI0112 22:47:16.532999 530 log.go:181] (0xc00001d080) Data frame received for 3\nI0112 22:47:16.533026 530 log.go:181] (0xc0002740a0) (3) Data frame handling\nI0112 22:47:16.533046 530 log.go:181] (0xc0002740a0) (3) Data frame sent\nI0112 22:47:16.533061 530 log.go:181] (0xc00019fd60) (5) Data frame handling\nI0112 22:47:16.533086 530 log.go:181] (0xc00019fd60) (5) Data frame sent\n http://10.96.148.176:80/\nI0112 22:47:16.537702 530 log.go:181] (0xc00001d080) Data frame received for 3\nI0112 22:47:16.537727 530 log.go:181] (0xc0002740a0) (3) Data frame handling\nI0112 22:47:16.537742 530 log.go:181] (0xc0002740a0) (3) Data frame sent\nI0112 22:47:16.538584 530 log.go:181] (0xc00001d080) Data frame received for 3\nI0112 22:47:16.538725 530 log.go:181] (0xc0002740a0) (3) Data frame handling\nI0112 22:47:16.538753 530 log.go:181] (0xc0002740a0) (3) Data frame sent\nI0112 22:47:16.538775 530 log.go:181] (0xc00001d080) Data frame received for 5\nI0112 22:47:16.538788 530 log.go:181] (0xc00019fd60) (5) Data frame handling\nI0112 22:47:16.538801 530 log.go:181] (0xc00019fd60) (5) Data frame sent\nI0112 22:47:16.538814 530 log.go:181] (0xc00001d080) Data frame received for 5\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.148.176:80/I0112 22:47:16.538828 530 log.go:181] (0xc00019fd60) (5) Data frame handling\nI0112 22:47:16.538896 530 log.go:181] (0xc00019fd60) (5) Data frame sent\n\nI0112 22:47:16.544546 530 log.go:181] (0xc00001d080) Data frame received for 3\nI0112 22:47:16.544572 530 log.go:181] (0xc0002740a0) (3) Data frame handling\nI0112 22:47:16.544593 530 log.go:181] (0xc0002740a0) (3) Data frame sent\nI0112 22:47:16.545275 530 log.go:181] (0xc00001d080) Data frame received for 3\nI0112 22:47:16.545291 530 log.go:181] (0xc0002740a0) (3) Data frame handling\nI0112 22:47:16.545299 530 log.go:181] (0xc0002740a0) (3) Data frame sent\nI0112 22:47:16.545325 530 log.go:181] (0xc00001d080) Data frame received for 5\nI0112 22:47:16.545362 530 log.go:181] (0xc00019fd60) (5) Data frame handling\nI0112 22:47:16.545407 530 log.go:181] (0xc00019fd60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.148.176:80/\nI0112 22:47:16.550281 530 log.go:181] (0xc00001d080) Data frame received for 3\nI0112 22:47:16.550300 530 log.go:181] (0xc0002740a0) (3) Data frame handling\nI0112 22:47:16.550314 530 log.go:181] (0xc0002740a0) (3) Data frame sent\nI0112 22:47:16.551095 530 log.go:181] (0xc00001d080) Data frame received for 3\nI0112 22:47:16.551124 530 log.go:181] (0xc0002740a0) (3) Data frame handling\nI0112 22:47:16.551137 530 log.go:181] (0xc0002740a0) (3) Data frame sent\nI0112 22:47:16.551150 530 log.go:181] (0xc00001d080) Data frame received for 5\nI0112 22:47:16.551157 530 log.go:181] (0xc00019fd60) (5) Data frame handling\nI0112 22:47:16.551170 530 log.go:181] (0xc00019fd60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.148.176:80/\nI0112 22:47:16.556514 530 log.go:181] (0xc00001d080) Data frame received for 3\nI0112 22:47:16.556525 530 log.go:181] (0xc0002740a0) (3) Data frame handling\nI0112 22:47:16.556542 530 log.go:181] (0xc0002740a0) (3) Data frame 
sent\nI0112 22:47:16.557309 530 log.go:181] (0xc00001d080) Data frame received for 5\nI0112 22:47:16.557321 530 log.go:181] (0xc00019fd60) (5) Data frame handling\nI0112 22:47:16.557570 530 log.go:181] (0xc00001d080) Data frame received for 3\nI0112 22:47:16.557585 530 log.go:181] (0xc0002740a0) (3) Data frame handling\nI0112 22:47:16.559562 530 log.go:181] (0xc00001d080) Data frame received for 1\nI0112 22:47:16.559591 530 log.go:181] (0xc000b9c1e0) (1) Data frame handling\nI0112 22:47:16.559604 530 log.go:181] (0xc000b9c1e0) (1) Data frame sent\nI0112 22:47:16.559627 530 log.go:181] (0xc00001d080) (0xc000b9c1e0) Stream removed, broadcasting: 1\nI0112 22:47:16.559667 530 log.go:181] (0xc00001d080) Go away received\nI0112 22:47:16.560140 530 log.go:181] (0xc00001d080) (0xc000b9c1e0) Stream removed, broadcasting: 1\nI0112 22:47:16.560172 530 log.go:181] (0xc00001d080) (0xc0002740a0) Stream removed, broadcasting: 3\nI0112 22:47:16.560185 530 log.go:181] (0xc00001d080) (0xc00019fd60) Stream removed, broadcasting: 5\n" Jan 12 22:47:16.567: INFO: stdout: "\naffinity-clusterip-kk5wz\naffinity-clusterip-kk5wz\naffinity-clusterip-kk5wz\naffinity-clusterip-kk5wz\naffinity-clusterip-kk5wz\naffinity-clusterip-kk5wz\naffinity-clusterip-kk5wz\naffinity-clusterip-kk5wz\naffinity-clusterip-kk5wz\naffinity-clusterip-kk5wz\naffinity-clusterip-kk5wz\naffinity-clusterip-kk5wz\naffinity-clusterip-kk5wz\naffinity-clusterip-kk5wz\naffinity-clusterip-kk5wz\naffinity-clusterip-kk5wz" Jan 12 22:47:16.567: INFO: Received response from host: affinity-clusterip-kk5wz Jan 12 22:47:16.567: INFO: Received response from host: affinity-clusterip-kk5wz Jan 12 22:47:16.567: INFO: Received response from host: affinity-clusterip-kk5wz Jan 12 22:47:16.567: INFO: Received response from host: affinity-clusterip-kk5wz Jan 12 22:47:16.567: INFO: Received response from host: affinity-clusterip-kk5wz Jan 12 22:47:16.567: INFO: Received response from host: affinity-clusterip-kk5wz Jan 12 22:47:16.567: INFO: Received response from host: affinity-clusterip-kk5wz Jan 12 22:47:16.567: INFO: Received response from host: affinity-clusterip-kk5wz Jan 12 22:47:16.567: INFO: Received response from host: affinity-clusterip-kk5wz Jan 12 22:47:16.567: INFO: Received response from host: affinity-clusterip-kk5wz Jan 12 22:47:16.567: INFO: Received response from host: affinity-clusterip-kk5wz Jan 12 22:47:16.567: INFO: Received response from host: affinity-clusterip-kk5wz Jan 12 22:47:16.567: INFO: Received response from host: affinity-clusterip-kk5wz Jan 12 22:47:16.567: INFO: Received response from host: affinity-clusterip-kk5wz Jan 12 22:47:16.567: INFO: Received response from host: affinity-clusterip-kk5wz Jan 12 22:47:16.567: INFO: Received response from host: affinity-clusterip-kk5wz Jan 12 22:47:16.567: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-4786, will wait for the garbage collector to delete the pods Jan 12 22:47:17.051: INFO: Deleting ReplicationController affinity-clusterip took: 41.593053ms Jan 12 22:47:17.351: INFO: Terminating ReplicationController affinity-clusterip pods took: 300.251528ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:48:20.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4786" for this suite. 
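SessionAffinity: ClientIP is why every request in the curl loop above landed on the same backend, affinity-clusterip-kk5wz. A sketch of the service shape in Go; the selector label is an assumption, since the log does not show it:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// affinityClusterIPService pins repeated requests from one client IP to a
// single backend pod behind the ClusterIP.
func affinityClusterIPService() *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-clusterip"},
		Spec: corev1.ServiceSpec{
			// Assumed selector; it must match the labels on the
			// replication controller's pods.
			Selector:        map[string]string{"name": "affinity-clusterip"},
			SessionAffinity: corev1.ServiceAffinityClientIP,
			// TargetPort is omitted, so it defaults to the service port.
			Ports: []corev1.ServicePort{{Port: 80}},
		},
	}
}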
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:79.554 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":309,"completed":35,"skipped":660,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:48:20.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 12 22:48:20.204: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ff893746-4bc6-461e-b98d-2cf41c50ee30" in namespace "projected-2048" to be "Succeeded or Failed" Jan 12 22:48:20.207: INFO: Pod "downwardapi-volume-ff893746-4bc6-461e-b98d-2cf41c50ee30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.810395ms Jan 12 22:48:22.211: INFO: Pod "downwardapi-volume-ff893746-4bc6-461e-b98d-2cf41c50ee30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007038867s Jan 12 22:48:24.216: INFO: Pod "downwardapi-volume-ff893746-4bc6-461e-b98d-2cf41c50ee30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012128757s STEP: Saw pod success Jan 12 22:48:24.216: INFO: Pod "downwardapi-volume-ff893746-4bc6-461e-b98d-2cf41c50ee30" satisfied condition "Succeeded or Failed" Jan 12 22:48:24.219: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-ff893746-4bc6-461e-b98d-2cf41c50ee30 container client-container: STEP: delete the pod Jan 12 22:48:24.280: INFO: Waiting for pod downwardapi-volume-ff893746-4bc6-461e-b98d-2cf41c50ee30 to disappear Jan 12 22:48:24.290: INFO: Pod downwardapi-volume-ff893746-4bc6-461e-b98d-2cf41c50ee30 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:48:24.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2048" for this suite. 
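The Projected downwardAPI entry above exposes the container's own CPU limit as a file through a projected downwardAPI volume. A sketch of the pattern in Go, with illustrative mount path, file name and limit value:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// cpuLimitProjectedPod surfaces the container's CPU limit as a file via a
// projected downwardAPI volume; the container just prints the file.
func cpuLimitProjectedPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-cpu-limit"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("500m"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "cpu_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.cpu",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
}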
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":309,"completed":36,"skipped":687,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:48:24.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod Jan 12 22:48:24.395: INFO: PodSpec: initContainers in spec.initContainers Jan 12 22:49:13.994: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-145f9e8b-427e-48d1-ba78-a0af0adbba8d", GenerateName:"", Namespace:"init-container-8071", SelfLink:"", UID:"fe25b861-31f5-46ee-a24f-dfa2423a19fc", ResourceVersion:"415590", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63746088504, loc:(*time.Location)(0x7962e20)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"395626880"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000dda240), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000dda2e0)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000dda340), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000dda400)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-7bl9g", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc006346000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7bl9g", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7bl9g", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7bl9g", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0061a6098), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", 
NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"leguer-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002222000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0061a6130)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0061a6150)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0061a6158), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0061a615c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc00243e030), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746088504, loc:(*time.Location)(0x7962e20)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746088504, loc:(*time.Location)(0x7962e20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746088504, loc:(*time.Location)(0x7962e20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746088504, loc:(*time.Location)(0x7962e20)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.13", PodIP:"10.244.2.195", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.195"}}, StartTime:(*v1.Time)(0xc000dda520), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0022220e0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002222150)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://9ed7ef1368d483c89c82af5a20dc9d420f1afa623b3b56013a9d915b323c3acf", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000dda660), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000dda540), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc0061a61df)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:49:13.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8071" for this suite. • [SLOW TEST:49.712 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":309,"completed":37,"skipped":704,"failed":0} [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:49:14.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating projection with secret that has name secret-emptykey-test-59203982-7ed5-40d0-823e-927b4d4a29df [AfterEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:49:14.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6023" for this suite. 
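The InitContainer entry above dumps the entire Pod object; condensed to the fields that matter (values taken from the dump), the spec it builds is sketched below. Because init1 always fails and restartPolicy is Always, init2 never runs, the app container run1 stays Waiting, and init1's RestartCount keeps climbing, which is what the dumped status shows:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// failingInitContainerPod condenses the dumped pod: a permanently failing
// init container blocks the second init container and the app container.
func failingInitContainerPod() *corev1.Pod {
	cpu := resource.MustParse("100m")
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-restart-always"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{{
				Name:  "run1",
				Image: "k8s.gcr.io/pause:3.2",
				Resources: corev1.ResourceRequirements{
					Limits:   corev1.ResourceList{corev1.ResourceCPU: cpu},
					Requests: corev1.ResourceList{corev1.ResourceCPU: cpu},
				},
			}},
		},
	}
}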
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":309,"completed":38,"skipped":704,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:49:14.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath Jan 12 22:49:20.206: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-7505 PodName:var-expansion-967f7aca-0465-40f8-8748-ee75cbad67f0 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 12 22:49:20.206: INFO: >>> kubeConfig: /root/.kube/config I0112 22:49:20.238831 7 log.go:181] (0xc0007318c0) (0xc0013c5400) Create stream I0112 22:49:20.238862 7 log.go:181] (0xc0007318c0) (0xc0013c5400) Stream added, broadcasting: 1 I0112 22:49:20.241414 7 log.go:181] (0xc0007318c0) Reply frame received for 1 I0112 22:49:20.241473 7 log.go:181] (0xc0007318c0) (0xc002a3c320) Create stream I0112 22:49:20.241498 7 log.go:181] (0xc0007318c0) (0xc002a3c320) Stream added, broadcasting: 3 I0112 22:49:20.242526 7 log.go:181] (0xc0007318c0) Reply frame received for 3 I0112 22:49:20.242586 7 log.go:181] (0xc0007318c0) (0xc0013c5680) Create stream I0112 22:49:20.242603 7 log.go:181] (0xc0007318c0) (0xc0013c5680) Stream added, broadcasting: 5 I0112 22:49:20.243335 7 log.go:181] (0xc0007318c0) Reply frame received for 5 I0112 22:49:20.444664 7 log.go:181] (0xc0007318c0) Data frame received for 5 I0112 22:49:20.444771 7 log.go:181] (0xc0013c5680) (5) Data frame handling I0112 22:49:20.444808 7 log.go:181] (0xc0007318c0) Data frame received for 3 I0112 22:49:20.444827 7 log.go:181] (0xc002a3c320) (3) Data frame handling I0112 22:49:20.446425 7 log.go:181] (0xc0007318c0) Data frame received for 1 I0112 22:49:20.446444 7 log.go:181] (0xc0013c5400) (1) Data frame handling I0112 22:49:20.446453 7 log.go:181] (0xc0013c5400) (1) Data frame sent I0112 22:49:20.446460 7 log.go:181] (0xc0007318c0) (0xc0013c5400) Stream removed, broadcasting: 1 I0112 22:49:20.446538 7 log.go:181] (0xc0007318c0) (0xc0013c5400) Stream removed, broadcasting: 1 I0112 22:49:20.446554 7 log.go:181] (0xc0007318c0) (0xc002a3c320) Stream removed, broadcasting: 3 I0112 22:49:20.446679 7 log.go:181] (0xc0007318c0) (0xc0013c5680) Stream removed, broadcasting: 5 I0112 22:49:20.446725 7 log.go:181] (0xc0007318c0) Go away received STEP: test for file in mounted path Jan 12 22:49:20.451: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-7505 PodName:var-expansion-967f7aca-0465-40f8-8748-ee75cbad67f0 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} 
Jan 12 22:49:20.451: INFO: >>> kubeConfig: /root/.kube/config I0112 22:49:20.478708 7 log.go:181] (0xc001f4c4d0) (0xc0030ff360) Create stream I0112 22:49:20.478730 7 log.go:181] (0xc001f4c4d0) (0xc0030ff360) Stream added, broadcasting: 1 I0112 22:49:20.481287 7 log.go:181] (0xc001f4c4d0) Reply frame received for 1 I0112 22:49:20.481336 7 log.go:181] (0xc001f4c4d0) (0xc0030ff400) Create stream I0112 22:49:20.481361 7 log.go:181] (0xc001f4c4d0) (0xc0030ff400) Stream added, broadcasting: 3 I0112 22:49:20.482373 7 log.go:181] (0xc001f4c4d0) Reply frame received for 3 I0112 22:49:20.482409 7 log.go:181] (0xc001f4c4d0) (0xc0013c57c0) Create stream I0112 22:49:20.482421 7 log.go:181] (0xc001f4c4d0) (0xc0013c57c0) Stream added, broadcasting: 5 I0112 22:49:20.483346 7 log.go:181] (0xc001f4c4d0) Reply frame received for 5 I0112 22:49:20.558438 7 log.go:181] (0xc001f4c4d0) Data frame received for 5 I0112 22:49:20.558552 7 log.go:181] (0xc0013c57c0) (5) Data frame handling I0112 22:49:20.558594 7 log.go:181] (0xc001f4c4d0) Data frame received for 3 I0112 22:49:20.558650 7 log.go:181] (0xc0030ff400) (3) Data frame handling I0112 22:49:20.559895 7 log.go:181] (0xc001f4c4d0) Data frame received for 1 I0112 22:49:20.559917 7 log.go:181] (0xc0030ff360) (1) Data frame handling I0112 22:49:20.559938 7 log.go:181] (0xc0030ff360) (1) Data frame sent I0112 22:49:20.559948 7 log.go:181] (0xc001f4c4d0) (0xc0030ff360) Stream removed, broadcasting: 1 I0112 22:49:20.559958 7 log.go:181] (0xc001f4c4d0) Go away received I0112 22:49:20.560135 7 log.go:181] (0xc001f4c4d0) (0xc0030ff360) Stream removed, broadcasting: 1 I0112 22:49:20.560153 7 log.go:181] (0xc001f4c4d0) (0xc0030ff400) Stream removed, broadcasting: 3 I0112 22:49:20.560160 7 log.go:181] (0xc001f4c4d0) (0xc0013c57c0) Stream removed, broadcasting: 5 STEP: updating the annotation value Jan 12 22:49:21.077: INFO: Successfully updated pod "var-expansion-967f7aca-0465-40f8-8748-ee75cbad67f0" STEP: waiting for annotated pod running STEP: deleting the pod gracefully Jan 12 22:49:21.123: INFO: Deleting pod "var-expansion-967f7aca-0465-40f8-8748-ee75cbad67f0" in namespace "var-expansion-7505" Jan 12 22:49:21.130: INFO: Wait up to 5m0s for pod "var-expansion-967f7aca-0465-40f8-8748-ee75cbad67f0" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:50:01.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7505" for this suite. 
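The ExecWithOptions entries and the "Create stream ... broadcasting" noise above are the SPDY exec transport: the framework POSTs to the pod's exec subresource and multiplexes the stdout, stderr, and error streams back, which is what each numbered "broadcasting" channel corresponds to. A rough client-go equivalent of one such exec, assuming an existing *rest.Config and clientset (the helper name runInPod is made up for illustration):

package main

import (
	"bytes"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/remotecommand"
)

// runInPod runs a shell command in one container of a pod, roughly what the
// framework's ExecWithOptions does under the hood.
func runInPod(cfg *rest.Config, client kubernetes.Interface, ns, pod, container, cmd string) (string, string, error) {
	req := client.CoreV1().RESTClient().Post().
		Resource("pods").Namespace(ns).Name(pod).SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: container,
			Command:   []string{"/bin/sh", "-c", cmd},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		return "", "", err
	}
	var stdout, stderr bytes.Buffer
	// Stream opens the stdout/stderr/error channels seen in the log output above.
	err = exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr})
	return stdout.String(), stderr.String(), err
}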
• [SLOW TEST:47.027 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":309,"completed":39,"skipped":714,"failed":0} [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:50:01.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 [It] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 12 22:50:01.268: INFO: Create a RollingUpdate DaemonSet Jan 12 22:50:01.272: INFO: Check that daemon pods launch on every node of the cluster Jan 12 22:50:01.277: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:50:01.279: INFO: Number of nodes with available pods: 0 Jan 12 22:50:01.279: INFO: Node leguer-worker is running more than one daemon pod Jan 12 22:50:02.283: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:50:02.286: INFO: Number of nodes with available pods: 0 Jan 12 22:50:02.286: INFO: Node leguer-worker is running more than one daemon pod Jan 12 22:50:03.318: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:50:03.322: INFO: Number of nodes with available pods: 0 Jan 12 22:50:03.322: INFO: Node leguer-worker is running more than one daemon pod Jan 12 22:50:04.285: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:50:04.290: INFO: Number of nodes with available pods: 0 Jan 12 22:50:04.290: INFO: Node leguer-worker is running more than one daemon pod Jan 12 22:50:05.287: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:50:05.290: INFO: Number of nodes with available pods: 1 Jan 12 22:50:05.290: INFO: Node leguer-worker is running more than one daemon pod Jan 12 22:50:06.295: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node 
Jan 12 22:50:06.305: INFO: Number of nodes with available pods: 2 Jan 12 22:50:06.305: INFO: Number of running nodes: 2, number of available pods: 2 Jan 12 22:50:06.305: INFO: Update the DaemonSet to trigger a rollout Jan 12 22:50:06.313: INFO: Updating DaemonSet daemon-set Jan 12 22:50:20.349: INFO: Roll back the DaemonSet before rollout is complete Jan 12 22:50:20.357: INFO: Updating DaemonSet daemon-set Jan 12 22:50:20.358: INFO: Make sure DaemonSet rollback is complete Jan 12 22:50:20.381: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:50:20.381: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:50:20.398: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:50:21.403: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:50:21.403: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:50:21.407: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:50:22.493: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:50:22.493: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:50:22.497: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:50:23.405: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:50:23.405: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:50:23.410: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:50:24.404: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:50:24.404: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:50:24.409: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:50:25.403: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:50:25.403: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:50:25.407: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:50:26.404: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:50:26.404: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:50:26.408: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:50:27.403: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Jan 12 22:50:27.403: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:50:27.408: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:50:28.403: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:50:28.403: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:50:28.407: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:50:29.404: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:50:29.404: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:50:29.407: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:50:30.404: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:50:30.404: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:50:30.408: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:50:31.403: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:50:31.403: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:50:31.407: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:50:32.404: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:50:32.404: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:50:32.409: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:50:33.404: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:50:33.404: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:50:33.410: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:50:34.404: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:50:34.404: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:50:34.409: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:50:35.405: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:50:35.405: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:50:35.409: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:50:36.403: INFO: Wrong image for pod: daemon-set-8g7sd. 
Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:50:36.403: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:50:36.407: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:50:37.403: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:50:37.403: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:50:37.408: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:50:38.403: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:50:38.403: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:50:38.407: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:50:39.403: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:50:39.403: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:50:39.408: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:50:40.403: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:50:40.403: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:50:40.408: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:50:41.403: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:50:41.403: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:50:41.407: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:50:42.403: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:50:42.403: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:50:42.407: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:50:43.404: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:50:43.404: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:50:43.408: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:50:44.405: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Jan 12 22:50:44.405: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:50:44.409: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:50:45.404: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:50:45.404: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:50:45.409: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:50:46.403: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:50:46.403: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:50:46.407: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:50:47.403: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:50:47.403: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:50:47.407: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:50:48.404: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:50:48.404: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:50:48.408: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:50:49.405: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:50:49.405: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:50:49.408: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:50:50.404: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:50:50.404: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:50:50.409: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:50:51.404: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:50:51.404: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:50:51.408: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:50:52.404: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:50:52.404: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:50:52.409: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:50:53.403: INFO: Wrong image for pod: daemon-set-8g7sd. 
Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:50:53.403: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:50:53.409: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:50:54.403: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:50:54.403: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:50:54.408: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:50:55.404: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:50:55.404: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:50:55.409: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:50:56.404: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:50:56.404: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:50:56.408: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:50:57.403: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:50:57.403: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:50:57.408: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:50:58.403: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:50:58.404: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:50:58.409: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:50:59.405: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:50:59.405: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:50:59.409: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:51:00.403: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:51:00.403: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:51:00.408: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:51:01.403: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Jan 12 22:51:01.403: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:51:01.407: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:51:02.407: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:51:02.407: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:51:02.411: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:51:03.403: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:51:03.403: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:51:03.407: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:51:04.403: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:51:04.403: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:51:04.408: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:51:05.403: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:51:05.403: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:51:05.406: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:51:06.403: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:51:06.403: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:51:06.408: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:51:07.409: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:51:07.409: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:51:07.413: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:51:08.403: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:51:08.403: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:51:08.407: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:51:09.405: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:51:09.405: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:51:09.410: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:51:10.403: INFO: Wrong image for pod: daemon-set-8g7sd. 
Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:51:10.403: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:51:10.407: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:51:11.403: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:51:11.403: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:51:11.407: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:51:12.403: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:51:12.403: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:51:12.408: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:51:13.404: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:51:13.405: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:51:13.409: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:51:14.403: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:51:14.403: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:51:14.408: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:51:15.404: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:51:15.404: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:51:15.408: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:51:16.403: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:51:16.403: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:51:16.408: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:51:17.415: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:51:17.415: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:51:17.418: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:51:18.403: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Jan 12 22:51:18.403: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:51:18.408: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:51:19.404: INFO: Wrong image for pod: daemon-set-8g7sd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 12 22:51:19.404: INFO: Pod daemon-set-8g7sd is not available Jan 12 22:51:19.408: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:51:20.402: INFO: Pod daemon-set-wcnhw is not available Jan 12 22:51:20.406: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6166, will wait for the garbage collector to delete the pods Jan 12 22:51:20.470: INFO: Deleting DaemonSet.extensions daemon-set took: 5.834939ms Jan 12 22:51:21.071: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.164403ms Jan 12 22:52:19.894: INFO: Number of nodes with available pods: 0 Jan 12 22:52:19.894: INFO: Number of running nodes: 0, number of available pods: 0 Jan 12 22:52:19.900: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"416096"},"items":null} Jan 12 22:52:19.904: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"416096"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:52:19.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6166" for this suite. 
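The long poll above is the suite waiting out a rollout that can never finish: the updated template points at foo:non-existent, so pod daemon-set-8g7sd stays unavailable until the rollback restores the httpd:2.4.38-alpine template, after which only the broken pod is replaced and the healthy one is left untouched. A sketch of the trigger-then-rollback sequence with client-go (function and argument names are illustrative, not the test's own helpers):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// triggerAndRollBack flips the DaemonSet image to a broken one and then back,
// mirroring the update/rollback sequence in the log above.
func triggerAndRollBack(ctx context.Context, c kubernetes.Interface, ns, name, goodImage, badImage string) error {
	setImage := func(image string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			ds, err := c.AppsV1().DaemonSets(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			ds.Spec.Template.Spec.Containers[0].Image = image
			_, err = c.AppsV1().DaemonSets(ns).Update(ctx, ds, metav1.UpdateOptions{})
			return err
		})
	}
	// "Update the DaemonSet to trigger a rollout": point the template at an
	// image that cannot be pulled, so the replacement pod never becomes available.
	if err := setImage(badImage); err != nil {
		return err
	}
	// "Roll back the DaemonSet before rollout is complete": restore the old
	// image; pods that were never replaced must not be restarted unnecessarily.
	return setImage(goodImage)
}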
• [SLOW TEST:138.761 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":309,"completed":40,"skipped":714,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:52:19.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 12 22:52:20.060: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e760f437-8fa9-48de-9fd3-3332e3b63691" in namespace "projected-747" to be "Succeeded or Failed" Jan 12 22:52:20.063: INFO: Pod "downwardapi-volume-e760f437-8fa9-48de-9fd3-3332e3b63691": Phase="Pending", Reason="", readiness=false. Elapsed: 3.041495ms Jan 12 22:52:22.068: INFO: Pod "downwardapi-volume-e760f437-8fa9-48de-9fd3-3332e3b63691": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00791789s Jan 12 22:52:24.073: INFO: Pod "downwardapi-volume-e760f437-8fa9-48de-9fd3-3332e3b63691": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012756171s STEP: Saw pod success Jan 12 22:52:24.073: INFO: Pod "downwardapi-volume-e760f437-8fa9-48de-9fd3-3332e3b63691" satisfied condition "Succeeded or Failed" Jan 12 22:52:24.077: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-e760f437-8fa9-48de-9fd3-3332e3b63691 container client-container: STEP: delete the pod Jan 12 22:52:24.106: INFO: Waiting for pod downwardapi-volume-e760f437-8fa9-48de-9fd3-3332e3b63691 to disappear Jan 12 22:52:24.123: INFO: Pod downwardapi-volume-e760f437-8fa9-48de-9fd3-3332e3b63691 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:52:24.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-747" for this suite. 
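The downward API test above mounts a projected volume whose downwardAPI source writes the container's memory request into a file that the client-container then prints; "Saw pod success" means the printed value matched the declared request. A sketch of that kind of projected volume in Go (the volume name, container name, file path, and 1Mi divisor are illustrative assumptions, not the test's exact values):

package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// projectedMemoryRequestVolume builds a projected volume with a downwardAPI
// source that exposes a container's requests.memory as a file in the pod.
func projectedMemoryRequestVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.memory",
								Divisor:       resource.MustParse("1Mi"),
							},
						}},
					},
				}},
			},
		},
	}
}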
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":309,"completed":41,"skipped":715,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:52:24.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Jan 12 22:52:24.231: INFO: >>> kubeConfig: /root/.kube/config Jan 12 22:52:27.786: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:52:40.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6541" for this suite. • [SLOW TEST:15.938 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":309,"completed":42,"skipped":750,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:52:40.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a ReplicationController STEP: waiting for RC to be added STEP: waiting for available Replicas STEP: patching ReplicationController 
STEP: waiting for RC to be modified STEP: patching ReplicationController status STEP: waiting for RC to be modified STEP: waiting for available Replicas STEP: fetching ReplicationController status STEP: patching ReplicationController scale STEP: waiting for RC to be modified STEP: waiting for ReplicationController's scale to be the max amount STEP: fetching ReplicationController; ensuring that it's patched STEP: updating ReplicationController status STEP: waiting for RC to be modified STEP: listing all ReplicationControllers STEP: checking that ReplicationController has expected values STEP: deleting ReplicationControllers by collection STEP: waiting for ReplicationController to have a DELETED watchEvent [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:52:46.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4526" for this suite. • [SLOW TEST:6.507 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":309,"completed":43,"skipped":761,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Services should test the lifecycle of an Endpoint [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:52:46.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should test the lifecycle of an Endpoint [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating an Endpoint STEP: waiting for available Endpoint STEP: listing all Endpoints STEP: updating the Endpoint STEP: fetching the Endpoint STEP: patching the Endpoint STEP: fetching the Endpoint STEP: deleting the Endpoint by Collection STEP: waiting for Endpoint deletion STEP: fetching the Endpoint [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:52:46.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-156" for this suite. 
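The Endpoint lifecycle steps above are plain CRUD against the core/v1 Endpoints resource: create, list, update, patch, then delete by collection and wait for the DELETED watch event. A condensed client-go sketch of the create/patch/delete-by-collection part (object name, IP, port, and label are illustrative):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// endpointLifecycle walks the same create/patch/delete path the Endpoint test
// exercises above.
func endpointLifecycle(ctx context.Context, c kubernetes.Interface, ns string) error {
	ep := &corev1.Endpoints{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-example-endpoint"},
		Subsets: []corev1.EndpointSubset{{
			Addresses: []corev1.EndpointAddress{{IP: "10.0.0.24"}},
			Ports:     []corev1.EndpointPort{{Name: "http", Port: 80}},
		}},
	}
	if _, err := c.CoreV1().Endpoints(ns).Create(ctx, ep, metav1.CreateOptions{}); err != nil {
		return err
	}
	// Patch a label onto the object, as in "STEP: patching the Endpoint".
	patch := []byte(`{"metadata":{"labels":{"test":"updated"}}}`)
	if _, err := c.CoreV1().Endpoints(ns).Patch(ctx, ep.Name, types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		return err
	}
	// Delete by collection, selecting on the label set above.
	return c.CoreV1().Endpoints(ns).DeleteCollection(ctx, metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: "test=updated"})
}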
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 •{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":309,"completed":44,"skipped":770,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:52:46.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jan 12 22:52:59.137: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4572 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 12 22:52:59.137: INFO: >>> kubeConfig: /root/.kube/config I0112 22:52:59.176724 7 log.go:181] (0xc0078db600) (0xc002f71a40) Create stream I0112 22:52:59.176762 7 log.go:181] (0xc0078db600) (0xc002f71a40) Stream added, broadcasting: 1 I0112 22:52:59.178597 7 log.go:181] (0xc0078db600) Reply frame received for 1 I0112 22:52:59.178645 7 log.go:181] (0xc0078db600) (0xc002f71ae0) Create stream I0112 22:52:59.178665 7 log.go:181] (0xc0078db600) (0xc002f71ae0) Stream added, broadcasting: 3 I0112 22:52:59.179528 7 log.go:181] (0xc0078db600) Reply frame received for 3 I0112 22:52:59.179568 7 log.go:181] (0xc0078db600) (0xc0020de820) Create stream I0112 22:52:59.179581 7 log.go:181] (0xc0078db600) (0xc0020de820) Stream added, broadcasting: 5 I0112 22:52:59.180403 7 log.go:181] (0xc0078db600) Reply frame received for 5 I0112 22:52:59.226685 7 log.go:181] (0xc0078db600) Data frame received for 3 I0112 22:52:59.226725 7 log.go:181] (0xc002f71ae0) (3) Data frame handling I0112 22:52:59.226748 7 log.go:181] (0xc002f71ae0) (3) Data frame sent I0112 22:52:59.226766 7 log.go:181] (0xc0078db600) Data frame received for 3 I0112 22:52:59.226780 7 log.go:181] (0xc002f71ae0) (3) Data frame handling I0112 22:52:59.226834 7 log.go:181] (0xc0078db600) Data frame received for 5 I0112 22:52:59.226860 7 log.go:181] (0xc0020de820) (5) Data frame handling I0112 22:52:59.227691 7 log.go:181] (0xc0078db600) Data frame received for 1 I0112 22:52:59.227714 7 log.go:181] (0xc002f71a40) (1) Data frame handling I0112 22:52:59.227735 7 log.go:181] (0xc002f71a40) (1) Data frame sent I0112 22:52:59.227747 7 log.go:181] (0xc0078db600) (0xc002f71a40) Stream removed, broadcasting: 1 I0112 22:52:59.227759 7 log.go:181] (0xc0078db600) Go away received I0112 22:52:59.227804 7 log.go:181] (0xc0078db600) (0xc002f71a40) Stream removed, broadcasting: 1 I0112 22:52:59.227818 7 log.go:181] (0xc0078db600) (0xc002f71ae0) Stream removed, 
broadcasting: 3 I0112 22:52:59.227824 7 log.go:181] (0xc0078db600) (0xc0020de820) Stream removed, broadcasting: 5 Jan 12 22:52:59.227: INFO: Exec stderr: "" Jan 12 22:52:59.227: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4572 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 12 22:52:59.227: INFO: >>> kubeConfig: /root/.kube/config I0112 22:52:59.305738 7 log.go:181] (0xc000141ef0) (0xc0013c4b40) Create stream I0112 22:52:59.305797 7 log.go:181] (0xc000141ef0) (0xc0013c4b40) Stream added, broadcasting: 1 I0112 22:52:59.307777 7 log.go:181] (0xc000141ef0) Reply frame received for 1 I0112 22:52:59.307835 7 log.go:181] (0xc000141ef0) (0xc002f71c20) Create stream I0112 22:52:59.307863 7 log.go:181] (0xc000141ef0) (0xc002f71c20) Stream added, broadcasting: 3 I0112 22:52:59.308642 7 log.go:181] (0xc000141ef0) Reply frame received for 3 I0112 22:52:59.308666 7 log.go:181] (0xc000141ef0) (0xc0013c4c80) Create stream I0112 22:52:59.308672 7 log.go:181] (0xc000141ef0) (0xc0013c4c80) Stream added, broadcasting: 5 I0112 22:52:59.309566 7 log.go:181] (0xc000141ef0) Reply frame received for 5 I0112 22:52:59.377063 7 log.go:181] (0xc000141ef0) Data frame received for 5 I0112 22:52:59.377107 7 log.go:181] (0xc0013c4c80) (5) Data frame handling I0112 22:52:59.377132 7 log.go:181] (0xc000141ef0) Data frame received for 3 I0112 22:52:59.377228 7 log.go:181] (0xc002f71c20) (3) Data frame handling I0112 22:52:59.377261 7 log.go:181] (0xc002f71c20) (3) Data frame sent I0112 22:52:59.377280 7 log.go:181] (0xc000141ef0) Data frame received for 3 I0112 22:52:59.377293 7 log.go:181] (0xc002f71c20) (3) Data frame handling I0112 22:52:59.378723 7 log.go:181] (0xc000141ef0) Data frame received for 1 I0112 22:52:59.378740 7 log.go:181] (0xc0013c4b40) (1) Data frame handling I0112 22:52:59.378752 7 log.go:181] (0xc0013c4b40) (1) Data frame sent I0112 22:52:59.378769 7 log.go:181] (0xc000141ef0) (0xc0013c4b40) Stream removed, broadcasting: 1 I0112 22:52:59.378841 7 log.go:181] (0xc000141ef0) Go away received I0112 22:52:59.378898 7 log.go:181] (0xc000141ef0) (0xc0013c4b40) Stream removed, broadcasting: 1 I0112 22:52:59.378920 7 log.go:181] (0xc000141ef0) (0xc002f71c20) Stream removed, broadcasting: 3 I0112 22:52:59.378934 7 log.go:181] (0xc000141ef0) (0xc0013c4c80) Stream removed, broadcasting: 5 Jan 12 22:52:59.378: INFO: Exec stderr: "" Jan 12 22:52:59.378: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4572 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 12 22:52:59.379: INFO: >>> kubeConfig: /root/.kube/config I0112 22:52:59.413283 7 log.go:181] (0xc000947ad0) (0xc0013c52c0) Create stream I0112 22:52:59.413368 7 log.go:181] (0xc000947ad0) (0xc0013c52c0) Stream added, broadcasting: 1 I0112 22:52:59.416673 7 log.go:181] (0xc000947ad0) Reply frame received for 1 I0112 22:52:59.416734 7 log.go:181] (0xc000947ad0) (0xc002d854a0) Create stream I0112 22:52:59.416760 7 log.go:181] (0xc000947ad0) (0xc002d854a0) Stream added, broadcasting: 3 I0112 22:52:59.418206 7 log.go:181] (0xc000947ad0) Reply frame received for 3 I0112 22:52:59.418229 7 log.go:181] (0xc000947ad0) (0xc0019125a0) Create stream I0112 22:52:59.418240 7 log.go:181] (0xc000947ad0) (0xc0019125a0) Stream added, broadcasting: 5 I0112 22:52:59.419197 7 log.go:181] (0xc000947ad0) Reply frame received for 5 I0112 22:52:59.483136 7 
log.go:181] (0xc000947ad0) Data frame received for 5 I0112 22:52:59.483202 7 log.go:181] (0xc0019125a0) (5) Data frame handling I0112 22:52:59.483241 7 log.go:181] (0xc000947ad0) Data frame received for 3 I0112 22:52:59.483264 7 log.go:181] (0xc002d854a0) (3) Data frame handling I0112 22:52:59.483284 7 log.go:181] (0xc002d854a0) (3) Data frame sent I0112 22:52:59.483301 7 log.go:181] (0xc000947ad0) Data frame received for 3 I0112 22:52:59.483317 7 log.go:181] (0xc002d854a0) (3) Data frame handling I0112 22:52:59.484994 7 log.go:181] (0xc000947ad0) Data frame received for 1 I0112 22:52:59.485026 7 log.go:181] (0xc0013c52c0) (1) Data frame handling I0112 22:52:59.485060 7 log.go:181] (0xc0013c52c0) (1) Data frame sent I0112 22:52:59.485087 7 log.go:181] (0xc000947ad0) (0xc0013c52c0) Stream removed, broadcasting: 1 I0112 22:52:59.485112 7 log.go:181] (0xc000947ad0) Go away received I0112 22:52:59.485200 7 log.go:181] (0xc000947ad0) (0xc0013c52c0) Stream removed, broadcasting: 1 I0112 22:52:59.485214 7 log.go:181] (0xc000947ad0) (0xc002d854a0) Stream removed, broadcasting: 3 I0112 22:52:59.485220 7 log.go:181] (0xc000947ad0) (0xc0019125a0) Stream removed, broadcasting: 5 Jan 12 22:52:59.485: INFO: Exec stderr: "" Jan 12 22:52:59.485: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4572 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 12 22:52:59.485: INFO: >>> kubeConfig: /root/.kube/config I0112 22:52:59.517881 7 log.go:181] (0xc000731e40) (0xc002d857c0) Create stream I0112 22:52:59.517905 7 log.go:181] (0xc000731e40) (0xc002d857c0) Stream added, broadcasting: 1 I0112 22:52:59.519852 7 log.go:181] (0xc000731e40) Reply frame received for 1 I0112 22:52:59.519895 7 log.go:181] (0xc000731e40) (0xc002f71cc0) Create stream I0112 22:52:59.519911 7 log.go:181] (0xc000731e40) (0xc002f71cc0) Stream added, broadcasting: 3 I0112 22:52:59.521085 7 log.go:181] (0xc000731e40) Reply frame received for 3 I0112 22:52:59.521129 7 log.go:181] (0xc000731e40) (0xc0020de8c0) Create stream I0112 22:52:59.521149 7 log.go:181] (0xc000731e40) (0xc0020de8c0) Stream added, broadcasting: 5 I0112 22:52:59.522093 7 log.go:181] (0xc000731e40) Reply frame received for 5 I0112 22:52:59.597776 7 log.go:181] (0xc000731e40) Data frame received for 5 I0112 22:52:59.597799 7 log.go:181] (0xc0020de8c0) (5) Data frame handling I0112 22:52:59.597817 7 log.go:181] (0xc000731e40) Data frame received for 3 I0112 22:52:59.597823 7 log.go:181] (0xc002f71cc0) (3) Data frame handling I0112 22:52:59.597839 7 log.go:181] (0xc002f71cc0) (3) Data frame sent I0112 22:52:59.597847 7 log.go:181] (0xc000731e40) Data frame received for 3 I0112 22:52:59.597851 7 log.go:181] (0xc002f71cc0) (3) Data frame handling I0112 22:52:59.599622 7 log.go:181] (0xc000731e40) Data frame received for 1 I0112 22:52:59.599659 7 log.go:181] (0xc002d857c0) (1) Data frame handling I0112 22:52:59.599687 7 log.go:181] (0xc002d857c0) (1) Data frame sent I0112 22:52:59.599714 7 log.go:181] (0xc000731e40) (0xc002d857c0) Stream removed, broadcasting: 1 I0112 22:52:59.599748 7 log.go:181] (0xc000731e40) Go away received I0112 22:52:59.599821 7 log.go:181] (0xc000731e40) (0xc002d857c0) Stream removed, broadcasting: 1 I0112 22:52:59.599848 7 log.go:181] (0xc000731e40) (0xc002f71cc0) Stream removed, broadcasting: 3 I0112 22:52:59.599863 7 log.go:181] (0xc000731e40) (0xc0020de8c0) Stream removed, broadcasting: 5 Jan 12 22:52:59.599: INFO: Exec stderr: "" STEP: 
Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jan 12 22:52:59.599: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4572 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 12 22:52:59.599: INFO: >>> kubeConfig: /root/.kube/config I0112 22:52:59.630889 7 log.go:181] (0xc002dfea50) (0xc0020debe0) Create stream I0112 22:52:59.630930 7 log.go:181] (0xc002dfea50) (0xc0020debe0) Stream added, broadcasting: 1 I0112 22:52:59.632819 7 log.go:181] (0xc002dfea50) Reply frame received for 1 I0112 22:52:59.633099 7 log.go:181] (0xc002dfea50) (0xc002f71d60) Create stream I0112 22:52:59.633121 7 log.go:181] (0xc002dfea50) (0xc002f71d60) Stream added, broadcasting: 3 I0112 22:52:59.634109 7 log.go:181] (0xc002dfea50) Reply frame received for 3 I0112 22:52:59.634158 7 log.go:181] (0xc002dfea50) (0xc002f71e00) Create stream I0112 22:52:59.634171 7 log.go:181] (0xc002dfea50) (0xc002f71e00) Stream added, broadcasting: 5 I0112 22:52:59.635084 7 log.go:181] (0xc002dfea50) Reply frame received for 5 I0112 22:52:59.705145 7 log.go:181] (0xc002dfea50) Data frame received for 3 I0112 22:52:59.705173 7 log.go:181] (0xc002f71d60) (3) Data frame handling I0112 22:52:59.705183 7 log.go:181] (0xc002f71d60) (3) Data frame sent I0112 22:52:59.705189 7 log.go:181] (0xc002dfea50) Data frame received for 3 I0112 22:52:59.705197 7 log.go:181] (0xc002f71d60) (3) Data frame handling I0112 22:52:59.705249 7 log.go:181] (0xc002dfea50) Data frame received for 5 I0112 22:52:59.705275 7 log.go:181] (0xc002f71e00) (5) Data frame handling I0112 22:52:59.706758 7 log.go:181] (0xc002dfea50) Data frame received for 1 I0112 22:52:59.706786 7 log.go:181] (0xc0020debe0) (1) Data frame handling I0112 22:52:59.706810 7 log.go:181] (0xc0020debe0) (1) Data frame sent I0112 22:52:59.706827 7 log.go:181] (0xc002dfea50) (0xc0020debe0) Stream removed, broadcasting: 1 I0112 22:52:59.706846 7 log.go:181] (0xc002dfea50) Go away received I0112 22:52:59.707005 7 log.go:181] (0xc002dfea50) (0xc0020debe0) Stream removed, broadcasting: 1 I0112 22:52:59.707052 7 log.go:181] (0xc002dfea50) (0xc002f71d60) Stream removed, broadcasting: 3 I0112 22:52:59.707071 7 log.go:181] (0xc002dfea50) (0xc002f71e00) Stream removed, broadcasting: 5 Jan 12 22:52:59.707: INFO: Exec stderr: "" Jan 12 22:52:59.707: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4572 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 12 22:52:59.707: INFO: >>> kubeConfig: /root/.kube/config I0112 22:52:59.744731 7 log.go:181] (0xc0006aa420) (0xc0005aa1e0) Create stream I0112 22:52:59.744756 7 log.go:181] (0xc0006aa420) (0xc0005aa1e0) Stream added, broadcasting: 1 I0112 22:52:59.746797 7 log.go:181] (0xc0006aa420) Reply frame received for 1 I0112 22:52:59.746850 7 log.go:181] (0xc0006aa420) (0xc002d85860) Create stream I0112 22:52:59.746868 7 log.go:181] (0xc0006aa420) (0xc002d85860) Stream added, broadcasting: 3 I0112 22:52:59.747724 7 log.go:181] (0xc0006aa420) Reply frame received for 3 I0112 22:52:59.747745 7 log.go:181] (0xc0006aa420) (0xc0005aa320) Create stream I0112 22:52:59.747752 7 log.go:181] (0xc0006aa420) (0xc0005aa320) Stream added, broadcasting: 5 I0112 22:52:59.748376 7 log.go:181] (0xc0006aa420) Reply frame received for 5 I0112 22:52:59.815654 7 log.go:181] (0xc0006aa420) Data frame received for 3 I0112 
22:52:59.815676 7 log.go:181] (0xc002d85860) (3) Data frame handling I0112 22:52:59.815692 7 log.go:181] (0xc0006aa420) Data frame received for 5 I0112 22:52:59.815708 7 log.go:181] (0xc0005aa320) (5) Data frame handling I0112 22:52:59.815727 7 log.go:181] (0xc002d85860) (3) Data frame sent I0112 22:52:59.815740 7 log.go:181] (0xc0006aa420) Data frame received for 3 I0112 22:52:59.815746 7 log.go:181] (0xc002d85860) (3) Data frame handling I0112 22:52:59.817253 7 log.go:181] (0xc0006aa420) Data frame received for 1 I0112 22:52:59.817266 7 log.go:181] (0xc0005aa1e0) (1) Data frame handling I0112 22:52:59.817272 7 log.go:181] (0xc0005aa1e0) (1) Data frame sent I0112 22:52:59.817280 7 log.go:181] (0xc0006aa420) (0xc0005aa1e0) Stream removed, broadcasting: 1 I0112 22:52:59.817293 7 log.go:181] (0xc0006aa420) Go away received I0112 22:52:59.817404 7 log.go:181] (0xc0006aa420) (0xc0005aa1e0) Stream removed, broadcasting: 1 I0112 22:52:59.817427 7 log.go:181] (0xc0006aa420) (0xc002d85860) Stream removed, broadcasting: 3 I0112 22:52:59.817443 7 log.go:181] (0xc0006aa420) (0xc0005aa320) Stream removed, broadcasting: 5 Jan 12 22:52:59.817: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jan 12 22:52:59.817: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4572 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 12 22:52:59.817: INFO: >>> kubeConfig: /root/.kube/config I0112 22:52:59.857580 7 log.go:181] (0xc002dfed10) (0xc0020dedc0) Create stream I0112 22:52:59.857612 7 log.go:181] (0xc002dfed10) (0xc0020dedc0) Stream added, broadcasting: 1 I0112 22:52:59.859505 7 log.go:181] (0xc002dfed10) Reply frame received for 1 I0112 22:52:59.859567 7 log.go:181] (0xc002dfed10) (0xc002d85900) Create stream I0112 22:52:59.859591 7 log.go:181] (0xc002dfed10) (0xc002d85900) Stream added, broadcasting: 3 I0112 22:52:59.860607 7 log.go:181] (0xc002dfed10) Reply frame received for 3 I0112 22:52:59.860646 7 log.go:181] (0xc002dfed10) (0xc0005aa460) Create stream I0112 22:52:59.860658 7 log.go:181] (0xc002dfed10) (0xc0005aa460) Stream added, broadcasting: 5 I0112 22:52:59.861521 7 log.go:181] (0xc002dfed10) Reply frame received for 5 I0112 22:52:59.944005 7 log.go:181] (0xc002dfed10) Data frame received for 5 I0112 22:52:59.944039 7 log.go:181] (0xc0005aa460) (5) Data frame handling I0112 22:52:59.944074 7 log.go:181] (0xc002dfed10) Data frame received for 3 I0112 22:52:59.944109 7 log.go:181] (0xc002d85900) (3) Data frame handling I0112 22:52:59.944131 7 log.go:181] (0xc002d85900) (3) Data frame sent I0112 22:52:59.944148 7 log.go:181] (0xc002dfed10) Data frame received for 3 I0112 22:52:59.944160 7 log.go:181] (0xc002d85900) (3) Data frame handling I0112 22:52:59.945884 7 log.go:181] (0xc002dfed10) Data frame received for 1 I0112 22:52:59.945907 7 log.go:181] (0xc0020dedc0) (1) Data frame handling I0112 22:52:59.945916 7 log.go:181] (0xc0020dedc0) (1) Data frame sent I0112 22:52:59.945927 7 log.go:181] (0xc002dfed10) (0xc0020dedc0) Stream removed, broadcasting: 1 I0112 22:52:59.945945 7 log.go:181] (0xc002dfed10) Go away received I0112 22:52:59.946037 7 log.go:181] (0xc002dfed10) (0xc0020dedc0) Stream removed, broadcasting: 1 I0112 22:52:59.946061 7 log.go:181] (0xc002dfed10) (0xc002d85900) Stream removed, broadcasting: 3 I0112 22:52:59.946076 7 log.go:181] (0xc002dfed10) (0xc0005aa460) Stream removed, broadcasting: 5 Jan 12 
22:52:59.946: INFO: Exec stderr: "" Jan 12 22:52:59.946: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4572 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 12 22:52:59.946: INFO: >>> kubeConfig: /root/.kube/config I0112 22:52:59.974823 7 log.go:181] (0xc002dff3f0) (0xc0020df040) Create stream I0112 22:52:59.974847 7 log.go:181] (0xc002dff3f0) (0xc0020df040) Stream added, broadcasting: 1 I0112 22:52:59.976613 7 log.go:181] (0xc002dff3f0) Reply frame received for 1 I0112 22:52:59.976700 7 log.go:181] (0xc002dff3f0) (0xc0013c5680) Create stream I0112 22:52:59.976719 7 log.go:181] (0xc002dff3f0) (0xc0013c5680) Stream added, broadcasting: 3 I0112 22:52:59.977754 7 log.go:181] (0xc002dff3f0) Reply frame received for 3 I0112 22:52:59.977798 7 log.go:181] (0xc002dff3f0) (0xc0020df0e0) Create stream I0112 22:52:59.977812 7 log.go:181] (0xc002dff3f0) (0xc0020df0e0) Stream added, broadcasting: 5 I0112 22:52:59.978704 7 log.go:181] (0xc002dff3f0) Reply frame received for 5 I0112 22:53:00.047967 7 log.go:181] (0xc002dff3f0) Data frame received for 5 I0112 22:53:00.048004 7 log.go:181] (0xc0020df0e0) (5) Data frame handling I0112 22:53:00.048043 7 log.go:181] (0xc002dff3f0) Data frame received for 3 I0112 22:53:00.048078 7 log.go:181] (0xc0013c5680) (3) Data frame handling I0112 22:53:00.048095 7 log.go:181] (0xc0013c5680) (3) Data frame sent I0112 22:53:00.048110 7 log.go:181] (0xc002dff3f0) Data frame received for 3 I0112 22:53:00.048123 7 log.go:181] (0xc0013c5680) (3) Data frame handling I0112 22:53:00.049389 7 log.go:181] (0xc002dff3f0) Data frame received for 1 I0112 22:53:00.049412 7 log.go:181] (0xc0020df040) (1) Data frame handling I0112 22:53:00.049427 7 log.go:181] (0xc0020df040) (1) Data frame sent I0112 22:53:00.049459 7 log.go:181] (0xc002dff3f0) (0xc0020df040) Stream removed, broadcasting: 1 I0112 22:53:00.049481 7 log.go:181] (0xc002dff3f0) Go away received I0112 22:53:00.049573 7 log.go:181] (0xc002dff3f0) (0xc0020df040) Stream removed, broadcasting: 1 I0112 22:53:00.049589 7 log.go:181] (0xc002dff3f0) (0xc0013c5680) Stream removed, broadcasting: 3 I0112 22:53:00.049598 7 log.go:181] (0xc002dff3f0) (0xc0020df0e0) Stream removed, broadcasting: 5 Jan 12 22:53:00.049: INFO: Exec stderr: "" Jan 12 22:53:00.049: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4572 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 12 22:53:00.049: INFO: >>> kubeConfig: /root/.kube/config I0112 22:53:00.084162 7 log.go:181] (0xc001f4c580) (0xc001912960) Create stream I0112 22:53:00.084198 7 log.go:181] (0xc001f4c580) (0xc001912960) Stream added, broadcasting: 1 I0112 22:53:00.086903 7 log.go:181] (0xc001f4c580) Reply frame received for 1 I0112 22:53:00.086940 7 log.go:181] (0xc001f4c580) (0xc0013c57c0) Create stream I0112 22:53:00.086951 7 log.go:181] (0xc001f4c580) (0xc0013c57c0) Stream added, broadcasting: 3 I0112 22:53:00.088249 7 log.go:181] (0xc001f4c580) Reply frame received for 3 I0112 22:53:00.088292 7 log.go:181] (0xc001f4c580) (0xc002d859a0) Create stream I0112 22:53:00.088301 7 log.go:181] (0xc001f4c580) (0xc002d859a0) Stream added, broadcasting: 5 I0112 22:53:00.089113 7 log.go:181] (0xc001f4c580) Reply frame received for 5 I0112 22:53:00.144366 7 log.go:181] (0xc001f4c580) Data frame received for 3 I0112 22:53:00.144405 7 log.go:181] 
(0xc0013c57c0) (3) Data frame handling I0112 22:53:00.144416 7 log.go:181] (0xc0013c57c0) (3) Data frame sent I0112 22:53:00.144427 7 log.go:181] (0xc001f4c580) Data frame received for 3 I0112 22:53:00.144437 7 log.go:181] (0xc0013c57c0) (3) Data frame handling I0112 22:53:00.144463 7 log.go:181] (0xc001f4c580) Data frame received for 5 I0112 22:53:00.144481 7 log.go:181] (0xc002d859a0) (5) Data frame handling I0112 22:53:00.146033 7 log.go:181] (0xc001f4c580) Data frame received for 1 I0112 22:53:00.146047 7 log.go:181] (0xc001912960) (1) Data frame handling I0112 22:53:00.146058 7 log.go:181] (0xc001912960) (1) Data frame sent I0112 22:53:00.146066 7 log.go:181] (0xc001f4c580) (0xc001912960) Stream removed, broadcasting: 1 I0112 22:53:00.146240 7 log.go:181] (0xc001f4c580) (0xc001912960) Stream removed, broadcasting: 1 I0112 22:53:00.146277 7 log.go:181] (0xc001f4c580) (0xc0013c57c0) Stream removed, broadcasting: 3 I0112 22:53:00.146336 7 log.go:181] (0xc001f4c580) Go away received I0112 22:53:00.146386 7 log.go:181] (0xc001f4c580) (0xc002d859a0) Stream removed, broadcasting: 5 Jan 12 22:53:00.146: INFO: Exec stderr: "" Jan 12 22:53:00.146: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4572 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 12 22:53:00.146: INFO: >>> kubeConfig: /root/.kube/config I0112 22:53:00.180551 7 log.go:181] (0xc001f4cc60) (0xc00452a140) Create stream I0112 22:53:00.180584 7 log.go:181] (0xc001f4cc60) (0xc00452a140) Stream added, broadcasting: 1 I0112 22:53:00.182658 7 log.go:181] (0xc001f4cc60) Reply frame received for 1 I0112 22:53:00.182688 7 log.go:181] (0xc001f4cc60) (0xc00452a1e0) Create stream I0112 22:53:00.182706 7 log.go:181] (0xc001f4cc60) (0xc00452a1e0) Stream added, broadcasting: 3 I0112 22:53:00.183633 7 log.go:181] (0xc001f4cc60) Reply frame received for 3 I0112 22:53:00.183678 7 log.go:181] (0xc001f4cc60) (0xc0020df180) Create stream I0112 22:53:00.183695 7 log.go:181] (0xc001f4cc60) (0xc0020df180) Stream added, broadcasting: 5 I0112 22:53:00.184744 7 log.go:181] (0xc001f4cc60) Reply frame received for 5 I0112 22:53:00.252157 7 log.go:181] (0xc001f4cc60) Data frame received for 3 I0112 22:53:00.252221 7 log.go:181] (0xc00452a1e0) (3) Data frame handling I0112 22:53:00.252245 7 log.go:181] (0xc00452a1e0) (3) Data frame sent I0112 22:53:00.252278 7 log.go:181] (0xc001f4cc60) Data frame received for 3 I0112 22:53:00.252301 7 log.go:181] (0xc00452a1e0) (3) Data frame handling I0112 22:53:00.252338 7 log.go:181] (0xc001f4cc60) Data frame received for 5 I0112 22:53:00.252375 7 log.go:181] (0xc0020df180) (5) Data frame handling I0112 22:53:00.253991 7 log.go:181] (0xc001f4cc60) Data frame received for 1 I0112 22:53:00.254024 7 log.go:181] (0xc00452a140) (1) Data frame handling I0112 22:53:00.254045 7 log.go:181] (0xc00452a140) (1) Data frame sent I0112 22:53:00.254080 7 log.go:181] (0xc001f4cc60) (0xc00452a140) Stream removed, broadcasting: 1 I0112 22:53:00.254132 7 log.go:181] (0xc001f4cc60) Go away received I0112 22:53:00.254237 7 log.go:181] (0xc001f4cc60) (0xc00452a140) Stream removed, broadcasting: 1 I0112 22:53:00.254260 7 log.go:181] (0xc001f4cc60) (0xc00452a1e0) Stream removed, broadcasting: 3 I0112 22:53:00.254279 7 log.go:181] (0xc001f4cc60) (0xc0020df180) Stream removed, broadcasting: 5 Jan 12 22:53:00.254: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:53:00.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-4572" for this suite. • [SLOW TEST:13.528 seconds] [k8s.io] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":45,"skipped":791,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:53:00.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 12 22:53:01.315: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 12 22:53:03.447: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746088781, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746088781, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746088781, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746088781, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 12 22:53:06.933: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook 
STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:53:07.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1721" for this suite. STEP: Destroying namespace "webhook-1721-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:7.509 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":309,"completed":46,"skipped":792,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:53:07.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 12 22:53:07.971: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a051f74a-7511-4f60-b4ef-7db44523b496" in namespace "downward-api-7563" to be "Succeeded or Failed" Jan 12 22:53:08.320: INFO: Pod "downwardapi-volume-a051f74a-7511-4f60-b4ef-7db44523b496": Phase="Pending", Reason="", readiness=false. Elapsed: 349.208414ms Jan 12 22:53:10.326: INFO: Pod "downwardapi-volume-a051f74a-7511-4f60-b4ef-7db44523b496": Phase="Pending", Reason="", readiness=false. Elapsed: 2.354751567s Jan 12 22:53:12.329: INFO: Pod "downwardapi-volume-a051f74a-7511-4f60-b4ef-7db44523b496": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.357819851s STEP: Saw pod success Jan 12 22:53:12.329: INFO: Pod "downwardapi-volume-a051f74a-7511-4f60-b4ef-7db44523b496" satisfied condition "Succeeded or Failed" Jan 12 22:53:12.345: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-a051f74a-7511-4f60-b4ef-7db44523b496 container client-container: STEP: delete the pod Jan 12 22:53:12.390: INFO: Waiting for pod downwardapi-volume-a051f74a-7511-4f60-b4ef-7db44523b496 to disappear Jan 12 22:53:12.421: INFO: Pod downwardapi-volume-a051f74a-7511-4f60-b4ef-7db44523b496 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:53:12.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7563" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":47,"skipped":810,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount projected service account token [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:53:12.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount projected service account token [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test service account token: Jan 12 22:53:12.545: INFO: Waiting up to 5m0s for pod "test-pod-dee4c442-e97d-4328-9610-21c457565844" in namespace "svcaccounts-2991" to be "Succeeded or Failed" Jan 12 22:53:12.549: INFO: Pod "test-pod-dee4c442-e97d-4328-9610-21c457565844": Phase="Pending", Reason="", readiness=false. Elapsed: 4.12031ms Jan 12 22:53:14.554: INFO: Pod "test-pod-dee4c442-e97d-4328-9610-21c457565844": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00906202s Jan 12 22:53:16.565: INFO: Pod "test-pod-dee4c442-e97d-4328-9610-21c457565844": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020301563s STEP: Saw pod success Jan 12 22:53:16.565: INFO: Pod "test-pod-dee4c442-e97d-4328-9610-21c457565844" satisfied condition "Succeeded or Failed" Jan 12 22:53:16.568: INFO: Trying to get logs from node leguer-worker pod test-pod-dee4c442-e97d-4328-9610-21c457565844 container agnhost-container: STEP: delete the pod Jan 12 22:53:16.601: INFO: Waiting for pod test-pod-dee4c442-e97d-4328-9610-21c457565844 to disappear Jan 12 22:53:16.615: INFO: Pod test-pod-dee4c442-e97d-4328-9610-21c457565844 no longer exists [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:53:16.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-2991" for this suite. 
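For reference, a minimal sketch of a pod that consumes a projected service account token, in the spirit of the test above; the pod name, mount path, and expirationSeconds are illustrative assumptions, not values recorded in this run:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: token-demo                  # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox:1.33
        command: ["sh", "-c", "cat /var/run/secrets/tokens/api-token"]
        volumeMounts:
        - name: token
          mountPath: /var/run/secrets/tokens
      volumes:
      - name: token
        projected:
          sources:
          - serviceAccountToken:
              path: api-token
              expirationSeconds: 3600
    EOF
    kubectl logs token-demo             # prints the mounted token once the pod has completed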
•{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":309,"completed":48,"skipped":832,"failed":0} ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:53:16.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-test-volume-1d3555d9-cb24-4407-b7f9-a623f32caab8 STEP: Creating a pod to test consume configMaps Jan 12 22:53:17.065: INFO: Waiting up to 5m0s for pod "pod-configmaps-131dd98f-e283-42f3-9145-2f80294aaec1" in namespace "configmap-9766" to be "Succeeded or Failed" Jan 12 22:53:17.077: INFO: Pod "pod-configmaps-131dd98f-e283-42f3-9145-2f80294aaec1": Phase="Pending", Reason="", readiness=false. Elapsed: 12.322462ms Jan 12 22:53:19.082: INFO: Pod "pod-configmaps-131dd98f-e283-42f3-9145-2f80294aaec1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017208696s Jan 12 22:53:21.087: INFO: Pod "pod-configmaps-131dd98f-e283-42f3-9145-2f80294aaec1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022206397s STEP: Saw pod success Jan 12 22:53:21.087: INFO: Pod "pod-configmaps-131dd98f-e283-42f3-9145-2f80294aaec1" satisfied condition "Succeeded or Failed" Jan 12 22:53:21.091: INFO: Trying to get logs from node leguer-worker pod pod-configmaps-131dd98f-e283-42f3-9145-2f80294aaec1 container agnhost-container: STEP: delete the pod Jan 12 22:53:21.190: INFO: Waiting for pod pod-configmaps-131dd98f-e283-42f3-9145-2f80294aaec1 to disappear Jan 12 22:53:21.209: INFO: Pod pod-configmaps-131dd98f-e283-42f3-9145-2f80294aaec1 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:53:21.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9766" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":49,"skipped":832,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:53:21.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should provide secure master service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:53:21.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1513" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":309,"completed":50,"skipped":851,"failed":0} SSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:53:21.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a service externalname-service with the type=ExternalName in namespace services-8699 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-8699 I0112 22:53:21.670420 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-8699, replica count: 2 I0112 22:53:24.720964 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0112 22:53:27.721197 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 12 22:53:27.721: INFO: Creating new exec pod Jan 12 22:53:32.744: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-8699 exec execpodkklpp -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Jan 12 22:53:32.975: INFO: stderr: "I0112 22:53:32.891258 548 log.go:181] (0xc00073ed10) (0xc0006ca500) Create stream\nI0112 22:53:32.891359 548 log.go:181] (0xc00073ed10) (0xc0006ca500) Stream added, broadcasting: 1\nI0112 22:53:32.893284 548 log.go:181] (0xc00073ed10) Reply frame received for 1\nI0112 22:53:32.893319 548 log.go:181] (0xc00073ed10) (0xc000b76000) Create stream\nI0112 22:53:32.893328 548 log.go:181] (0xc00073ed10) (0xc000b76000) Stream added, broadcasting: 3\nI0112 22:53:32.894210 548 log.go:181] (0xc00073ed10) Reply frame received for 3\nI0112 22:53:32.894267 548 log.go:181] (0xc00073ed10) (0xc000487900) Create stream\nI0112 22:53:32.894289 548 log.go:181] (0xc00073ed10) (0xc000487900) Stream added, broadcasting: 5\nI0112 22:53:32.895195 548 log.go:181] (0xc00073ed10) Reply frame received for 5\nI0112 22:53:32.967270 548 log.go:181] (0xc00073ed10) Data frame received for 5\nI0112 22:53:32.967309 548 log.go:181] (0xc000487900) (5) Data frame handling\nI0112 22:53:32.967332 548 log.go:181] (0xc000487900) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0112 22:53:32.967643 548 log.go:181] (0xc00073ed10) Data frame received for 5\nI0112 22:53:32.967670 548 log.go:181] (0xc000487900) (5) Data frame handling\nI0112 22:53:32.967688 548 log.go:181] (0xc000487900) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0112 22:53:32.967840 548 log.go:181] (0xc00073ed10) Data frame received for 3\nI0112 22:53:32.967865 548 log.go:181] (0xc000b76000) (3) Data frame handling\nI0112 22:53:32.968036 548 log.go:181] (0xc00073ed10) Data frame received for 5\nI0112 22:53:32.968051 548 log.go:181] (0xc000487900) (5) Data frame handling\nI0112 22:53:32.969897 548 log.go:181] (0xc00073ed10) Data frame received for 1\nI0112 22:53:32.969913 548 log.go:181] (0xc0006ca500) (1) Data frame handling\nI0112 22:53:32.969924 548 log.go:181] (0xc0006ca500) (1) Data frame sent\nI0112 22:53:32.969937 548 log.go:181] (0xc00073ed10) (0xc0006ca500) Stream removed, broadcasting: 1\nI0112 22:53:32.970065 548 log.go:181] (0xc00073ed10) Go away received\nI0112 22:53:32.970302 548 log.go:181] (0xc00073ed10) (0xc0006ca500) Stream removed, broadcasting: 1\nI0112 22:53:32.970325 548 log.go:181] (0xc00073ed10) (0xc000b76000) Stream removed, broadcasting: 3\nI0112 22:53:32.970335 548 log.go:181] (0xc00073ed10) (0xc000487900) Stream removed, broadcasting: 5\n" Jan 12 22:53:32.975: INFO: stdout: "" Jan 12 22:53:32.977: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-8699 exec execpodkklpp -- /bin/sh -x -c nc -zv -t -w 2 10.96.225.60 80' Jan 12 22:53:33.175: INFO: stderr: "I0112 22:53:33.100331 566 log.go:181] (0xc00016c0b0) (0xc000c703c0) Create stream\nI0112 22:53:33.100403 566 log.go:181] (0xc00016c0b0) (0xc000c703c0) Stream added, broadcasting: 1\nI0112 22:53:33.102256 566 log.go:181] (0xc00016c0b0) Reply frame received for 1\nI0112 22:53:33.102303 566 log.go:181] (0xc00016c0b0) (0xc000314aa0) Create stream\nI0112 22:53:33.102315 566 log.go:181] (0xc00016c0b0) (0xc000314aa0) Stream added, broadcasting: 3\nI0112 22:53:33.102982 566 log.go:181] (0xc00016c0b0) Reply frame received for 3\nI0112 22:53:33.103013 566 log.go:181] (0xc00016c0b0) (0xc000302320) Create stream\nI0112 22:53:33.103024 566 log.go:181] (0xc00016c0b0) 
(0xc000302320) Stream added, broadcasting: 5\nI0112 22:53:33.103970 566 log.go:181] (0xc00016c0b0) Reply frame received for 5\nI0112 22:53:33.166252 566 log.go:181] (0xc00016c0b0) Data frame received for 5\nI0112 22:53:33.166302 566 log.go:181] (0xc000302320) (5) Data frame handling\nI0112 22:53:33.166347 566 log.go:181] (0xc000302320) (5) Data frame sent\nI0112 22:53:33.166386 566 log.go:181] (0xc00016c0b0) Data frame received for 5\nI0112 22:53:33.166409 566 log.go:181] (0xc000302320) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.225.60 80\nConnection to 10.96.225.60 80 port [tcp/http] succeeded!\nI0112 22:53:33.166874 566 log.go:181] (0xc00016c0b0) Data frame received for 3\nI0112 22:53:33.166899 566 log.go:181] (0xc000314aa0) (3) Data frame handling\nI0112 22:53:33.169105 566 log.go:181] (0xc00016c0b0) Data frame received for 1\nI0112 22:53:33.169143 566 log.go:181] (0xc000c703c0) (1) Data frame handling\nI0112 22:53:33.169225 566 log.go:181] (0xc000c703c0) (1) Data frame sent\nI0112 22:53:33.169306 566 log.go:181] (0xc00016c0b0) (0xc000c703c0) Stream removed, broadcasting: 1\nI0112 22:53:33.169399 566 log.go:181] (0xc00016c0b0) Go away received\nI0112 22:53:33.169911 566 log.go:181] (0xc00016c0b0) (0xc000c703c0) Stream removed, broadcasting: 1\nI0112 22:53:33.169944 566 log.go:181] (0xc00016c0b0) (0xc000314aa0) Stream removed, broadcasting: 3\nI0112 22:53:33.169956 566 log.go:181] (0xc00016c0b0) (0xc000302320) Stream removed, broadcasting: 5\n" Jan 12 22:53:33.175: INFO: stdout: "" Jan 12 22:53:33.175: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-8699 exec execpodkklpp -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 32646' Jan 12 22:53:33.399: INFO: stderr: "I0112 22:53:33.312913 584 log.go:181] (0xc00003b4a0) (0xc000bcc640) Create stream\nI0112 22:53:33.312971 584 log.go:181] (0xc00003b4a0) (0xc000bcc640) Stream added, broadcasting: 1\nI0112 22:53:33.314532 584 log.go:181] (0xc00003b4a0) Reply frame received for 1\nI0112 22:53:33.314565 584 log.go:181] (0xc00003b4a0) (0xc0005c2000) Create stream\nI0112 22:53:33.314578 584 log.go:181] (0xc00003b4a0) (0xc0005c2000) Stream added, broadcasting: 3\nI0112 22:53:33.315444 584 log.go:181] (0xc00003b4a0) Reply frame received for 3\nI0112 22:53:33.315471 584 log.go:181] (0xc00003b4a0) (0xc0005c20a0) Create stream\nI0112 22:53:33.315478 584 log.go:181] (0xc00003b4a0) (0xc0005c20a0) Stream added, broadcasting: 5\nI0112 22:53:33.316205 584 log.go:181] (0xc00003b4a0) Reply frame received for 5\nI0112 22:53:33.392612 584 log.go:181] (0xc00003b4a0) Data frame received for 5\nI0112 22:53:33.392651 584 log.go:181] (0xc0005c20a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.13 32646\nConnection to 172.18.0.13 32646 port [tcp/32646] succeeded!\nI0112 22:53:33.392693 584 log.go:181] (0xc00003b4a0) Data frame received for 3\nI0112 22:53:33.392739 584 log.go:181] (0xc0005c2000) (3) Data frame handling\nI0112 22:53:33.392779 584 log.go:181] (0xc0005c20a0) (5) Data frame sent\nI0112 22:53:33.392794 584 log.go:181] (0xc00003b4a0) Data frame received for 5\nI0112 22:53:33.392804 584 log.go:181] (0xc0005c20a0) (5) Data frame handling\nI0112 22:53:33.393749 584 log.go:181] (0xc00003b4a0) Data frame received for 1\nI0112 22:53:33.393766 584 log.go:181] (0xc000bcc640) (1) Data frame handling\nI0112 22:53:33.393776 584 log.go:181] (0xc000bcc640) (1) Data frame sent\nI0112 22:53:33.393788 584 log.go:181] (0xc00003b4a0) (0xc000bcc640) Stream removed, broadcasting: 1\nI0112 
22:53:33.393798 584 log.go:181] (0xc00003b4a0) Go away received\nI0112 22:53:33.394221 584 log.go:181] (0xc00003b4a0) (0xc000bcc640) Stream removed, broadcasting: 1\nI0112 22:53:33.394242 584 log.go:181] (0xc00003b4a0) (0xc0005c2000) Stream removed, broadcasting: 3\nI0112 22:53:33.394255 584 log.go:181] (0xc00003b4a0) (0xc0005c20a0) Stream removed, broadcasting: 5\n" Jan 12 22:53:33.399: INFO: stdout: "" Jan 12 22:53:33.399: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-8699 exec execpodkklpp -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 32646' Jan 12 22:53:33.611: INFO: stderr: "I0112 22:53:33.527415 602 log.go:181] (0xc000024000) (0xc000c86000) Create stream\nI0112 22:53:33.527499 602 log.go:181] (0xc000024000) (0xc000c86000) Stream added, broadcasting: 1\nI0112 22:53:33.529564 602 log.go:181] (0xc000024000) Reply frame received for 1\nI0112 22:53:33.529609 602 log.go:181] (0xc000024000) (0xc0005d4000) Create stream\nI0112 22:53:33.529621 602 log.go:181] (0xc000024000) (0xc0005d4000) Stream added, broadcasting: 3\nI0112 22:53:33.530612 602 log.go:181] (0xc000024000) Reply frame received for 3\nI0112 22:53:33.530652 602 log.go:181] (0xc000024000) (0xc00089a3c0) Create stream\nI0112 22:53:33.530665 602 log.go:181] (0xc000024000) (0xc00089a3c0) Stream added, broadcasting: 5\nI0112 22:53:33.531644 602 log.go:181] (0xc000024000) Reply frame received for 5\nI0112 22:53:33.602320 602 log.go:181] (0xc000024000) Data frame received for 3\nI0112 22:53:33.602371 602 log.go:181] (0xc0005d4000) (3) Data frame handling\nI0112 22:53:33.602592 602 log.go:181] (0xc000024000) Data frame received for 5\nI0112 22:53:33.602621 602 log.go:181] (0xc00089a3c0) (5) Data frame handling\nI0112 22:53:33.602655 602 log.go:181] (0xc00089a3c0) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.12 32646\nConnection to 172.18.0.12 32646 port [tcp/32646] succeeded!\nI0112 22:53:33.602716 602 log.go:181] (0xc000024000) Data frame received for 5\nI0112 22:53:33.602744 602 log.go:181] (0xc00089a3c0) (5) Data frame handling\nI0112 22:53:33.606150 602 log.go:181] (0xc000024000) Data frame received for 1\nI0112 22:53:33.606164 602 log.go:181] (0xc000c86000) (1) Data frame handling\nI0112 22:53:33.606176 602 log.go:181] (0xc000c86000) (1) Data frame sent\nI0112 22:53:33.606276 602 log.go:181] (0xc000024000) (0xc000c86000) Stream removed, broadcasting: 1\nI0112 22:53:33.606290 602 log.go:181] (0xc000024000) Go away received\nI0112 22:53:33.606699 602 log.go:181] (0xc000024000) (0xc000c86000) Stream removed, broadcasting: 1\nI0112 22:53:33.606719 602 log.go:181] (0xc000024000) (0xc0005d4000) Stream removed, broadcasting: 3\nI0112 22:53:33.606730 602 log.go:181] (0xc000024000) (0xc00089a3c0) Stream removed, broadcasting: 5\n" Jan 12 22:53:33.611: INFO: stdout: "" Jan 12 22:53:33.612: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:53:33.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8699" for this suite. 
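The four probes above all follow the same pattern; a sketch of reproducing them by hand with the values recorded in this run (service name, ClusterIP, node IPs, and NodePort would of course differ in another run):

    # service name and ClusterIP, from inside the exec pod
    kubectl -n services-8699 exec execpodkklpp -- /bin/sh -x -c 'nc -zv -t -w 2 externalname-service 80'
    kubectl -n services-8699 exec execpodkklpp -- /bin/sh -x -c 'nc -zv -t -w 2 10.96.225.60 80'
    # NodePort, reached via each worker node's address
    kubectl -n services-8699 exec execpodkklpp -- /bin/sh -x -c 'nc -zv -t -w 2 172.18.0.13 32646'
    kubectl -n services-8699 exec execpodkklpp -- /bin/sh -x -c 'nc -zv -t -w 2 172.18.0.12 32646'
    # the allocated NodePort can be read back from the Service object
    kubectl -n services-8699 get svc externalname-service -o jsonpath='{.spec.ports[0].nodePort}'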
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:12.308 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":309,"completed":51,"skipped":855,"failed":0} [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:53:33.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Jan 12 22:53:38.304: INFO: Successfully updated pod "adopt-release-4zplw" STEP: Checking that the Job readopts the Pod Jan 12 22:53:38.304: INFO: Waiting up to 15m0s for pod "adopt-release-4zplw" in namespace "job-1732" to be "adopted" Jan 12 22:53:38.334: INFO: Pod "adopt-release-4zplw": Phase="Running", Reason="", readiness=true. Elapsed: 30.06002ms Jan 12 22:53:40.338: INFO: Pod "adopt-release-4zplw": Phase="Running", Reason="", readiness=true. Elapsed: 2.034777522s Jan 12 22:53:40.338: INFO: Pod "adopt-release-4zplw" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Jan 12 22:53:40.853: INFO: Successfully updated pod "adopt-release-4zplw" STEP: Checking that the Job releases the Pod Jan 12 22:53:40.853: INFO: Waiting up to 15m0s for pod "adopt-release-4zplw" in namespace "job-1732" to be "released" Jan 12 22:53:40.905: INFO: Pod "adopt-release-4zplw": Phase="Running", Reason="", readiness=true. Elapsed: 51.833416ms Jan 12 22:53:43.027: INFO: Pod "adopt-release-4zplw": Phase="Running", Reason="", readiness=true. Elapsed: 2.174161936s Jan 12 22:53:43.028: INFO: Pod "adopt-release-4zplw" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:53:43.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1732" for this suite. 
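Adoption and release are visible on the pod itself: an adopted pod carries the Job in its ownerReferences with controller=true, and loses that reference once its matching labels are removed. A sketch of inspecting this with the pod from the run above:

    kubectl -n job-1732 get pod adopt-release-4zplw --show-labels
    kubectl -n job-1732 get pod adopt-release-4zplw \
      -o jsonpath='{range .metadata.ownerReferences[*]}{.kind}/{.name} controller={.controller}{"\n"}{end}'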
• [SLOW TEST:9.332 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":309,"completed":52,"skipped":855,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:53:43.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-test-volume-802e24c2-5045-4c29-9efa-dbb6f634f8d5 STEP: Creating a pod to test consume configMaps Jan 12 22:53:43.434: INFO: Waiting up to 5m0s for pod "pod-configmaps-536d9c2f-4622-478c-af7f-d5a66021969c" in namespace "configmap-3320" to be "Succeeded or Failed" Jan 12 22:53:43.581: INFO: Pod "pod-configmaps-536d9c2f-4622-478c-af7f-d5a66021969c": Phase="Pending", Reason="", readiness=false. Elapsed: 146.887123ms Jan 12 22:53:45.587: INFO: Pod "pod-configmaps-536d9c2f-4622-478c-af7f-d5a66021969c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.152663019s Jan 12 22:53:47.595: INFO: Pod "pod-configmaps-536d9c2f-4622-478c-af7f-d5a66021969c": Phase="Running", Reason="", readiness=true. Elapsed: 4.161406787s Jan 12 22:53:49.601: INFO: Pod "pod-configmaps-536d9c2f-4622-478c-af7f-d5a66021969c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.166464891s STEP: Saw pod success Jan 12 22:53:49.601: INFO: Pod "pod-configmaps-536d9c2f-4622-478c-af7f-d5a66021969c" satisfied condition "Succeeded or Failed" Jan 12 22:53:49.603: INFO: Trying to get logs from node leguer-worker2 pod pod-configmaps-536d9c2f-4622-478c-af7f-d5a66021969c container agnhost-container: STEP: delete the pod Jan 12 22:53:49.653: INFO: Waiting for pod pod-configmaps-536d9c2f-4622-478c-af7f-d5a66021969c to disappear Jan 12 22:53:49.671: INFO: Pod pod-configmaps-536d9c2f-4622-478c-af7f-d5a66021969c no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:53:49.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3320" for this suite. 
• [SLOW TEST:6.641 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":309,"completed":53,"skipped":867,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:53:49.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 12 22:53:50.387: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 12 22:53:52.398: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746088830, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746088830, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746088830, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746088830, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 12 22:53:55.431: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, 
which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:53:55.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5617" for this suite. STEP: Destroying namespace "webhook-5617-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:6.048 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":309,"completed":54,"skipped":884,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:53:55.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0666 on tmpfs Jan 12 22:53:55.904: INFO: Waiting up to 5m0s for pod "pod-ce1e304b-fb15-434f-a3a1-e9a38190e669" in namespace "emptydir-9253" to be "Succeeded or Failed" Jan 12 22:53:55.911: INFO: Pod "pod-ce1e304b-fb15-434f-a3a1-e9a38190e669": Phase="Pending", Reason="", readiness=false. Elapsed: 7.345161ms Jan 12 22:53:57.915: INFO: Pod "pod-ce1e304b-fb15-434f-a3a1-e9a38190e669": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011285127s Jan 12 22:53:59.919: INFO: Pod "pod-ce1e304b-fb15-434f-a3a1-e9a38190e669": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.015343601s STEP: Saw pod success Jan 12 22:53:59.919: INFO: Pod "pod-ce1e304b-fb15-434f-a3a1-e9a38190e669" satisfied condition "Succeeded or Failed" Jan 12 22:53:59.923: INFO: Trying to get logs from node leguer-worker2 pod pod-ce1e304b-fb15-434f-a3a1-e9a38190e669 container test-container: STEP: delete the pod Jan 12 22:53:59.978: INFO: Waiting for pod pod-ce1e304b-fb15-434f-a3a1-e9a38190e669 to disappear Jan 12 22:53:59.988: INFO: Pod pod-ce1e304b-fb15-434f-a3a1-e9a38190e669 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:53:59.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9253" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":55,"skipped":886,"failed":0} SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:53:59.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating projection with secret that has name projected-secret-test-859ff7d3-1568-485f-af8b-0fcd31c7acaa STEP: Creating a pod to test consume secrets Jan 12 22:54:00.063: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-40eafc71-6adc-4f8f-a4e5-f0a386ed3d33" in namespace "projected-9425" to be "Succeeded or Failed" Jan 12 22:54:00.066: INFO: Pod "pod-projected-secrets-40eafc71-6adc-4f8f-a4e5-f0a386ed3d33": Phase="Pending", Reason="", readiness=false. Elapsed: 3.26164ms Jan 12 22:54:02.071: INFO: Pod "pod-projected-secrets-40eafc71-6adc-4f8f-a4e5-f0a386ed3d33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008339276s Jan 12 22:54:04.076: INFO: Pod "pod-projected-secrets-40eafc71-6adc-4f8f-a4e5-f0a386ed3d33": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012719196s STEP: Saw pod success Jan 12 22:54:04.076: INFO: Pod "pod-projected-secrets-40eafc71-6adc-4f8f-a4e5-f0a386ed3d33" satisfied condition "Succeeded or Failed" Jan 12 22:54:04.078: INFO: Trying to get logs from node leguer-worker pod pod-projected-secrets-40eafc71-6adc-4f8f-a4e5-f0a386ed3d33 container projected-secret-volume-test: STEP: delete the pod Jan 12 22:54:04.111: INFO: Waiting for pod pod-projected-secrets-40eafc71-6adc-4f8f-a4e5-f0a386ed3d33 to disappear Jan 12 22:54:04.126: INFO: Pod pod-projected-secrets-40eafc71-6adc-4f8f-a4e5-f0a386ed3d33 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:54:04.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9425" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":56,"skipped":889,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:54:04.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 12 22:54:04.219: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b63d5db6-3bf2-4fe9-89c5-e28b99665082" in namespace "projected-6206" to be "Succeeded or Failed" Jan 12 22:54:04.273: INFO: Pod "downwardapi-volume-b63d5db6-3bf2-4fe9-89c5-e28b99665082": Phase="Pending", Reason="", readiness=false. Elapsed: 53.542324ms Jan 12 22:54:06.278: INFO: Pod "downwardapi-volume-b63d5db6-3bf2-4fe9-89c5-e28b99665082": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05893212s Jan 12 22:54:08.284: INFO: Pod "downwardapi-volume-b63d5db6-3bf2-4fe9-89c5-e28b99665082": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.06480821s STEP: Saw pod success Jan 12 22:54:08.284: INFO: Pod "downwardapi-volume-b63d5db6-3bf2-4fe9-89c5-e28b99665082" satisfied condition "Succeeded or Failed" Jan 12 22:54:08.288: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-b63d5db6-3bf2-4fe9-89c5-e28b99665082 container client-container: STEP: delete the pod Jan 12 22:54:08.330: INFO: Waiting for pod downwardapi-volume-b63d5db6-3bf2-4fe9-89c5-e28b99665082 to disappear Jan 12 22:54:08.343: INFO: Pod downwardapi-volume-b63d5db6-3bf2-4fe9-89c5-e28b99665082 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:54:08.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6206" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":309,"completed":57,"skipped":892,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:54:08.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 [It] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
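The "simple DaemonSet" itself is not printed in the log; a hedged sketch of the shape such an object takes (the label, image, and command are assumptions for illustration, not the test's actual spec):

    kubectl apply -n daemonsets-8884 -f - <<'EOF'
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: daemon-set
    spec:
      selector:
        matchLabels:
          app: daemon-set            # illustrative label
      template:
        metadata:
          labels:
            app: daemon-set
        spec:
          containers:
          - name: app
            image: busybox:1.33      # illustrative image
            command: ["sleep", "3600"]
    EOF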
Jan 12 22:54:08.471: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:54:08.486: INFO: Number of nodes with available pods: 0 Jan 12 22:54:08.486: INFO: Node leguer-worker is running more than one daemon pod Jan 12 22:54:09.490: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:54:09.492: INFO: Number of nodes with available pods: 0 Jan 12 22:54:09.492: INFO: Node leguer-worker is running more than one daemon pod Jan 12 22:54:10.517: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:54:10.521: INFO: Number of nodes with available pods: 0 Jan 12 22:54:10.521: INFO: Node leguer-worker is running more than one daemon pod Jan 12 22:54:11.493: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:54:11.496: INFO: Number of nodes with available pods: 0 Jan 12 22:54:11.496: INFO: Node leguer-worker is running more than one daemon pod Jan 12 22:54:12.493: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:54:12.498: INFO: Number of nodes with available pods: 1 Jan 12 22:54:12.498: INFO: Node leguer-worker is running more than one daemon pod Jan 12 22:54:13.491: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:54:13.495: INFO: Number of nodes with available pods: 2 Jan 12 22:54:13.495: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Jan 12 22:54:13.565: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:54:13.615: INFO: Number of nodes with available pods: 1 Jan 12 22:54:13.615: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 22:54:14.621: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:54:14.836: INFO: Number of nodes with available pods: 1 Jan 12 22:54:14.836: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 22:54:15.622: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:54:15.625: INFO: Number of nodes with available pods: 1 Jan 12 22:54:15.625: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 22:54:16.645: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:54:16.648: INFO: Number of nodes with available pods: 1 Jan 12 22:54:16.648: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 22:54:17.620: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 22:54:17.623: INFO: Number of nodes with available pods: 2 Jan 12 22:54:17.623: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8884, will wait for the garbage collector to delete the pods Jan 12 22:54:17.688: INFO: Deleting DaemonSet.extensions daemon-set took: 8.251515ms Jan 12 22:54:18.289: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.358929ms Jan 12 22:55:29.892: INFO: Number of nodes with available pods: 0 Jan 12 22:55:29.892: INFO: Number of running nodes: 0, number of available pods: 0 Jan 12 22:55:29.895: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"417249"},"items":null} Jan 12 22:55:29.898: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"417249"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:55:29.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8884" for this suite. 
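For reference, a minimal client-go sketch of the kind of simple DaemonSet this spec creates and then lets the controller repair (this is not the suite's own code; the name, labels, image and namespace below are illustrative assumptions):

package main

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from a kubeconfig (path is an assumption for this sketch).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "registry.k8s.io/pause:3.2", // placeholder image, not the suite's
					}},
				},
			},
		},
	}
	// With no toleration in the pod template, the pods skip the NoSchedule-tainted
	// control-plane node, which is what the "skip checking this node" lines above report.
	created, err := cs.AppsV1().DaemonSets("default").Create(context.TODO(), ds, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created DaemonSet", created.Name)
}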
• [SLOW TEST:81.566 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":309,"completed":58,"skipped":907,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:55:29.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating projection with secret that has name projected-secret-test-map-ed495076-0d96-4ab9-a3b2-59f0dd812448 STEP: Creating a pod to test consume secrets Jan 12 22:55:30.028: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9cf6ef62-26a2-44b6-aeab-2b778183d6b6" in namespace "projected-5264" to be "Succeeded or Failed" Jan 12 22:55:30.039: INFO: Pod "pod-projected-secrets-9cf6ef62-26a2-44b6-aeab-2b778183d6b6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.832934ms Jan 12 22:55:32.044: INFO: Pod "pod-projected-secrets-9cf6ef62-26a2-44b6-aeab-2b778183d6b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015367387s Jan 12 22:55:34.050: INFO: Pod "pod-projected-secrets-9cf6ef62-26a2-44b6-aeab-2b778183d6b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021242367s STEP: Saw pod success Jan 12 22:55:34.050: INFO: Pod "pod-projected-secrets-9cf6ef62-26a2-44b6-aeab-2b778183d6b6" satisfied condition "Succeeded or Failed" Jan 12 22:55:34.053: INFO: Trying to get logs from node leguer-worker2 pod pod-projected-secrets-9cf6ef62-26a2-44b6-aeab-2b778183d6b6 container projected-secret-volume-test: STEP: delete the pod Jan 12 22:55:34.228: INFO: Waiting for pod pod-projected-secrets-9cf6ef62-26a2-44b6-aeab-2b778183d6b6 to disappear Jan 12 22:55:34.291: INFO: Pod pod-projected-secrets-9cf6ef62-26a2-44b6-aeab-2b778183d6b6 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:55:34.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5264" for this suite. 
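For reference, a minimal client-go sketch of a pod consuming a projected secret with a key-to-path mapping, the shape of resource this spec exercises (not the suite's code; the secret contents, names, image and namespace are illustrative assumptions):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "default" // the suite uses a generated "projected-*" namespace; "default" is for illustration

	// Secret whose key will be remapped to a different file name inside the projected volume.
	sec := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-demo"},
		StringData: map[string]string{"data-1": "value-1"},
	}
	if _, err := cs.CoreV1().Secrets(ns).Create(context.TODO(), sec, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secret-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox", // placeholder; the suite uses its own test image
				Command: []string{"cat", "/etc/projected-secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: sec.Name},
								// The "mapping": key data-1 appears as new-path-data-1 in the volume.
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
							},
						}},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("created pod", pod.Name)
}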
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":309,"completed":59,"skipped":940,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:55:34.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 12 22:55:34.897: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 12 22:55:36.907: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746088934, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746088934, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746088934, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746088934, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 12 22:55:39.956: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:55:40.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9531" for this suite. STEP: Destroying namespace "webhook-9531-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:5.978 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":309,"completed":60,"skipped":961,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:55:40.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service endpoint-test2 in namespace services-4161 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4161 to expose endpoints map[] Jan 12 22:55:40.435: INFO: Failed go get Endpoints object: endpoints "endpoint-test2" not found Jan 12 22:55:41.443: INFO: successfully validated that service endpoint-test2 in namespace services-4161 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-4161 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4161 to expose endpoints map[pod1:[80]] Jan 12 22:55:45.541: INFO: successfully validated that service endpoint-test2 in namespace services-4161 exposes endpoints map[pod1:[80]] STEP: Creating pod pod2 in namespace services-4161 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4161 to expose endpoints map[pod1:[80] pod2:[80]] Jan 12 22:55:49.594: INFO: successfully validated that service endpoint-test2 in namespace services-4161 exposes endpoints map[pod1:[80] pod2:[80]] STEP: Deleting pod pod1 in namespace services-4161 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4161 to expose endpoints map[pod2:[80]] Jan 12 22:55:49.705: INFO: successfully validated that service endpoint-test2 in namespace services-4161 exposes endpoints map[pod2:[80]] STEP: Deleting pod pod2 in namespace services-4161 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4161 to expose endpoints map[] Jan 12 22:55:50.046: INFO: successfully validated that service endpoint-test2 in namespace services-4161 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:55:50.076: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "services-4161" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:9.922 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":309,"completed":61,"skipped":995,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Ingress API should support creating Ingress API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:55:50.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingress STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating Ingress API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Jan 12 22:55:50.329: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Jan 12 22:55:50.333: INFO: starting watch STEP: patching STEP: updating Jan 12 22:55:50.613: INFO: waiting for watch events with expected annotations Jan 12 22:55:50.614: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:55:50.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingress-65" for this suite. 
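For reference, a hedged client-go sketch of creating and then listing an Ingress through networking.k8s.io/v1, the API group/version the steps above exercise (names, host and backend are illustrative assumptions; the spec additionally watches, patches, updates status and deletes, which follow the same client pattern):

package main

import (
	"context"
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "default" // illustrative; the suite uses a generated "ingress-*" namespace

	pathType := networkingv1.PathTypePrefix
	ing := &networkingv1.Ingress{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-ingress"},
		Spec: networkingv1.IngressSpec{
			Rules: []networkingv1.IngressRule{{
				Host: "example.com",
				IngressRuleValue: networkingv1.IngressRuleValue{
					HTTP: &networkingv1.HTTPIngressRuleValue{
						Paths: []networkingv1.HTTPIngressPath{{
							Path:     "/",
							PathType: &pathType,
							Backend: networkingv1.IngressBackend{
								Service: &networkingv1.IngressServiceBackend{
									Name: "demo-svc",
									Port: networkingv1.ServiceBackendPort{Number: 80},
								},
							},
						}},
					},
				},
			}},
		},
	}

	// "creating" followed by "cluster-wide listing", mirroring two of the steps above.
	if _, err := cs.NetworkingV1().Ingresses(ns).Create(context.TODO(), ing, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	all, err := cs.NetworkingV1().Ingresses(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ingresses in cluster:", len(all.Items))
}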
•{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":309,"completed":62,"skipped":1064,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:55:50.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service in namespace services-4736 Jan 12 22:55:52.889: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-4736 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Jan 12 22:55:53.108: INFO: stderr: "I0112 22:55:53.018269 620 log.go:181] (0xc000c8a000) (0xc0008c34a0) Create stream\nI0112 22:55:53.018341 620 log.go:181] (0xc000c8a000) (0xc0008c34a0) Stream added, broadcasting: 1\nI0112 22:55:53.020575 620 log.go:181] (0xc000c8a000) Reply frame received for 1\nI0112 22:55:53.020618 620 log.go:181] (0xc000c8a000) (0xc00019da40) Create stream\nI0112 22:55:53.020639 620 log.go:181] (0xc000c8a000) (0xc00019da40) Stream added, broadcasting: 3\nI0112 22:55:53.021623 620 log.go:181] (0xc000c8a000) Reply frame received for 3\nI0112 22:55:53.021649 620 log.go:181] (0xc000c8a000) (0xc0008c3cc0) Create stream\nI0112 22:55:53.021657 620 log.go:181] (0xc000c8a000) (0xc0008c3cc0) Stream added, broadcasting: 5\nI0112 22:55:53.022616 620 log.go:181] (0xc000c8a000) Reply frame received for 5\nI0112 22:55:53.095758 620 log.go:181] (0xc000c8a000) Data frame received for 5\nI0112 22:55:53.095783 620 log.go:181] (0xc0008c3cc0) (5) Data frame handling\nI0112 22:55:53.095796 620 log.go:181] (0xc0008c3cc0) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0112 22:55:53.100918 620 log.go:181] (0xc000c8a000) Data frame received for 3\nI0112 22:55:53.100952 620 log.go:181] (0xc00019da40) (3) Data frame handling\nI0112 22:55:53.100978 620 log.go:181] (0xc00019da40) (3) Data frame sent\nI0112 22:55:53.101345 620 log.go:181] (0xc000c8a000) Data frame received for 5\nI0112 22:55:53.101368 620 log.go:181] (0xc0008c3cc0) (5) Data frame handling\nI0112 22:55:53.101483 620 log.go:181] (0xc000c8a000) Data frame received for 3\nI0112 22:55:53.101495 620 log.go:181] (0xc00019da40) (3) Data frame handling\nI0112 22:55:53.103020 620 log.go:181] (0xc000c8a000) Data frame received for 1\nI0112 22:55:53.103038 620 log.go:181] (0xc0008c34a0) (1) Data frame handling\nI0112 22:55:53.103047 620 log.go:181] (0xc0008c34a0) (1) Data frame sent\nI0112 22:55:53.103058 620 log.go:181] (0xc000c8a000) (0xc0008c34a0) Stream removed, broadcasting: 1\nI0112 22:55:53.103072 620 log.go:181] (0xc000c8a000) Go away received\nI0112 
22:55:53.103469 620 log.go:181] (0xc000c8a000) (0xc0008c34a0) Stream removed, broadcasting: 1\nI0112 22:55:53.103484 620 log.go:181] (0xc000c8a000) (0xc00019da40) Stream removed, broadcasting: 3\nI0112 22:55:53.103492 620 log.go:181] (0xc000c8a000) (0xc0008c3cc0) Stream removed, broadcasting: 5\n" Jan 12 22:55:53.108: INFO: stdout: "iptables" Jan 12 22:55:53.108: INFO: proxyMode: iptables Jan 12 22:55:53.149: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jan 12 22:55:53.244: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-4736 STEP: creating replication controller affinity-nodeport-timeout in namespace services-4736 I0112 22:55:53.688417 7 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-4736, replica count: 3 I0112 22:55:56.738867 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0112 22:55:59.739076 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0112 22:56:02.739282 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 12 22:56:02.747: INFO: Creating new exec pod Jan 12 22:56:07.794: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-4736 exec execpod-affinityw75sb -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' Jan 12 22:56:08.021: INFO: stderr: "I0112 22:56:07.946124 638 log.go:181] (0xc0000df1e0) (0xc000abe500) Create stream\nI0112 22:56:07.946190 638 log.go:181] (0xc0000df1e0) (0xc000abe500) Stream added, broadcasting: 1\nI0112 22:56:07.951506 638 log.go:181] (0xc0000df1e0) Reply frame received for 1\nI0112 22:56:07.951548 638 log.go:181] (0xc0000df1e0) (0xc000abe5a0) Create stream\nI0112 22:56:07.951565 638 log.go:181] (0xc0000df1e0) (0xc000abe5a0) Stream added, broadcasting: 3\nI0112 22:56:07.952424 638 log.go:181] (0xc0000df1e0) Reply frame received for 3\nI0112 22:56:07.952452 638 log.go:181] (0xc0000df1e0) (0xc000207ea0) Create stream\nI0112 22:56:07.952461 638 log.go:181] (0xc0000df1e0) (0xc000207ea0) Stream added, broadcasting: 5\nI0112 22:56:07.953489 638 log.go:181] (0xc0000df1e0) Reply frame received for 5\nI0112 22:56:08.013068 638 log.go:181] (0xc0000df1e0) Data frame received for 5\nI0112 22:56:08.013094 638 log.go:181] (0xc000207ea0) (5) Data frame handling\nI0112 22:56:08.013109 638 log.go:181] (0xc000207ea0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nI0112 22:56:08.013415 638 log.go:181] (0xc0000df1e0) Data frame received for 5\nI0112 22:56:08.013434 638 log.go:181] (0xc000207ea0) (5) Data frame handling\nI0112 22:56:08.013446 638 log.go:181] (0xc000207ea0) (5) Data frame sent\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI0112 22:56:08.013773 638 log.go:181] (0xc0000df1e0) Data frame received for 5\nI0112 22:56:08.013794 638 log.go:181] (0xc000207ea0) (5) Data frame handling\nI0112 22:56:08.014197 638 log.go:181] (0xc0000df1e0) Data frame received for 3\nI0112 22:56:08.014228 638 log.go:181] (0xc000abe5a0) (3) Data frame handling\nI0112 22:56:08.015700 638 log.go:181] (0xc0000df1e0) Data frame received for 1\nI0112 22:56:08.015720 638 log.go:181] 
(0xc000abe500) (1) Data frame handling\nI0112 22:56:08.015735 638 log.go:181] (0xc000abe500) (1) Data frame sent\nI0112 22:56:08.015744 638 log.go:181] (0xc0000df1e0) (0xc000abe500) Stream removed, broadcasting: 1\nI0112 22:56:08.015755 638 log.go:181] (0xc0000df1e0) Go away received\nI0112 22:56:08.016348 638 log.go:181] (0xc0000df1e0) (0xc000abe500) Stream removed, broadcasting: 1\nI0112 22:56:08.016373 638 log.go:181] (0xc0000df1e0) (0xc000abe5a0) Stream removed, broadcasting: 3\nI0112 22:56:08.016387 638 log.go:181] (0xc0000df1e0) (0xc000207ea0) Stream removed, broadcasting: 5\n" Jan 12 22:56:08.022: INFO: stdout: "" Jan 12 22:56:08.022: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-4736 exec execpod-affinityw75sb -- /bin/sh -x -c nc -zv -t -w 2 10.96.95.175 80' Jan 12 22:56:08.253: INFO: stderr: "I0112 22:56:08.175866 656 log.go:181] (0xc0002dc000) (0xc000260280) Create stream\nI0112 22:56:08.175943 656 log.go:181] (0xc0002dc000) (0xc000260280) Stream added, broadcasting: 1\nI0112 22:56:08.177892 656 log.go:181] (0xc0002dc000) Reply frame received for 1\nI0112 22:56:08.177936 656 log.go:181] (0xc0002dc000) (0xc000be41e0) Create stream\nI0112 22:56:08.177949 656 log.go:181] (0xc0002dc000) (0xc000be41e0) Stream added, broadcasting: 3\nI0112 22:56:08.178879 656 log.go:181] (0xc0002dc000) Reply frame received for 3\nI0112 22:56:08.178922 656 log.go:181] (0xc0002dc000) (0xc000aa8000) Create stream\nI0112 22:56:08.178938 656 log.go:181] (0xc0002dc000) (0xc000aa8000) Stream added, broadcasting: 5\nI0112 22:56:08.179714 656 log.go:181] (0xc0002dc000) Reply frame received for 5\nI0112 22:56:08.243969 656 log.go:181] (0xc0002dc000) Data frame received for 5\nI0112 22:56:08.244076 656 log.go:181] (0xc000aa8000) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.95.175 80\nConnection to 10.96.95.175 80 port [tcp/http] succeeded!\nI0112 22:56:08.244109 656 log.go:181] (0xc0002dc000) Data frame received for 3\nI0112 22:56:08.244155 656 log.go:181] (0xc000be41e0) (3) Data frame handling\nI0112 22:56:08.244177 656 log.go:181] (0xc000aa8000) (5) Data frame sent\nI0112 22:56:08.244187 656 log.go:181] (0xc0002dc000) Data frame received for 5\nI0112 22:56:08.244192 656 log.go:181] (0xc000aa8000) (5) Data frame handling\nI0112 22:56:08.246174 656 log.go:181] (0xc0002dc000) Data frame received for 1\nI0112 22:56:08.246192 656 log.go:181] (0xc000260280) (1) Data frame handling\nI0112 22:56:08.246223 656 log.go:181] (0xc000260280) (1) Data frame sent\nI0112 22:56:08.246241 656 log.go:181] (0xc0002dc000) (0xc000260280) Stream removed, broadcasting: 1\nI0112 22:56:08.246252 656 log.go:181] (0xc0002dc000) Go away received\nI0112 22:56:08.246771 656 log.go:181] (0xc0002dc000) (0xc000260280) Stream removed, broadcasting: 1\nI0112 22:56:08.246807 656 log.go:181] (0xc0002dc000) (0xc000be41e0) Stream removed, broadcasting: 3\nI0112 22:56:08.246823 656 log.go:181] (0xc0002dc000) (0xc000aa8000) Stream removed, broadcasting: 5\n" Jan 12 22:56:08.253: INFO: stdout: "" Jan 12 22:56:08.253: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-4736 exec execpod-affinityw75sb -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 32474' Jan 12 22:56:08.439: INFO: stderr: "I0112 22:56:08.372555 674 log.go:181] (0xc000c8a210) (0xc000c841e0) Create stream\nI0112 22:56:08.372606 674 log.go:181] (0xc000c8a210) (0xc000c841e0) Stream added, broadcasting: 1\nI0112 22:56:08.375437 674 
log.go:181] (0xc000c8a210) Reply frame received for 1\nI0112 22:56:08.375527 674 log.go:181] (0xc000c8a210) (0xc000c84280) Create stream\nI0112 22:56:08.375551 674 log.go:181] (0xc000c8a210) (0xc000c84280) Stream added, broadcasting: 3\nI0112 22:56:08.377596 674 log.go:181] (0xc000c8a210) Reply frame received for 3\nI0112 22:56:08.377636 674 log.go:181] (0xc000c8a210) (0xc00063c000) Create stream\nI0112 22:56:08.377646 674 log.go:181] (0xc000c8a210) (0xc00063c000) Stream added, broadcasting: 5\nI0112 22:56:08.378681 674 log.go:181] (0xc000c8a210) Reply frame received for 5\nI0112 22:56:08.431804 674 log.go:181] (0xc000c8a210) Data frame received for 3\nI0112 22:56:08.431846 674 log.go:181] (0xc000c84280) (3) Data frame handling\nI0112 22:56:08.431870 674 log.go:181] (0xc000c8a210) Data frame received for 5\nI0112 22:56:08.431881 674 log.go:181] (0xc00063c000) (5) Data frame handling\nI0112 22:56:08.431893 674 log.go:181] (0xc00063c000) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.13 32474\nConnection to 172.18.0.13 32474 port [tcp/32474] succeeded!\nI0112 22:56:08.432056 674 log.go:181] (0xc000c8a210) Data frame received for 5\nI0112 22:56:08.432090 674 log.go:181] (0xc00063c000) (5) Data frame handling\nI0112 22:56:08.433973 674 log.go:181] (0xc000c8a210) Data frame received for 1\nI0112 22:56:08.434010 674 log.go:181] (0xc000c841e0) (1) Data frame handling\nI0112 22:56:08.434053 674 log.go:181] (0xc000c841e0) (1) Data frame sent\nI0112 22:56:08.434074 674 log.go:181] (0xc000c8a210) (0xc000c841e0) Stream removed, broadcasting: 1\nI0112 22:56:08.434093 674 log.go:181] (0xc000c8a210) Go away received\nI0112 22:56:08.434512 674 log.go:181] (0xc000c8a210) (0xc000c841e0) Stream removed, broadcasting: 1\nI0112 22:56:08.434542 674 log.go:181] (0xc000c8a210) (0xc000c84280) Stream removed, broadcasting: 3\nI0112 22:56:08.434568 674 log.go:181] (0xc000c8a210) (0xc00063c000) Stream removed, broadcasting: 5\n" Jan 12 22:56:08.440: INFO: stdout: "" Jan 12 22:56:08.440: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-4736 exec execpod-affinityw75sb -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 32474' Jan 12 22:56:08.640: INFO: stderr: "I0112 22:56:08.571723 693 log.go:181] (0xc0000aa000) (0xc000d16000) Create stream\nI0112 22:56:08.571791 693 log.go:181] (0xc0000aa000) (0xc000d16000) Stream added, broadcasting: 1\nI0112 22:56:08.574433 693 log.go:181] (0xc0000aa000) Reply frame received for 1\nI0112 22:56:08.574470 693 log.go:181] (0xc0000aa000) (0xc00019fd60) Create stream\nI0112 22:56:08.574482 693 log.go:181] (0xc0000aa000) (0xc00019fd60) Stream added, broadcasting: 3\nI0112 22:56:08.575499 693 log.go:181] (0xc0000aa000) Reply frame received for 3\nI0112 22:56:08.575539 693 log.go:181] (0xc0000aa000) (0xc000b281e0) Create stream\nI0112 22:56:08.575553 693 log.go:181] (0xc0000aa000) (0xc000b281e0) Stream added, broadcasting: 5\nI0112 22:56:08.576621 693 log.go:181] (0xc0000aa000) Reply frame received for 5\nI0112 22:56:08.633107 693 log.go:181] (0xc0000aa000) Data frame received for 3\nI0112 22:56:08.633153 693 log.go:181] (0xc00019fd60) (3) Data frame handling\nI0112 22:56:08.633175 693 log.go:181] (0xc0000aa000) Data frame received for 5\nI0112 22:56:08.633193 693 log.go:181] (0xc000b281e0) (5) Data frame handling\nI0112 22:56:08.633212 693 log.go:181] (0xc000b281e0) (5) Data frame sent\nI0112 22:56:08.633222 693 log.go:181] (0xc0000aa000) Data frame received for 5\n+ nc -zv -t -w 2 172.18.0.12 32474\nConnection to 
172.18.0.12 32474 port [tcp/32474] succeeded!\nI0112 22:56:08.633231 693 log.go:181] (0xc000b281e0) (5) Data frame handling\nI0112 22:56:08.634591 693 log.go:181] (0xc0000aa000) Data frame received for 1\nI0112 22:56:08.634614 693 log.go:181] (0xc000d16000) (1) Data frame handling\nI0112 22:56:08.634625 693 log.go:181] (0xc000d16000) (1) Data frame sent\nI0112 22:56:08.634637 693 log.go:181] (0xc0000aa000) (0xc000d16000) Stream removed, broadcasting: 1\nI0112 22:56:08.634731 693 log.go:181] (0xc0000aa000) Go away received\nI0112 22:56:08.635043 693 log.go:181] (0xc0000aa000) (0xc000d16000) Stream removed, broadcasting: 1\nI0112 22:56:08.635070 693 log.go:181] (0xc0000aa000) (0xc00019fd60) Stream removed, broadcasting: 3\nI0112 22:56:08.635086 693 log.go:181] (0xc0000aa000) (0xc000b281e0) Stream removed, broadcasting: 5\n" Jan 12 22:56:08.640: INFO: stdout: "" Jan 12 22:56:08.641: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-4736 exec execpod-affinityw75sb -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.13:32474/ ; done' Jan 12 22:56:08.938: INFO: stderr: "I0112 22:56:08.772317 711 log.go:181] (0xc000d18c60) (0xc00069cdc0) Create stream\nI0112 22:56:08.772368 711 log.go:181] (0xc000d18c60) (0xc00069cdc0) Stream added, broadcasting: 1\nI0112 22:56:08.774757 711 log.go:181] (0xc000d18c60) Reply frame received for 1\nI0112 22:56:08.774812 711 log.go:181] (0xc000d18c60) (0xc00021e1e0) Create stream\nI0112 22:56:08.774828 711 log.go:181] (0xc000d18c60) (0xc00021e1e0) Stream added, broadcasting: 3\nI0112 22:56:08.775733 711 log.go:181] (0xc000d18c60) Reply frame received for 3\nI0112 22:56:08.775760 711 log.go:181] (0xc000d18c60) (0xc00069d400) Create stream\nI0112 22:56:08.775779 711 log.go:181] (0xc000d18c60) (0xc00069d400) Stream added, broadcasting: 5\nI0112 22:56:08.776659 711 log.go:181] (0xc000d18c60) Reply frame received for 5\nI0112 22:56:08.825579 711 log.go:181] (0xc000d18c60) Data frame received for 5\nI0112 22:56:08.825605 711 log.go:181] (0xc00069d400) (5) Data frame handling\nI0112 22:56:08.825613 711 log.go:181] (0xc00069d400) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32474/\nI0112 22:56:08.825626 711 log.go:181] (0xc000d18c60) Data frame received for 3\nI0112 22:56:08.825631 711 log.go:181] (0xc00021e1e0) (3) Data frame handling\nI0112 22:56:08.825640 711 log.go:181] (0xc00021e1e0) (3) Data frame sent\nI0112 22:56:08.829423 711 log.go:181] (0xc000d18c60) Data frame received for 3\nI0112 22:56:08.829448 711 log.go:181] (0xc00021e1e0) (3) Data frame handling\nI0112 22:56:08.829466 711 log.go:181] (0xc00021e1e0) (3) Data frame sent\nI0112 22:56:08.829692 711 log.go:181] (0xc000d18c60) Data frame received for 3\nI0112 22:56:08.829721 711 log.go:181] (0xc00021e1e0) (3) Data frame handling\nI0112 22:56:08.829733 711 log.go:181] (0xc00021e1e0) (3) Data frame sent\nI0112 22:56:08.829760 711 log.go:181] (0xc000d18c60) Data frame received for 5\nI0112 22:56:08.829781 711 log.go:181] (0xc00069d400) (5) Data frame handling\nI0112 22:56:08.829801 711 log.go:181] (0xc00069d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32474/\nI0112 22:56:08.835972 711 log.go:181] (0xc000d18c60) Data frame received for 3\nI0112 22:56:08.836006 711 log.go:181] (0xc00021e1e0) (3) Data frame handling\nI0112 22:56:08.836039 711 log.go:181] (0xc00021e1e0) (3) Data frame sent\nI0112 22:56:08.836476 
711 log.go:181] (0xc000d18c60) Data frame received for 3\nI0112 22:56:08.836499 711 log.go:181] (0xc00021e1e0) (3) Data frame handling\nI0112 22:56:08.836516 711 log.go:181] (0xc00021e1e0) (3) Data frame sent\nI0112 22:56:08.836553 711 log.go:181] (0xc000d18c60) Data frame received for 5\nI0112 22:56:08.836567 711 log.go:181] (0xc00069d400) (5) Data frame handling\nI0112 22:56:08.836588 711 log.go:181] (0xc00069d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32474/\nI0112 22:56:08.842296 711 log.go:181] (0xc000d18c60) Data frame received for 3\nI0112 22:56:08.842328 711 log.go:181] (0xc00021e1e0) (3) Data frame handling\nI0112 22:56:08.842361 711 log.go:181] (0xc00021e1e0) (3) Data frame sent\nI0112 22:56:08.842884 711 log.go:181] (0xc000d18c60) Data frame received for 3\nI0112 22:56:08.842910 711 log.go:181] (0xc00021e1e0) (3) Data frame handling\nI0112 22:56:08.842947 711 log.go:181] (0xc00021e1e0) (3) Data frame sent\nI0112 22:56:08.842975 711 log.go:181] (0xc000d18c60) Data frame received for 5\nI0112 22:56:08.842988 711 log.go:181] (0xc00069d400) (5) Data frame handling\nI0112 22:56:08.842997 711 log.go:181] (0xc00069d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32474/\nI0112 22:56:08.848805 711 log.go:181] (0xc000d18c60) Data frame received for 3\nI0112 22:56:08.848823 711 log.go:181] (0xc00021e1e0) (3) Data frame handling\nI0112 22:56:08.848913 711 log.go:181] (0xc00021e1e0) (3) Data frame sent\nI0112 22:56:08.849385 711 log.go:181] (0xc000d18c60) Data frame received for 5\nI0112 22:56:08.849425 711 log.go:181] (0xc00069d400) (5) Data frame handling\nI0112 22:56:08.849439 711 log.go:181] (0xc00069d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32474/\nI0112 22:56:08.849458 711 log.go:181] (0xc000d18c60) Data frame received for 3\nI0112 22:56:08.849474 711 log.go:181] (0xc00021e1e0) (3) Data frame handling\nI0112 22:56:08.849514 711 log.go:181] (0xc00021e1e0) (3) Data frame sent\nI0112 22:56:08.854797 711 log.go:181] (0xc000d18c60) Data frame received for 3\nI0112 22:56:08.854839 711 log.go:181] (0xc00021e1e0) (3) Data frame handling\nI0112 22:56:08.854866 711 log.go:181] (0xc00021e1e0) (3) Data frame sent\nI0112 22:56:08.855466 711 log.go:181] (0xc000d18c60) Data frame received for 5\nI0112 22:56:08.855501 711 log.go:181] (0xc000d18c60) Data frame received for 3\nI0112 22:56:08.855541 711 log.go:181] (0xc00021e1e0) (3) Data frame handling\nI0112 22:56:08.855566 711 log.go:181] (0xc00021e1e0) (3) Data frame sent\nI0112 22:56:08.855596 711 log.go:181] (0xc00069d400) (5) Data frame handling\nI0112 22:56:08.855622 711 log.go:181] (0xc00069d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32474/\nI0112 22:56:08.861657 711 log.go:181] (0xc000d18c60) Data frame received for 3\nI0112 22:56:08.861680 711 log.go:181] (0xc00021e1e0) (3) Data frame handling\nI0112 22:56:08.861699 711 log.go:181] (0xc00021e1e0) (3) Data frame sent\nI0112 22:56:08.862335 711 log.go:181] (0xc000d18c60) Data frame received for 3\nI0112 22:56:08.862365 711 log.go:181] (0xc00021e1e0) (3) Data frame handling\nI0112 22:56:08.862377 711 log.go:181] (0xc00021e1e0) (3) Data frame sent\nI0112 22:56:08.862388 711 log.go:181] (0xc000d18c60) Data frame received for 5\nI0112 22:56:08.862397 711 log.go:181] (0xc00069d400) (5) Data frame handling\nI0112 22:56:08.862406 711 log.go:181] (0xc00069d400) (5) Data frame sent\nI0112 22:56:08.862414 711 log.go:181] (0xc000d18c60) 
Data frame received for 5\nI0112 22:56:08.862423 711 log.go:181] (0xc00069d400) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32474/\nI0112 22:56:08.862444 711 log.go:181] (0xc00069d400) (5) Data frame sent\nI0112 22:56:08.869770 711 log.go:181] (0xc000d18c60) Data frame received for 3\nI0112 22:56:08.869786 711 log.go:181] (0xc00021e1e0) (3) Data frame handling\nI0112 22:56:08.869794 711 log.go:181] (0xc00021e1e0) (3) Data frame sent\nI0112 22:56:08.870663 711 log.go:181] (0xc000d18c60) Data frame received for 5\nI0112 22:56:08.870680 711 log.go:181] (0xc00069d400) (5) Data frame handling\nI0112 22:56:08.870688 711 log.go:181] (0xc00069d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32474/\nI0112 22:56:08.870715 711 log.go:181] (0xc000d18c60) Data frame received for 3\nI0112 22:56:08.870742 711 log.go:181] (0xc00021e1e0) (3) Data frame handling\nI0112 22:56:08.870761 711 log.go:181] (0xc00021e1e0) (3) Data frame sent\nI0112 22:56:08.875462 711 log.go:181] (0xc000d18c60) Data frame received for 3\nI0112 22:56:08.875485 711 log.go:181] (0xc00021e1e0) (3) Data frame handling\nI0112 22:56:08.875505 711 log.go:181] (0xc00021e1e0) (3) Data frame sent\nI0112 22:56:08.876428 711 log.go:181] (0xc000d18c60) Data frame received for 5\nI0112 22:56:08.876461 711 log.go:181] (0xc00069d400) (5) Data frame handling\nI0112 22:56:08.876471 711 log.go:181] (0xc00069d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32474/\nI0112 22:56:08.876501 711 log.go:181] (0xc000d18c60) Data frame received for 3\nI0112 22:56:08.876523 711 log.go:181] (0xc00021e1e0) (3) Data frame handling\nI0112 22:56:08.876538 711 log.go:181] (0xc00021e1e0) (3) Data frame sent\nI0112 22:56:08.880801 711 log.go:181] (0xc000d18c60) Data frame received for 3\nI0112 22:56:08.880829 711 log.go:181] (0xc00021e1e0) (3) Data frame handling\nI0112 22:56:08.880969 711 log.go:181] (0xc00021e1e0) (3) Data frame sent\nI0112 22:56:08.881941 711 log.go:181] (0xc000d18c60) Data frame received for 3\nI0112 22:56:08.881974 711 log.go:181] (0xc000d18c60) Data frame received for 5\nI0112 22:56:08.882007 711 log.go:181] (0xc00069d400) (5) Data frame handling\nI0112 22:56:08.882051 711 log.go:181] (0xc00069d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32474/\nI0112 22:56:08.882082 711 log.go:181] (0xc00021e1e0) (3) Data frame handling\nI0112 22:56:08.882093 711 log.go:181] (0xc00021e1e0) (3) Data frame sent\nI0112 22:56:08.888461 711 log.go:181] (0xc000d18c60) Data frame received for 3\nI0112 22:56:08.888484 711 log.go:181] (0xc00021e1e0) (3) Data frame handling\nI0112 22:56:08.888503 711 log.go:181] (0xc00021e1e0) (3) Data frame sent\nI0112 22:56:08.889545 711 log.go:181] (0xc000d18c60) Data frame received for 3\nI0112 22:56:08.889593 711 log.go:181] (0xc00021e1e0) (3) Data frame handling\nI0112 22:56:08.889618 711 log.go:181] (0xc00021e1e0) (3) Data frame sent\nI0112 22:56:08.889655 711 log.go:181] (0xc000d18c60) Data frame received for 5\nI0112 22:56:08.889681 711 log.go:181] (0xc00069d400) (5) Data frame handling\nI0112 22:56:08.889708 711 log.go:181] (0xc00069d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32474/\nI0112 22:56:08.895688 711 log.go:181] (0xc000d18c60) Data frame received for 3\nI0112 22:56:08.895717 711 log.go:181] (0xc00021e1e0) (3) Data frame handling\nI0112 22:56:08.895737 711 log.go:181] (0xc00021e1e0) (3) Data frame sent\nI0112 
22:56:08.896272 711 log.go:181] (0xc000d18c60) Data frame received for 3\nI0112 22:56:08.896303 711 log.go:181] (0xc000d18c60) Data frame received for 5\nI0112 22:56:08.896343 711 log.go:181] (0xc00069d400) (5) Data frame handling\nI0112 22:56:08.896358 711 log.go:181] (0xc00069d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32474/\nI0112 22:56:08.896379 711 log.go:181] (0xc00021e1e0) (3) Data frame handling\nI0112 22:56:08.896392 711 log.go:181] (0xc00021e1e0) (3) Data frame sent\nI0112 22:56:08.902480 711 log.go:181] (0xc000d18c60) Data frame received for 3\nI0112 22:56:08.902505 711 log.go:181] (0xc00021e1e0) (3) Data frame handling\nI0112 22:56:08.902518 711 log.go:181] (0xc00021e1e0) (3) Data frame sent\nI0112 22:56:08.903370 711 log.go:181] (0xc000d18c60) Data frame received for 5\nI0112 22:56:08.903388 711 log.go:181] (0xc00069d400) (5) Data frame handling\nI0112 22:56:08.903400 711 log.go:181] (0xc00069d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32474/\nI0112 22:56:08.903437 711 log.go:181] (0xc000d18c60) Data frame received for 3\nI0112 22:56:08.903468 711 log.go:181] (0xc00021e1e0) (3) Data frame handling\nI0112 22:56:08.903498 711 log.go:181] (0xc00021e1e0) (3) Data frame sent\nI0112 22:56:08.910412 711 log.go:181] (0xc000d18c60) Data frame received for 3\nI0112 22:56:08.910434 711 log.go:181] (0xc00021e1e0) (3) Data frame handling\nI0112 22:56:08.910451 711 log.go:181] (0xc00021e1e0) (3) Data frame sent\nI0112 22:56:08.911313 711 log.go:181] (0xc000d18c60) Data frame received for 5\nI0112 22:56:08.911345 711 log.go:181] (0xc00069d400) (5) Data frame handling\nI0112 22:56:08.911358 711 log.go:181] (0xc00069d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32474/\nI0112 22:56:08.911376 711 log.go:181] (0xc000d18c60) Data frame received for 3\nI0112 22:56:08.911387 711 log.go:181] (0xc00021e1e0) (3) Data frame handling\nI0112 22:56:08.911397 711 log.go:181] (0xc00021e1e0) (3) Data frame sent\nI0112 22:56:08.916055 711 log.go:181] (0xc000d18c60) Data frame received for 3\nI0112 22:56:08.916079 711 log.go:181] (0xc00021e1e0) (3) Data frame handling\nI0112 22:56:08.916101 711 log.go:181] (0xc00021e1e0) (3) Data frame sent\nI0112 22:56:08.916938 711 log.go:181] (0xc000d18c60) Data frame received for 5\nI0112 22:56:08.916964 711 log.go:181] (0xc00069d400) (5) Data frame handling\nI0112 22:56:08.916975 711 log.go:181] (0xc00069d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32474/\nI0112 22:56:08.917046 711 log.go:181] (0xc000d18c60) Data frame received for 3\nI0112 22:56:08.917089 711 log.go:181] (0xc00021e1e0) (3) Data frame handling\nI0112 22:56:08.917118 711 log.go:181] (0xc00021e1e0) (3) Data frame sent\nI0112 22:56:08.924115 711 log.go:181] (0xc000d18c60) Data frame received for 3\nI0112 22:56:08.924128 711 log.go:181] (0xc00021e1e0) (3) Data frame handling\nI0112 22:56:08.924135 711 log.go:181] (0xc00021e1e0) (3) Data frame sent\nI0112 22:56:08.924435 711 log.go:181] (0xc000d18c60) Data frame received for 5\nI0112 22:56:08.924455 711 log.go:181] (0xc00069d400) (5) Data frame handling\nI0112 22:56:08.924473 711 log.go:181] (0xc00069d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeoutI0112 22:56:08.924589 711 log.go:181] (0xc000d18c60) Data frame received for 3\nI0112 22:56:08.924613 711 log.go:181] (0xc000d18c60) Data frame received for 5\nI0112 22:56:08.924642 711 log.go:181] (0xc00069d400) (5) Data frame 
handling\nI0112 22:56:08.924655 711 log.go:181] (0xc00069d400) (5) Data frame sent\n 2 http://172.18.0.13:32474/\nI0112 22:56:08.924676 711 log.go:181] (0xc00021e1e0) (3) Data frame handling\nI0112 22:56:08.924687 711 log.go:181] (0xc00021e1e0) (3) Data frame sent\nI0112 22:56:08.929474 711 log.go:181] (0xc000d18c60) Data frame received for 3\nI0112 22:56:08.929515 711 log.go:181] (0xc00021e1e0) (3) Data frame handling\nI0112 22:56:08.929551 711 log.go:181] (0xc00021e1e0) (3) Data frame sent\nI0112 22:56:08.930253 711 log.go:181] (0xc000d18c60) Data frame received for 3\nI0112 22:56:08.930375 711 log.go:181] (0xc00021e1e0) (3) Data frame handling\nI0112 22:56:08.930423 711 log.go:181] (0xc000d18c60) Data frame received for 5\nI0112 22:56:08.930454 711 log.go:181] (0xc00069d400) (5) Data frame handling\nI0112 22:56:08.931959 711 log.go:181] (0xc000d18c60) Data frame received for 1\nI0112 22:56:08.931987 711 log.go:181] (0xc00069cdc0) (1) Data frame handling\nI0112 22:56:08.932017 711 log.go:181] (0xc00069cdc0) (1) Data frame sent\nI0112 22:56:08.932035 711 log.go:181] (0xc000d18c60) (0xc00069cdc0) Stream removed, broadcasting: 1\nI0112 22:56:08.932101 711 log.go:181] (0xc000d18c60) Go away received\nI0112 22:56:08.932494 711 log.go:181] (0xc000d18c60) (0xc00069cdc0) Stream removed, broadcasting: 1\nI0112 22:56:08.932512 711 log.go:181] (0xc000d18c60) (0xc00021e1e0) Stream removed, broadcasting: 3\nI0112 22:56:08.932525 711 log.go:181] (0xc000d18c60) (0xc00069d400) Stream removed, broadcasting: 5\n" Jan 12 22:56:08.938: INFO: stdout: "\naffinity-nodeport-timeout-z4296\naffinity-nodeport-timeout-z4296\naffinity-nodeport-timeout-z4296\naffinity-nodeport-timeout-z4296\naffinity-nodeport-timeout-z4296\naffinity-nodeport-timeout-z4296\naffinity-nodeport-timeout-z4296\naffinity-nodeport-timeout-z4296\naffinity-nodeport-timeout-z4296\naffinity-nodeport-timeout-z4296\naffinity-nodeport-timeout-z4296\naffinity-nodeport-timeout-z4296\naffinity-nodeport-timeout-z4296\naffinity-nodeport-timeout-z4296\naffinity-nodeport-timeout-z4296\naffinity-nodeport-timeout-z4296" Jan 12 22:56:08.938: INFO: Received response from host: affinity-nodeport-timeout-z4296 Jan 12 22:56:08.938: INFO: Received response from host: affinity-nodeport-timeout-z4296 Jan 12 22:56:08.938: INFO: Received response from host: affinity-nodeport-timeout-z4296 Jan 12 22:56:08.938: INFO: Received response from host: affinity-nodeport-timeout-z4296 Jan 12 22:56:08.938: INFO: Received response from host: affinity-nodeport-timeout-z4296 Jan 12 22:56:08.938: INFO: Received response from host: affinity-nodeport-timeout-z4296 Jan 12 22:56:08.938: INFO: Received response from host: affinity-nodeport-timeout-z4296 Jan 12 22:56:08.938: INFO: Received response from host: affinity-nodeport-timeout-z4296 Jan 12 22:56:08.938: INFO: Received response from host: affinity-nodeport-timeout-z4296 Jan 12 22:56:08.938: INFO: Received response from host: affinity-nodeport-timeout-z4296 Jan 12 22:56:08.938: INFO: Received response from host: affinity-nodeport-timeout-z4296 Jan 12 22:56:08.938: INFO: Received response from host: affinity-nodeport-timeout-z4296 Jan 12 22:56:08.938: INFO: Received response from host: affinity-nodeport-timeout-z4296 Jan 12 22:56:08.938: INFO: Received response from host: affinity-nodeport-timeout-z4296 Jan 12 22:56:08.938: INFO: Received response from host: affinity-nodeport-timeout-z4296 Jan 12 22:56:08.939: INFO: Received response from host: affinity-nodeport-timeout-z4296 Jan 12 22:56:08.939: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-4736 exec execpod-affinityw75sb -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.13:32474/' Jan 12 22:56:09.149: INFO: stderr: "I0112 22:56:09.067179 729 log.go:181] (0xc00003a0b0) (0xc000819720) Create stream\nI0112 22:56:09.067363 729 log.go:181] (0xc00003a0b0) (0xc000819720) Stream added, broadcasting: 1\nI0112 22:56:09.069620 729 log.go:181] (0xc00003a0b0) Reply frame received for 1\nI0112 22:56:09.069681 729 log.go:181] (0xc00003a0b0) (0xc000bfa1e0) Create stream\nI0112 22:56:09.069691 729 log.go:181] (0xc00003a0b0) (0xc000bfa1e0) Stream added, broadcasting: 3\nI0112 22:56:09.070532 729 log.go:181] (0xc00003a0b0) Reply frame received for 3\nI0112 22:56:09.070574 729 log.go:181] (0xc00003a0b0) (0xc0008199a0) Create stream\nI0112 22:56:09.070585 729 log.go:181] (0xc00003a0b0) (0xc0008199a0) Stream added, broadcasting: 5\nI0112 22:56:09.071604 729 log.go:181] (0xc00003a0b0) Reply frame received for 5\nI0112 22:56:09.137506 729 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0112 22:56:09.137536 729 log.go:181] (0xc0008199a0) (5) Data frame handling\nI0112 22:56:09.137577 729 log.go:181] (0xc0008199a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32474/\nI0112 22:56:09.141079 729 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0112 22:56:09.141096 729 log.go:181] (0xc000bfa1e0) (3) Data frame handling\nI0112 22:56:09.141113 729 log.go:181] (0xc000bfa1e0) (3) Data frame sent\nI0112 22:56:09.142072 729 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0112 22:56:09.142112 729 log.go:181] (0xc000bfa1e0) (3) Data frame handling\nI0112 22:56:09.142135 729 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0112 22:56:09.142161 729 log.go:181] (0xc0008199a0) (5) Data frame handling\nI0112 22:56:09.143406 729 log.go:181] (0xc00003a0b0) Data frame received for 1\nI0112 22:56:09.143430 729 log.go:181] (0xc000819720) (1) Data frame handling\nI0112 22:56:09.143448 729 log.go:181] (0xc000819720) (1) Data frame sent\nI0112 22:56:09.143470 729 log.go:181] (0xc00003a0b0) (0xc000819720) Stream removed, broadcasting: 1\nI0112 22:56:09.143541 729 log.go:181] (0xc00003a0b0) Go away received\nI0112 22:56:09.143815 729 log.go:181] (0xc00003a0b0) (0xc000819720) Stream removed, broadcasting: 1\nI0112 22:56:09.143832 729 log.go:181] (0xc00003a0b0) (0xc000bfa1e0) Stream removed, broadcasting: 3\nI0112 22:56:09.143845 729 log.go:181] (0xc00003a0b0) (0xc0008199a0) Stream removed, broadcasting: 5\n" Jan 12 22:56:09.149: INFO: stdout: "affinity-nodeport-timeout-z4296" Jan 12 22:56:29.149: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-4736 exec execpod-affinityw75sb -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.13:32474/' Jan 12 22:56:29.383: INFO: stderr: "I0112 22:56:29.282869 747 log.go:181] (0xc000142370) (0xc000c12000) Create stream\nI0112 22:56:29.282924 747 log.go:181] (0xc000142370) (0xc000c12000) Stream added, broadcasting: 1\nI0112 22:56:29.284412 747 log.go:181] (0xc000142370) Reply frame received for 1\nI0112 22:56:29.284474 747 log.go:181] (0xc000142370) (0xc00071cc80) Create stream\nI0112 22:56:29.284490 747 log.go:181] (0xc000142370) (0xc00071cc80) Stream added, broadcasting: 3\nI0112 22:56:29.285342 747 log.go:181] (0xc000142370) Reply frame received for 3\nI0112 22:56:29.285373 747 log.go:181] (0xc000142370) (0xc000a04500) Create 
stream\nI0112 22:56:29.285381 747 log.go:181] (0xc000142370) (0xc000a04500) Stream added, broadcasting: 5\nI0112 22:56:29.286051 747 log.go:181] (0xc000142370) Reply frame received for 5\nI0112 22:56:29.370802 747 log.go:181] (0xc000142370) Data frame received for 5\nI0112 22:56:29.370846 747 log.go:181] (0xc000a04500) (5) Data frame handling\nI0112 22:56:29.370876 747 log.go:181] (0xc000a04500) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32474/\nI0112 22:56:29.374722 747 log.go:181] (0xc000142370) Data frame received for 3\nI0112 22:56:29.374763 747 log.go:181] (0xc00071cc80) (3) Data frame handling\nI0112 22:56:29.374802 747 log.go:181] (0xc00071cc80) (3) Data frame sent\nI0112 22:56:29.375523 747 log.go:181] (0xc000142370) Data frame received for 3\nI0112 22:56:29.375542 747 log.go:181] (0xc00071cc80) (3) Data frame handling\nI0112 22:56:29.375564 747 log.go:181] (0xc000142370) Data frame received for 5\nI0112 22:56:29.375599 747 log.go:181] (0xc000a04500) (5) Data frame handling\nI0112 22:56:29.377410 747 log.go:181] (0xc000142370) Data frame received for 1\nI0112 22:56:29.377437 747 log.go:181] (0xc000c12000) (1) Data frame handling\nI0112 22:56:29.377449 747 log.go:181] (0xc000c12000) (1) Data frame sent\nI0112 22:56:29.377463 747 log.go:181] (0xc000142370) (0xc000c12000) Stream removed, broadcasting: 1\nI0112 22:56:29.377479 747 log.go:181] (0xc000142370) Go away received\nI0112 22:56:29.377853 747 log.go:181] (0xc000142370) (0xc000c12000) Stream removed, broadcasting: 1\nI0112 22:56:29.377870 747 log.go:181] (0xc000142370) (0xc00071cc80) Stream removed, broadcasting: 3\nI0112 22:56:29.377879 747 log.go:181] (0xc000142370) (0xc000a04500) Stream removed, broadcasting: 5\n" Jan 12 22:56:29.383: INFO: stdout: "affinity-nodeport-timeout-s94xd" Jan 12 22:56:29.384: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-4736, will wait for the garbage collector to delete the pods Jan 12 22:56:29.575: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 116.469551ms Jan 12 22:56:30.176: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 600.215472ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:57:29.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4736" for this suite. 
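For reference, a minimal client-go sketch of a NodePort Service with ClientIP session affinity and a short timeout, the behaviour this spec verifies (not the suite's code; the selector, ports and the 10-second timeout are assumptions):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	timeout := int32(10) // seconds; a short timeout so affinity expiry is observable
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-nodeport-timeout-demo"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeNodePort,
			Selector: map[string]string{"app": "affinity-demo"}, // must match the backend pods' labels
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(9376),
			}},
			// ClientIP affinity pins a client to one backend until the timeout lapses.
			SessionAffinity: corev1.ServiceAffinityClientIP,
			SessionAffinityConfig: &corev1.SessionAffinityConfig{
				ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: &timeout},
			},
		},
	}
	created, err := cs.CoreV1().Services("default").Create(context.TODO(), svc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created service", created.Name, "nodePort:", created.Spec.Ports[0].NodePort)
}

This matches what the log shows: all 16 back-to-back curls hit affinity-nodeport-timeout-z4296, while a request sent roughly 20 seconds later lands on affinity-nodeport-timeout-s94xd once the affinity entry has expired.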
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:99.195 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":309,"completed":63,"skipped":1076,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:57:29.969: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name secret-test-1ad38586-1a14-42d9-a45b-e2e4ba23d812 STEP: Creating a pod to test consume secrets Jan 12 22:57:30.153: INFO: Waiting up to 5m0s for pod "pod-secrets-2eb8b431-97e1-4f4d-895e-784520c0724b" in namespace "secrets-2883" to be "Succeeded or Failed" Jan 12 22:57:30.156: INFO: Pod "pod-secrets-2eb8b431-97e1-4f4d-895e-784520c0724b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.877337ms Jan 12 22:57:32.161: INFO: Pod "pod-secrets-2eb8b431-97e1-4f4d-895e-784520c0724b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007856167s Jan 12 22:57:34.173: INFO: Pod "pod-secrets-2eb8b431-97e1-4f4d-895e-784520c0724b": Phase="Running", Reason="", readiness=true. Elapsed: 4.019897781s Jan 12 22:57:36.178: INFO: Pod "pod-secrets-2eb8b431-97e1-4f4d-895e-784520c0724b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.025104308s STEP: Saw pod success Jan 12 22:57:36.178: INFO: Pod "pod-secrets-2eb8b431-97e1-4f4d-895e-784520c0724b" satisfied condition "Succeeded or Failed" Jan 12 22:57:36.181: INFO: Trying to get logs from node leguer-worker2 pod pod-secrets-2eb8b431-97e1-4f4d-895e-784520c0724b container secret-env-test: STEP: delete the pod Jan 12 22:57:36.223: INFO: Waiting for pod pod-secrets-2eb8b431-97e1-4f4d-895e-784520c0724b to disappear Jan 12 22:57:36.227: INFO: Pod pod-secrets-2eb8b431-97e1-4f4d-895e-784520c0724b no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:57:36.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2883" for this suite. 
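For reference, a minimal client-go sketch of a secret consumed through an environment variable, as this spec does (not the suite's code; the secret name, key, image and namespace are assumptions):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "default" // illustrative; the suite used a generated "secrets-*" namespace

	sec := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-env-demo"},
		StringData: map[string]string{"data-1": "value-1"},
	}
	if _, err := cs.CoreV1().Secrets(ns).Create(context.TODO(), sec, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secret-env-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-env-test",
				Image:   "busybox", // placeholder image
				Command: []string{"sh", "-c", "env | grep SECRET_DATA"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: sec.Name},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("created pod", pod.Name)
}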
• [SLOW TEST:6.265 seconds] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:36 should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":309,"completed":64,"skipped":1093,"failed":0} S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:57:36.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 22:57:40.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9034" for this suite. 
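The kubelet logging test above simply runs a one-shot busybox command and reads it back with kubectl logs. Roughly (pod name, image tag, and message are illustrative):

kubectl run busybox-logger --image=docker.io/library/busybox:1.29 --restart=Never \
  -- /bin/sh -c 'echo hello from the busybox container'
kubectl logs busybox-logger   # the echoed line should appear in the container log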
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":309,"completed":65,"skipped":1094,"failed":0} SSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 22:57:40.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-3037 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a new StatefulSet Jan 12 22:57:40.548: INFO: Found 0 stateful pods, waiting for 3 Jan 12 22:57:50.553: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 12 22:57:50.553: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 12 22:57:50.553: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 12 22:58:00.553: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 12 22:58:00.553: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 12 22:58:00.553: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Jan 12 22:58:00.579: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Jan 12 22:58:10.689: INFO: Updating stateful set ss2 Jan 12 22:58:10.770: INFO: Waiting for Pod statefulset-3037/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Jan 12 22:58:21.264: INFO: Found 2 stateful pods, waiting for 3 Jan 12 22:58:31.270: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 12 22:58:31.270: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 12 22:58:31.270: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Jan 12 22:58:31.299: INFO: Updating stateful set ss2 Jan 12 22:58:31.345: INFO: Waiting for Pod statefulset-3037/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 12 22:58:41.371: INFO: Updating stateful set ss2 Jan 12 22:58:41.704: 
INFO: Waiting for StatefulSet statefulset-3037/ss2 to complete update Jan 12 22:58:41.704: INFO: Waiting for Pod statefulset-3037/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 12 22:58:51.713: INFO: Waiting for StatefulSet statefulset-3037/ss2 to complete update Jan 12 22:58:51.713: INFO: Waiting for Pod statefulset-3037/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 12 22:59:01.713: INFO: Waiting for StatefulSet statefulset-3037/ss2 to complete update Jan 12 22:59:01.713: INFO: Waiting for Pod statefulset-3037/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 12 22:59:11.713: INFO: Waiting for StatefulSet statefulset-3037/ss2 to complete update Jan 12 22:59:11.714: INFO: Waiting for Pod statefulset-3037/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 12 22:59:21.714: INFO: Waiting for StatefulSet statefulset-3037/ss2 to complete update Jan 12 22:59:21.714: INFO: Waiting for Pod statefulset-3037/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 12 22:59:31.714: INFO: Waiting for StatefulSet statefulset-3037/ss2 to complete update Jan 12 22:59:31.714: INFO: Waiting for Pod statefulset-3037/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 12 22:59:41.714: INFO: Waiting for StatefulSet statefulset-3037/ss2 to complete update Jan 12 22:59:41.714: INFO: Waiting for Pod statefulset-3037/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 12 22:59:51.716: INFO: Waiting for StatefulSet statefulset-3037/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jan 12 23:00:01.712: INFO: Deleting all statefulset in ns statefulset-3037 Jan 12 23:00:01.715: INFO: Scaling statefulset ss2 to 0 Jan 12 23:01:51.740: INFO: Waiting for statefulset status.replicas updated to 0 Jan 12 23:01:51.743: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:01:51.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3037" for this suite. 
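The canary and phased roll-out above hinge on the StatefulSet RollingUpdate partition: only pods with an ordinal greater than or equal to the partition move to the new revision, so lowering the partition step by step phases the update in. The test drives this through the API client; an equivalent kubectl sketch with illustrative partition values:

# change the pod template image (creates a new controller revision)
kubectl -n statefulset-3037 patch statefulset ss2 --type='json' \
  -p='[{"op":"replace","path":"/spec/template/spec/containers/0/image","value":"docker.io/library/httpd:2.4.39-alpine"}]'
# canary: with partition=2 only ss2-2 rolls to the new revision
kubectl -n statefulset-3037 patch statefulset ss2 \
  -p='{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
# phased roll-out: lower the partition until it reaches 0 and all pods are updated
kubectl -n statefulset-3037 patch statefulset ss2 \
  -p='{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":0}}}}'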
• [SLOW TEST:251.348 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":309,"completed":66,"skipped":1102,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:01:51.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir volume type on tmpfs Jan 12 23:01:51.909: INFO: Waiting up to 5m0s for pod "pod-3540f75c-bab5-448f-b5fc-310ac0983486" in namespace "emptydir-3067" to be "Succeeded or Failed" Jan 12 23:01:51.942: INFO: Pod "pod-3540f75c-bab5-448f-b5fc-310ac0983486": Phase="Pending", Reason="", readiness=false. Elapsed: 33.766519ms Jan 12 23:01:53.946: INFO: Pod "pod-3540f75c-bab5-448f-b5fc-310ac0983486": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037809255s Jan 12 23:01:55.951: INFO: Pod "pod-3540f75c-bab5-448f-b5fc-310ac0983486": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042547262s STEP: Saw pod success Jan 12 23:01:55.951: INFO: Pod "pod-3540f75c-bab5-448f-b5fc-310ac0983486" satisfied condition "Succeeded or Failed" Jan 12 23:01:55.954: INFO: Trying to get logs from node leguer-worker pod pod-3540f75c-bab5-448f-b5fc-310ac0983486 container test-container: STEP: delete the pod Jan 12 23:01:56.004: INFO: Waiting for pod pod-3540f75c-bab5-448f-b5fc-310ac0983486 to disappear Jan 12 23:01:56.007: INFO: Pod pod-3540f75c-bab5-448f-b5fc-310ac0983486 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:01:56.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3067" for this suite. 
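The emptydir-on-tmpfs check above creates a pod whose volume uses medium: Memory and inspects the mount and its mode. A minimal sketch; the conformance test uses the agnhost mounttest image, busybox here is only for illustration:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -ld /test-volume && mount | grep /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # backs the volume with tmpfs instead of node disk
EOF
kubectl logs pod-emptydir-tmpfs   # shows the directory mode and the tmpfs mount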
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":67,"skipped":1121,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:01:56.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:01:56.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8056" for this suite. STEP: Destroying namespace "nspatchtest-3534a562-fe22-431d-b983-30d3ee8f1e92-5110" for this suite. •{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":309,"completed":68,"skipped":1162,"failed":0} S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:01:56.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-5681 [It] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a new StatefulSet Jan 12 23:01:56.557: INFO: Found 0 stateful pods, waiting for 3 Jan 12 23:02:06.582: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 12 23:02:06.582: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 12 23:02:06.582: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 12 23:02:16.563: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 12 23:02:16.563: INFO: Waiting for pod ss2-1 to 
enter Running - Ready=true, currently Running - Ready=true Jan 12 23:02:16.563: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jan 12 23:02:16.574: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-5681 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 12 23:02:21.197: INFO: stderr: "I0112 23:02:21.071018 765 log.go:181] (0xc0006b0000) (0xc000ae5d60) Create stream\nI0112 23:02:21.071089 765 log.go:181] (0xc0006b0000) (0xc000ae5d60) Stream added, broadcasting: 1\nI0112 23:02:21.074998 765 log.go:181] (0xc0006b0000) Reply frame received for 1\nI0112 23:02:21.075046 765 log.go:181] (0xc0006b0000) (0xc0008ec000) Create stream\nI0112 23:02:21.075068 765 log.go:181] (0xc0006b0000) (0xc0008ec000) Stream added, broadcasting: 3\nI0112 23:02:21.076207 765 log.go:181] (0xc0006b0000) Reply frame received for 3\nI0112 23:02:21.076241 765 log.go:181] (0xc0006b0000) (0xc000972320) Create stream\nI0112 23:02:21.076248 765 log.go:181] (0xc0006b0000) (0xc000972320) Stream added, broadcasting: 5\nI0112 23:02:21.077376 765 log.go:181] (0xc0006b0000) Reply frame received for 5\nI0112 23:02:21.158664 765 log.go:181] (0xc0006b0000) Data frame received for 5\nI0112 23:02:21.158686 765 log.go:181] (0xc000972320) (5) Data frame handling\nI0112 23:02:21.158698 765 log.go:181] (0xc000972320) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0112 23:02:21.188502 765 log.go:181] (0xc0006b0000) Data frame received for 3\nI0112 23:02:21.188543 765 log.go:181] (0xc0008ec000) (3) Data frame handling\nI0112 23:02:21.188579 765 log.go:181] (0xc0008ec000) (3) Data frame sent\nI0112 23:02:21.188658 765 log.go:181] (0xc0006b0000) Data frame received for 5\nI0112 23:02:21.188682 765 log.go:181] (0xc000972320) (5) Data frame handling\nI0112 23:02:21.188732 765 log.go:181] (0xc0006b0000) Data frame received for 3\nI0112 23:02:21.188786 765 log.go:181] (0xc0008ec000) (3) Data frame handling\nI0112 23:02:21.190895 765 log.go:181] (0xc0006b0000) Data frame received for 1\nI0112 23:02:21.190927 765 log.go:181] (0xc000ae5d60) (1) Data frame handling\nI0112 23:02:21.190946 765 log.go:181] (0xc000ae5d60) (1) Data frame sent\nI0112 23:02:21.190965 765 log.go:181] (0xc0006b0000) (0xc000ae5d60) Stream removed, broadcasting: 1\nI0112 23:02:21.190986 765 log.go:181] (0xc0006b0000) Go away received\nI0112 23:02:21.191417 765 log.go:181] (0xc0006b0000) (0xc000ae5d60) Stream removed, broadcasting: 1\nI0112 23:02:21.191449 765 log.go:181] (0xc0006b0000) (0xc0008ec000) Stream removed, broadcasting: 3\nI0112 23:02:21.191465 765 log.go:181] (0xc0006b0000) (0xc000972320) Stream removed, broadcasting: 5\n" Jan 12 23:02:21.197: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 12 23:02:21.197: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Jan 12 23:02:31.234: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jan 12 23:02:41.300: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-5681 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 12 
23:02:41.538: INFO: stderr: "I0112 23:02:41.440227 783 log.go:181] (0xc0001f9130) (0xc000b0c3c0) Create stream\nI0112 23:02:41.440279 783 log.go:181] (0xc0001f9130) (0xc000b0c3c0) Stream added, broadcasting: 1\nI0112 23:02:41.444447 783 log.go:181] (0xc0001f9130) Reply frame received for 1\nI0112 23:02:41.444494 783 log.go:181] (0xc0001f9130) (0xc0004e8000) Create stream\nI0112 23:02:41.444529 783 log.go:181] (0xc0001f9130) (0xc0004e8000) Stream added, broadcasting: 3\nI0112 23:02:41.445509 783 log.go:181] (0xc0001f9130) Reply frame received for 3\nI0112 23:02:41.445540 783 log.go:181] (0xc0001f9130) (0xc000b0c460) Create stream\nI0112 23:02:41.445549 783 log.go:181] (0xc0001f9130) (0xc000b0c460) Stream added, broadcasting: 5\nI0112 23:02:41.446325 783 log.go:181] (0xc0001f9130) Reply frame received for 5\nI0112 23:02:41.528777 783 log.go:181] (0xc0001f9130) Data frame received for 5\nI0112 23:02:41.529001 783 log.go:181] (0xc000b0c460) (5) Data frame handling\nI0112 23:02:41.529107 783 log.go:181] (0xc000b0c460) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0112 23:02:41.529269 783 log.go:181] (0xc0001f9130) Data frame received for 5\nI0112 23:02:41.529419 783 log.go:181] (0xc000b0c460) (5) Data frame handling\nI0112 23:02:41.529488 783 log.go:181] (0xc0001f9130) Data frame received for 3\nI0112 23:02:41.529534 783 log.go:181] (0xc0004e8000) (3) Data frame handling\nI0112 23:02:41.529551 783 log.go:181] (0xc0004e8000) (3) Data frame sent\nI0112 23:02:41.529572 783 log.go:181] (0xc0001f9130) Data frame received for 3\nI0112 23:02:41.529592 783 log.go:181] (0xc0004e8000) (3) Data frame handling\nI0112 23:02:41.533051 783 log.go:181] (0xc0001f9130) Data frame received for 1\nI0112 23:02:41.533091 783 log.go:181] (0xc000b0c3c0) (1) Data frame handling\nI0112 23:02:41.533126 783 log.go:181] (0xc000b0c3c0) (1) Data frame sent\nI0112 23:02:41.533145 783 log.go:181] (0xc0001f9130) (0xc000b0c3c0) Stream removed, broadcasting: 1\nI0112 23:02:41.533167 783 log.go:181] (0xc0001f9130) Go away received\nI0112 23:02:41.533611 783 log.go:181] (0xc0001f9130) (0xc000b0c3c0) Stream removed, broadcasting: 1\nI0112 23:02:41.533642 783 log.go:181] (0xc0001f9130) (0xc0004e8000) Stream removed, broadcasting: 3\nI0112 23:02:41.533654 783 log.go:181] (0xc0001f9130) (0xc000b0c460) Stream removed, broadcasting: 5\n" Jan 12 23:02:41.538: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 12 23:02:41.538: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 12 23:02:51.558: INFO: Waiting for StatefulSet statefulset-5681/ss2 to complete update Jan 12 23:02:51.558: INFO: Waiting for Pod statefulset-5681/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 12 23:02:51.558: INFO: Waiting for Pod statefulset-5681/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 12 23:02:51.558: INFO: Waiting for Pod statefulset-5681/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 12 23:03:01.568: INFO: Waiting for StatefulSet statefulset-5681/ss2 to complete update Jan 12 23:03:01.568: INFO: Waiting for Pod statefulset-5681/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 12 23:03:01.568: INFO: Waiting for Pod statefulset-5681/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 12 23:03:01.568: INFO: Waiting for Pod statefulset-5681/ss2-2 to have revision ss2-84f9d6bf57 update 
revision ss2-65c7964b94 Jan 12 23:03:11.566: INFO: Waiting for StatefulSet statefulset-5681/ss2 to complete update Jan 12 23:03:11.566: INFO: Waiting for Pod statefulset-5681/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 12 23:03:11.566: INFO: Waiting for Pod statefulset-5681/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 12 23:03:11.566: INFO: Waiting for Pod statefulset-5681/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 12 23:03:21.566: INFO: Waiting for StatefulSet statefulset-5681/ss2 to complete update Jan 12 23:03:21.566: INFO: Waiting for Pod statefulset-5681/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 12 23:03:21.566: INFO: Waiting for Pod statefulset-5681/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 12 23:03:21.566: INFO: Waiting for Pod statefulset-5681/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 12 23:03:31.571: INFO: Waiting for StatefulSet statefulset-5681/ss2 to complete update Jan 12 23:03:31.571: INFO: Waiting for Pod statefulset-5681/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 12 23:03:31.571: INFO: Waiting for Pod statefulset-5681/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 12 23:03:31.571: INFO: Waiting for Pod statefulset-5681/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 12 23:03:41.576: INFO: Waiting for StatefulSet statefulset-5681/ss2 to complete update Jan 12 23:03:41.576: INFO: Waiting for Pod statefulset-5681/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 12 23:03:41.576: INFO: Waiting for Pod statefulset-5681/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 12 23:03:51.567: INFO: Waiting for StatefulSet statefulset-5681/ss2 to complete update Jan 12 23:03:51.567: INFO: Waiting for Pod statefulset-5681/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 12 23:04:01.566: INFO: Waiting for StatefulSet statefulset-5681/ss2 to complete update Jan 12 23:04:01.566: INFO: Waiting for Pod statefulset-5681/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 12 23:04:11.568: INFO: Waiting for StatefulSet statefulset-5681/ss2 to complete update Jan 12 23:04:11.568: INFO: Waiting for Pod statefulset-5681/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 12 23:04:21.566: INFO: Waiting for StatefulSet statefulset-5681/ss2 to complete update Jan 12 23:04:21.566: INFO: Waiting for Pod statefulset-5681/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 12 23:04:31.565: INFO: Waiting for StatefulSet statefulset-5681/ss2 to complete update Jan 12 23:04:31.565: INFO: Waiting for Pod statefulset-5681/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 12 23:04:41.567: INFO: Waiting for StatefulSet statefulset-5681/ss2 to complete update STEP: Rolling back to a previous revision Jan 12 23:04:51.566: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-5681 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 12 23:04:51.843: INFO: stderr: "I0112 23:04:51.705634 801 log.go:181] (0xc0001c8000) (0xc000aa41e0) Create stream\nI0112 23:04:51.705710 801 log.go:181] (0xc0001c8000) (0xc000aa41e0) Stream added, broadcasting: 1\nI0112 23:04:51.709345 801 log.go:181] (0xc0001c8000) 
Reply frame received for 1\nI0112 23:04:51.709390 801 log.go:181] (0xc0001c8000) (0xc00019c960) Create stream\nI0112 23:04:51.709402 801 log.go:181] (0xc0001c8000) (0xc00019c960) Stream added, broadcasting: 3\nI0112 23:04:51.710367 801 log.go:181] (0xc0001c8000) Reply frame received for 3\nI0112 23:04:51.710424 801 log.go:181] (0xc0001c8000) (0xc00019d400) Create stream\nI0112 23:04:51.710438 801 log.go:181] (0xc0001c8000) (0xc00019d400) Stream added, broadcasting: 5\nI0112 23:04:51.711430 801 log.go:181] (0xc0001c8000) Reply frame received for 5\nI0112 23:04:51.805650 801 log.go:181] (0xc0001c8000) Data frame received for 5\nI0112 23:04:51.805684 801 log.go:181] (0xc00019d400) (5) Data frame handling\nI0112 23:04:51.805702 801 log.go:181] (0xc00019d400) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0112 23:04:51.834563 801 log.go:181] (0xc0001c8000) Data frame received for 5\nI0112 23:04:51.834618 801 log.go:181] (0xc00019d400) (5) Data frame handling\nI0112 23:04:51.834651 801 log.go:181] (0xc0001c8000) Data frame received for 3\nI0112 23:04:51.834671 801 log.go:181] (0xc00019c960) (3) Data frame handling\nI0112 23:04:51.834695 801 log.go:181] (0xc00019c960) (3) Data frame sent\nI0112 23:04:51.834966 801 log.go:181] (0xc0001c8000) Data frame received for 3\nI0112 23:04:51.834995 801 log.go:181] (0xc00019c960) (3) Data frame handling\nI0112 23:04:51.836542 801 log.go:181] (0xc0001c8000) Data frame received for 1\nI0112 23:04:51.836576 801 log.go:181] (0xc000aa41e0) (1) Data frame handling\nI0112 23:04:51.836607 801 log.go:181] (0xc000aa41e0) (1) Data frame sent\nI0112 23:04:51.836623 801 log.go:181] (0xc0001c8000) (0xc000aa41e0) Stream removed, broadcasting: 1\nI0112 23:04:51.836638 801 log.go:181] (0xc0001c8000) Go away received\nI0112 23:04:51.837344 801 log.go:181] (0xc0001c8000) (0xc000aa41e0) Stream removed, broadcasting: 1\nI0112 23:04:51.837380 801 log.go:181] (0xc0001c8000) (0xc00019c960) Stream removed, broadcasting: 3\nI0112 23:04:51.837394 801 log.go:181] (0xc0001c8000) (0xc00019d400) Stream removed, broadcasting: 5\n" Jan 12 23:04:51.843: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 12 23:04:51.843: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 12 23:05:01.876: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jan 12 23:05:11.981: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-5681 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 12 23:05:12.164: INFO: stderr: "I0112 23:05:12.103389 819 log.go:181] (0xc00056b080) (0xc000562500) Create stream\nI0112 23:05:12.103434 819 log.go:181] (0xc00056b080) (0xc000562500) Stream added, broadcasting: 1\nI0112 23:05:12.104643 819 log.go:181] (0xc00056b080) Reply frame received for 1\nI0112 23:05:12.104670 819 log.go:181] (0xc00056b080) (0xc0005625a0) Create stream\nI0112 23:05:12.104680 819 log.go:181] (0xc00056b080) (0xc0005625a0) Stream added, broadcasting: 3\nI0112 23:05:12.105249 819 log.go:181] (0xc00056b080) Reply frame received for 3\nI0112 23:05:12.105268 819 log.go:181] (0xc00056b080) (0xc0001585a0) Create stream\nI0112 23:05:12.105274 819 log.go:181] (0xc00056b080) (0xc0001585a0) Stream added, broadcasting: 5\nI0112 23:05:12.105757 819 log.go:181] (0xc00056b080) Reply frame received for 5\nI0112 23:05:12.157729 819 
log.go:181] (0xc00056b080) Data frame received for 5\nI0112 23:05:12.157789 819 log.go:181] (0xc0001585a0) (5) Data frame handling\nI0112 23:05:12.157810 819 log.go:181] (0xc0001585a0) (5) Data frame sent\nI0112 23:05:12.157822 819 log.go:181] (0xc00056b080) Data frame received for 5\nI0112 23:05:12.157835 819 log.go:181] (0xc0001585a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0112 23:05:12.157901 819 log.go:181] (0xc00056b080) Data frame received for 3\nI0112 23:05:12.157930 819 log.go:181] (0xc0005625a0) (3) Data frame handling\nI0112 23:05:12.157952 819 log.go:181] (0xc0005625a0) (3) Data frame sent\nI0112 23:05:12.157969 819 log.go:181] (0xc00056b080) Data frame received for 3\nI0112 23:05:12.157994 819 log.go:181] (0xc0005625a0) (3) Data frame handling\nI0112 23:05:12.159144 819 log.go:181] (0xc00056b080) Data frame received for 1\nI0112 23:05:12.159166 819 log.go:181] (0xc000562500) (1) Data frame handling\nI0112 23:05:12.159172 819 log.go:181] (0xc000562500) (1) Data frame sent\nI0112 23:05:12.159242 819 log.go:181] (0xc00056b080) (0xc000562500) Stream removed, broadcasting: 1\nI0112 23:05:12.159284 819 log.go:181] (0xc00056b080) Go away received\nI0112 23:05:12.159532 819 log.go:181] (0xc00056b080) (0xc000562500) Stream removed, broadcasting: 1\nI0112 23:05:12.159544 819 log.go:181] (0xc00056b080) (0xc0005625a0) Stream removed, broadcasting: 3\nI0112 23:05:12.159553 819 log.go:181] (0xc00056b080) (0xc0001585a0) Stream removed, broadcasting: 5\n" Jan 12 23:05:12.164: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 12 23:05:12.164: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 12 23:05:22.190: INFO: Waiting for StatefulSet statefulset-5681/ss2 to complete update Jan 12 23:05:22.190: INFO: Waiting for Pod statefulset-5681/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 12 23:05:22.190: INFO: Waiting for Pod statefulset-5681/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 12 23:05:22.190: INFO: Waiting for Pod statefulset-5681/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 12 23:05:32.199: INFO: Waiting for StatefulSet statefulset-5681/ss2 to complete update Jan 12 23:05:32.199: INFO: Waiting for Pod statefulset-5681/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 12 23:05:32.199: INFO: Waiting for Pod statefulset-5681/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 12 23:05:32.199: INFO: Waiting for Pod statefulset-5681/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 12 23:05:42.198: INFO: Waiting for StatefulSet statefulset-5681/ss2 to complete update Jan 12 23:05:42.199: INFO: Waiting for Pod statefulset-5681/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 12 23:05:42.199: INFO: Waiting for Pod statefulset-5681/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 12 23:05:42.199: INFO: Waiting for Pod statefulset-5681/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 12 23:05:52.197: INFO: Waiting for StatefulSet statefulset-5681/ss2 to complete update Jan 12 23:05:52.197: INFO: Waiting for Pod statefulset-5681/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 12 23:05:52.197: INFO: Waiting for Pod statefulset-5681/ss2-1 to have revision ss2-65c7964b94 update revision 
ss2-84f9d6bf57 Jan 12 23:06:02.198: INFO: Waiting for StatefulSet statefulset-5681/ss2 to complete update Jan 12 23:06:02.198: INFO: Waiting for Pod statefulset-5681/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 12 23:06:12.201: INFO: Waiting for StatefulSet statefulset-5681/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jan 12 23:06:22.199: INFO: Deleting all statefulset in ns statefulset-5681 Jan 12 23:06:22.202: INFO: Scaling statefulset ss2 to 0 Jan 12 23:07:12.239: INFO: Waiting for statefulset status.replicas updated to 0 Jan 12 23:07:12.242: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:07:12.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5681" for this suite. • [SLOW TEST:315.822 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":309,"completed":69,"skipped":1163,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:07:12.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:07:29.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6254" for this suite. 
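The quota lifecycle above is: create a ResourceQuota with a hard limit on secrets, watch status.used.secrets rise when a Secret is created, and watch it fall again when the Secret is deleted. Sketched with kubectl; the names and the limit are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-secrets
spec:
  hard:
    secrets: "3"
EOF
kubectl create secret generic quota-probe --from-literal=key=value
kubectl get resourcequota quota-secrets -o jsonpath='{.status.used.secrets}'   # now counts the new secret
kubectl delete secret quota-probe                                              # usage is released again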
• [SLOW TEST:17.146 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":309,"completed":70,"skipped":1174,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:07:29.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1314 STEP: creating the pod Jan 12 23:07:29.516: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8508 create -f -' Jan 12 23:07:29.918: INFO: stderr: "" Jan 12 23:07:29.918: INFO: stdout: "pod/pause created\n" Jan 12 23:07:29.918: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jan 12 23:07:29.918: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-8508" to be "running and ready" Jan 12 23:07:29.923: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.636177ms Jan 12 23:07:31.928: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009463041s Jan 12 23:07:33.933: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.014595948s Jan 12 23:07:33.933: INFO: Pod "pause" satisfied condition "running and ready" Jan 12 23:07:33.933: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: adding the label testing-label with value testing-label-value to a pod Jan 12 23:07:33.933: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8508 label pods pause testing-label=testing-label-value' Jan 12 23:07:34.051: INFO: stderr: "" Jan 12 23:07:34.051: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jan 12 23:07:34.051: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8508 get pod pause -L testing-label' Jan 12 23:07:34.204: INFO: stderr: "" Jan 12 23:07:34.204: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod Jan 12 23:07:34.204: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8508 label pods pause testing-label-' Jan 12 23:07:34.319: INFO: stderr: "" Jan 12 23:07:34.319: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jan 12 23:07:34.319: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8508 get pod pause -L testing-label' Jan 12 23:07:34.421: INFO: stderr: "" Jan 12 23:07:34.421: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1320 STEP: using delete to clean up resources Jan 12 23:07:34.421: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8508 delete --grace-period=0 --force -f -' Jan 12 23:07:34.596: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 12 23:07:34.596: INFO: stdout: "pod \"pause\" force deleted\n" Jan 12 23:07:34.596: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8508 get rc,svc -l name=pause --no-headers' Jan 12 23:07:34.706: INFO: stderr: "No resources found in kubectl-8508 namespace.\n" Jan 12 23:07:34.706: INFO: stdout: "" Jan 12 23:07:34.706: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8508 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 12 23:07:34.875: INFO: stderr: "" Jan 12 23:07:34.875: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:07:34.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8508" for this suite. 
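Stripped of the --server/--kubeconfig plumbing, the label round trip above is just:

kubectl -n kubectl-8508 label pods pause testing-label=testing-label-value
kubectl -n kubectl-8508 get pod pause -L testing-label    # TESTING-LABEL column shows testing-label-value
kubectl -n kubectl-8508 label pods pause testing-label-   # trailing dash removes the label
kubectl -n kubectl-8508 get pod pause -L testing-label    # column is empty again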
• [SLOW TEST:5.483 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1312 should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":309,"completed":71,"skipped":1178,"failed":0} SSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:07:34.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating replication controller my-hostname-basic-691a8696-45a4-4f63-bf98-9dddbb7f6479 Jan 12 23:07:35.237: INFO: Pod name my-hostname-basic-691a8696-45a4-4f63-bf98-9dddbb7f6479: Found 0 pods out of 1 Jan 12 23:07:40.258: INFO: Pod name my-hostname-basic-691a8696-45a4-4f63-bf98-9dddbb7f6479: Found 1 pods out of 1 Jan 12 23:07:40.258: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-691a8696-45a4-4f63-bf98-9dddbb7f6479" are running Jan 12 23:07:40.260: INFO: Pod "my-hostname-basic-691a8696-45a4-4f63-bf98-9dddbb7f6479-knbj8" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-12 23:07:35 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-12 23:07:38 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-12 23:07:38 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-12 23:07:35 +0000 UTC Reason: Message:}]) Jan 12 23:07:40.261: INFO: Trying to dial the pod Jan 12 23:07:45.310: INFO: Controller my-hostname-basic-691a8696-45a4-4f63-bf98-9dddbb7f6479: Got expected result from replica 1 [my-hostname-basic-691a8696-45a4-4f63-bf98-9dddbb7f6479-knbj8]: "my-hostname-basic-691a8696-45a4-4f63-bf98-9dddbb7f6479-knbj8", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:07:45.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9291" for this suite. 
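The ReplicationController test above boils down to one replica of a serve-hostname container that answers HTTP with its own pod name, which is what the "Got expected result from replica 1" line verifies. A rough manifest sketch; the image, port, and labels are assumptions rather than values from this run:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: k8s.gcr.io/e2e-test-images/agnhost:2.21   # assumed image; any serve-hostname server works
        args: ["serve-hostname"]
        ports:
        - containerPort: 9376
EOF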
• [SLOW TEST:10.422 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":309,"completed":72,"skipped":1182,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:07:45.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: set up a multi version CRD Jan 12 23:07:45.426: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:08:03.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7440" for this suite. 
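The "mark a version not served" step above corresponds to flipping served: false on one CRD version and checking that its definition drops out of the published OpenAPI document while the served version stays. A hypothetical two-version CRD sketch (group, kind, and names are made up for illustration):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: multiversions.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: multiversions
    singular: multiversion
    kind: MultiVersion
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
  - name: v2
    served: false    # unserved: this version should no longer appear in the published spec
    storage: false
    schema:
      openAPIV3Schema:
        type: object
EOF
kubectl get --raw /openapi/v2 > spec.json   # inspect: only the served version's definitions remain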
• [SLOW TEST:18.105 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":309,"completed":73,"skipped":1187,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:08:03.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0644 on tmpfs Jan 12 23:08:03.543: INFO: Waiting up to 5m0s for pod "pod-a224c756-274f-4139-bf99-1ff94e8d6eb8" in namespace "emptydir-1588" to be "Succeeded or Failed" Jan 12 23:08:03.547: INFO: Pod "pod-a224c756-274f-4139-bf99-1ff94e8d6eb8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.553099ms Jan 12 23:08:05.581: INFO: Pod "pod-a224c756-274f-4139-bf99-1ff94e8d6eb8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037912447s Jan 12 23:08:07.598: INFO: Pod "pod-a224c756-274f-4139-bf99-1ff94e8d6eb8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055066671s STEP: Saw pod success Jan 12 23:08:07.598: INFO: Pod "pod-a224c756-274f-4139-bf99-1ff94e8d6eb8" satisfied condition "Succeeded or Failed" Jan 12 23:08:07.601: INFO: Trying to get logs from node leguer-worker2 pod pod-a224c756-274f-4139-bf99-1ff94e8d6eb8 container test-container: STEP: delete the pod Jan 12 23:08:07.642: INFO: Waiting for pod pod-a224c756-274f-4139-bf99-1ff94e8d6eb8 to disappear Jan 12 23:08:07.646: INFO: Pod pod-a224c756-274f-4139-bf99-1ff94e8d6eb8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:08:07.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1588" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":74,"skipped":1194,"failed":0} SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:08:07.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name projected-configmap-test-volume-map-c71d95e3-0203-42ab-a180-f6d44ffb987f STEP: Creating a pod to test consume configMaps Jan 12 23:08:07.732: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-faf70a16-1c40-46f5-bb4f-43d355e76820" in namespace "projected-2539" to be "Succeeded or Failed" Jan 12 23:08:07.781: INFO: Pod "pod-projected-configmaps-faf70a16-1c40-46f5-bb4f-43d355e76820": Phase="Pending", Reason="", readiness=false. Elapsed: 48.451444ms Jan 12 23:08:09.785: INFO: Pod "pod-projected-configmaps-faf70a16-1c40-46f5-bb4f-43d355e76820": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052736729s Jan 12 23:08:11.789: INFO: Pod "pod-projected-configmaps-faf70a16-1c40-46f5-bb4f-43d355e76820": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056795884s STEP: Saw pod success Jan 12 23:08:11.789: INFO: Pod "pod-projected-configmaps-faf70a16-1c40-46f5-bb4f-43d355e76820" satisfied condition "Succeeded or Failed" Jan 12 23:08:11.792: INFO: Trying to get logs from node leguer-worker pod pod-projected-configmaps-faf70a16-1c40-46f5-bb4f-43d355e76820 container agnhost-container: STEP: delete the pod Jan 12 23:08:11.871: INFO: Waiting for pod pod-projected-configmaps-faf70a16-1c40-46f5-bb4f-43d355e76820 to disappear Jan 12 23:08:11.942: INFO: Pod pod-projected-configmaps-faf70a16-1c40-46f5-bb4f-43d355e76820 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:08:11.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2539" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":309,"completed":75,"skipped":1197,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:08:11.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1520 [It] should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 12 23:08:12.024: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7345 run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine' Jan 12 23:08:12.146: INFO: stderr: "" Jan 12 23:08:12.146: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1524 Jan 12 23:08:12.216: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7345 delete pods e2e-test-httpd-pod' Jan 12 23:08:20.119: INFO: stderr: "" Jan 12 23:08:20.119: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:08:20.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7345" for this suite. 
• [SLOW TEST:8.177 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1517 should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":309,"completed":76,"skipped":1221,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:08:20.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 12 23:08:20.211: INFO: Creating ReplicaSet my-hostname-basic-15e68e63-614d-4652-8f1e-e4750ece0652 Jan 12 23:08:20.243: INFO: Pod name my-hostname-basic-15e68e63-614d-4652-8f1e-e4750ece0652: Found 0 pods out of 1 Jan 12 23:08:25.247: INFO: Pod name my-hostname-basic-15e68e63-614d-4652-8f1e-e4750ece0652: Found 1 pods out of 1 Jan 12 23:08:25.247: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-15e68e63-614d-4652-8f1e-e4750ece0652" is running Jan 12 23:08:25.249: INFO: Pod "my-hostname-basic-15e68e63-614d-4652-8f1e-e4750ece0652-xrp94" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-12 23:08:20 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-12 23:08:23 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-12 23:08:23 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-12 23:08:20 +0000 UTC Reason: Message:}]) Jan 12 23:08:25.250: INFO: Trying to dial the pod Jan 12 23:08:30.263: INFO: Controller my-hostname-basic-15e68e63-614d-4652-8f1e-e4750ece0652: Got expected result from replica 1 [my-hostname-basic-15e68e63-614d-4652-8f1e-e4750ece0652-xrp94]: "my-hostname-basic-15e68e63-614d-4652-8f1e-e4750ece0652-xrp94", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:08:30.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-8384" for this suite. 
• [SLOW TEST:10.140 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":309,"completed":77,"skipped":1233,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:08:30.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service in namespace services-8310 STEP: creating service affinity-nodeport in namespace services-8310 STEP: creating replication controller affinity-nodeport in namespace services-8310 I0112 23:08:30.428131 7 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-8310, replica count: 3 I0112 23:08:33.478546 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0112 23:08:36.478829 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0112 23:08:39.479064 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 12 23:08:39.493: INFO: Creating new exec pod Jan 12 23:08:46.576: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-8310 exec execpod-affinity8jtnj -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' Jan 12 23:08:46.863: INFO: stderr: "I0112 23:08:46.736484 1016 log.go:181] (0xc00003a420) (0xc000e20000) Create stream\nI0112 23:08:46.736594 1016 log.go:181] (0xc00003a420) (0xc000e20000) Stream added, broadcasting: 1\nI0112 23:08:46.739259 1016 log.go:181] (0xc00003a420) Reply frame received for 1\nI0112 23:08:46.739339 1016 log.go:181] (0xc00003a420) (0xc000304aa0) Create stream\nI0112 23:08:46.739377 1016 log.go:181] (0xc00003a420) (0xc000304aa0) Stream added, broadcasting: 3\nI0112 23:08:46.740357 1016 log.go:181] (0xc00003a420) Reply frame received for 3\nI0112 23:08:46.740387 1016 log.go:181] (0xc00003a420) (0xc000c961e0) Create stream\nI0112 23:08:46.740402 1016 log.go:181] (0xc00003a420) (0xc000c961e0) Stream added, broadcasting: 5\nI0112 23:08:46.741558 1016 log.go:181] (0xc00003a420) Reply frame received for 5\nI0112 23:08:46.853706 1016 
log.go:181] (0xc00003a420) Data frame received for 5\nI0112 23:08:46.853731 1016 log.go:181] (0xc000c961e0) (5) Data frame handling\nI0112 23:08:46.853739 1016 log.go:181] (0xc000c961e0) (5) Data frame sent\nI0112 23:08:46.853746 1016 log.go:181] (0xc00003a420) Data frame received for 5\nI0112 23:08:46.853752 1016 log.go:181] (0xc000c961e0) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI0112 23:08:46.853767 1016 log.go:181] (0xc000c961e0) (5) Data frame sent\nI0112 23:08:46.854135 1016 log.go:181] (0xc00003a420) Data frame received for 3\nI0112 23:08:46.854165 1016 log.go:181] (0xc000304aa0) (3) Data frame handling\nI0112 23:08:46.854298 1016 log.go:181] (0xc00003a420) Data frame received for 5\nI0112 23:08:46.854309 1016 log.go:181] (0xc000c961e0) (5) Data frame handling\nI0112 23:08:46.856216 1016 log.go:181] (0xc00003a420) Data frame received for 1\nI0112 23:08:46.856238 1016 log.go:181] (0xc000e20000) (1) Data frame handling\nI0112 23:08:46.856260 1016 log.go:181] (0xc000e20000) (1) Data frame sent\nI0112 23:08:46.856275 1016 log.go:181] (0xc00003a420) (0xc000e20000) Stream removed, broadcasting: 1\nI0112 23:08:46.856436 1016 log.go:181] (0xc00003a420) Go away received\nI0112 23:08:46.856802 1016 log.go:181] (0xc00003a420) (0xc000e20000) Stream removed, broadcasting: 1\nI0112 23:08:46.856831 1016 log.go:181] (0xc00003a420) (0xc000304aa0) Stream removed, broadcasting: 3\nI0112 23:08:46.856966 1016 log.go:181] (0xc00003a420) (0xc000c961e0) Stream removed, broadcasting: 5\n" Jan 12 23:08:46.863: INFO: stdout: "" Jan 12 23:08:46.864: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-8310 exec execpod-affinity8jtnj -- /bin/sh -x -c nc -zv -t -w 2 10.96.136.27 80' Jan 12 23:08:47.060: INFO: stderr: "I0112 23:08:46.992354 1034 log.go:181] (0xc000017080) (0xc000816460) Create stream\nI0112 23:08:46.992403 1034 log.go:181] (0xc000017080) (0xc000816460) Stream added, broadcasting: 1\nI0112 23:08:46.994619 1034 log.go:181] (0xc000017080) Reply frame received for 1\nI0112 23:08:46.994645 1034 log.go:181] (0xc000017080) (0xc00073a000) Create stream\nI0112 23:08:46.994653 1034 log.go:181] (0xc000017080) (0xc00073a000) Stream added, broadcasting: 3\nI0112 23:08:46.995891 1034 log.go:181] (0xc000017080) Reply frame received for 3\nI0112 23:08:46.995935 1034 log.go:181] (0xc000017080) (0xc0007b0320) Create stream\nI0112 23:08:46.995947 1034 log.go:181] (0xc000017080) (0xc0007b0320) Stream added, broadcasting: 5\nI0112 23:08:46.997292 1034 log.go:181] (0xc000017080) Reply frame received for 5\nI0112 23:08:47.050652 1034 log.go:181] (0xc000017080) Data frame received for 5\nI0112 23:08:47.050670 1034 log.go:181] (0xc0007b0320) (5) Data frame handling\nI0112 23:08:47.050686 1034 log.go:181] (0xc0007b0320) (5) Data frame sent\nI0112 23:08:47.050691 1034 log.go:181] (0xc000017080) Data frame received for 5\nI0112 23:08:47.050695 1034 log.go:181] (0xc0007b0320) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.136.27 80\nConnection to 10.96.136.27 80 port [tcp/http] succeeded!\nI0112 23:08:47.050929 1034 log.go:181] (0xc000017080) Data frame received for 3\nI0112 23:08:47.050954 1034 log.go:181] (0xc00073a000) (3) Data frame handling\nI0112 23:08:47.052761 1034 log.go:181] (0xc000017080) Data frame received for 1\nI0112 23:08:47.052993 1034 log.go:181] (0xc000816460) (1) Data frame handling\nI0112 23:08:47.053098 1034 log.go:181] (0xc000816460) (1) Data 
frame sent\nI0112 23:08:47.053134 1034 log.go:181] (0xc000017080) (0xc000816460) Stream removed, broadcasting: 1\nI0112 23:08:47.053168 1034 log.go:181] (0xc000017080) Go away received\nI0112 23:08:47.053794 1034 log.go:181] (0xc000017080) (0xc000816460) Stream removed, broadcasting: 1\nI0112 23:08:47.053822 1034 log.go:181] (0xc000017080) (0xc00073a000) Stream removed, broadcasting: 3\nI0112 23:08:47.053835 1034 log.go:181] (0xc000017080) (0xc0007b0320) Stream removed, broadcasting: 5\n" Jan 12 23:08:47.060: INFO: stdout: "" Jan 12 23:08:47.060: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-8310 exec execpod-affinity8jtnj -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 31000' Jan 12 23:08:47.262: INFO: stderr: "I0112 23:08:47.175235 1052 log.go:181] (0xc000540000) (0xc000948000) Create stream\nI0112 23:08:47.175297 1052 log.go:181] (0xc000540000) (0xc000948000) Stream added, broadcasting: 1\nI0112 23:08:47.177085 1052 log.go:181] (0xc000540000) Reply frame received for 1\nI0112 23:08:47.177116 1052 log.go:181] (0xc000540000) (0xc0009480a0) Create stream\nI0112 23:08:47.177123 1052 log.go:181] (0xc000540000) (0xc0009480a0) Stream added, broadcasting: 3\nI0112 23:08:47.177921 1052 log.go:181] (0xc000540000) Reply frame received for 3\nI0112 23:08:47.177956 1052 log.go:181] (0xc000540000) (0xc0009481e0) Create stream\nI0112 23:08:47.177963 1052 log.go:181] (0xc000540000) (0xc0009481e0) Stream added, broadcasting: 5\nI0112 23:08:47.178714 1052 log.go:181] (0xc000540000) Reply frame received for 5\nI0112 23:08:47.255584 1052 log.go:181] (0xc000540000) Data frame received for 3\nI0112 23:08:47.255651 1052 log.go:181] (0xc0009480a0) (3) Data frame handling\nI0112 23:08:47.255702 1052 log.go:181] (0xc000540000) Data frame received for 5\nI0112 23:08:47.255763 1052 log.go:181] (0xc0009481e0) (5) Data frame handling\nI0112 23:08:47.255800 1052 log.go:181] (0xc0009481e0) (5) Data frame sent\nI0112 23:08:47.255823 1052 log.go:181] (0xc000540000) Data frame received for 5\nI0112 23:08:47.255834 1052 log.go:181] (0xc0009481e0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.13 31000\nConnection to 172.18.0.13 31000 port [tcp/31000] succeeded!\nI0112 23:08:47.257340 1052 log.go:181] (0xc000540000) Data frame received for 1\nI0112 23:08:47.257359 1052 log.go:181] (0xc000948000) (1) Data frame handling\nI0112 23:08:47.257372 1052 log.go:181] (0xc000948000) (1) Data frame sent\nI0112 23:08:47.257380 1052 log.go:181] (0xc000540000) (0xc000948000) Stream removed, broadcasting: 1\nI0112 23:08:47.257553 1052 log.go:181] (0xc000540000) Go away received\nI0112 23:08:47.257761 1052 log.go:181] (0xc000540000) (0xc000948000) Stream removed, broadcasting: 1\nI0112 23:08:47.257784 1052 log.go:181] (0xc000540000) (0xc0009480a0) Stream removed, broadcasting: 3\nI0112 23:08:47.257793 1052 log.go:181] (0xc000540000) (0xc0009481e0) Stream removed, broadcasting: 5\n" Jan 12 23:08:47.262: INFO: stdout: "" Jan 12 23:08:47.262: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-8310 exec execpod-affinity8jtnj -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 31000' Jan 12 23:08:47.451: INFO: stderr: "I0112 23:08:47.381432 1070 log.go:181] (0xc0009a6210) (0xc00099e460) Create stream\nI0112 23:08:47.381498 1070 log.go:181] (0xc0009a6210) (0xc00099e460) Stream added, broadcasting: 1\nI0112 23:08:47.383442 1070 log.go:181] (0xc0009a6210) Reply frame received for 1\nI0112 
23:08:47.383487 1070 log.go:181] (0xc0009a6210) (0xc00099e500) Create stream\nI0112 23:08:47.383501 1070 log.go:181] (0xc0009a6210) (0xc00099e500) Stream added, broadcasting: 3\nI0112 23:08:47.384503 1070 log.go:181] (0xc0009a6210) Reply frame received for 3\nI0112 23:08:47.384536 1070 log.go:181] (0xc0009a6210) (0xc00099e5a0) Create stream\nI0112 23:08:47.384546 1070 log.go:181] (0xc0009a6210) (0xc00099e5a0) Stream added, broadcasting: 5\nI0112 23:08:47.385390 1070 log.go:181] (0xc0009a6210) Reply frame received for 5\nI0112 23:08:47.442895 1070 log.go:181] (0xc0009a6210) Data frame received for 3\nI0112 23:08:47.442942 1070 log.go:181] (0xc00099e500) (3) Data frame handling\nI0112 23:08:47.442961 1070 log.go:181] (0xc0009a6210) Data frame received for 5\nI0112 23:08:47.442984 1070 log.go:181] (0xc00099e5a0) (5) Data frame handling\nI0112 23:08:47.443010 1070 log.go:181] (0xc00099e5a0) (5) Data frame sent\nI0112 23:08:47.443034 1070 log.go:181] (0xc0009a6210) Data frame received for 5\nI0112 23:08:47.443045 1070 log.go:181] (0xc00099e5a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.12 31000\nConnection to 172.18.0.12 31000 port [tcp/31000] succeeded!\nI0112 23:08:47.444701 1070 log.go:181] (0xc0009a6210) Data frame received for 1\nI0112 23:08:47.444724 1070 log.go:181] (0xc00099e460) (1) Data frame handling\nI0112 23:08:47.444751 1070 log.go:181] (0xc00099e460) (1) Data frame sent\nI0112 23:08:47.444810 1070 log.go:181] (0xc0009a6210) (0xc00099e460) Stream removed, broadcasting: 1\nI0112 23:08:47.444893 1070 log.go:181] (0xc0009a6210) Go away received\nI0112 23:08:47.445844 1070 log.go:181] (0xc0009a6210) (0xc00099e460) Stream removed, broadcasting: 1\nI0112 23:08:47.445880 1070 log.go:181] (0xc0009a6210) (0xc00099e500) Stream removed, broadcasting: 3\nI0112 23:08:47.445895 1070 log.go:181] (0xc0009a6210) (0xc00099e5a0) Stream removed, broadcasting: 5\n" Jan 12 23:08:47.451: INFO: stdout: "" Jan 12 23:08:47.451: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-8310 exec execpod-affinity8jtnj -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.13:31000/ ; done' Jan 12 23:08:47.751: INFO: stderr: "I0112 23:08:47.587212 1088 log.go:181] (0xc0001d5130) (0xc000b6e1e0) Create stream\nI0112 23:08:47.587260 1088 log.go:181] (0xc0001d5130) (0xc000b6e1e0) Stream added, broadcasting: 1\nI0112 23:08:47.588970 1088 log.go:181] (0xc0001d5130) Reply frame received for 1\nI0112 23:08:47.589046 1088 log.go:181] (0xc0001d5130) (0xc000b6e280) Create stream\nI0112 23:08:47.589069 1088 log.go:181] (0xc0001d5130) (0xc000b6e280) Stream added, broadcasting: 3\nI0112 23:08:47.590055 1088 log.go:181] (0xc0001d5130) Reply frame received for 3\nI0112 23:08:47.590091 1088 log.go:181] (0xc0001d5130) (0xc0005ce280) Create stream\nI0112 23:08:47.590102 1088 log.go:181] (0xc0001d5130) (0xc0005ce280) Stream added, broadcasting: 5\nI0112 23:08:47.591305 1088 log.go:181] (0xc0001d5130) Reply frame received for 5\nI0112 23:08:47.651614 1088 log.go:181] (0xc0001d5130) Data frame received for 5\nI0112 23:08:47.651644 1088 log.go:181] (0xc0005ce280) (5) Data frame handling\nI0112 23:08:47.651666 1088 log.go:181] (0xc0005ce280) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31000/\nI0112 23:08:47.651682 1088 log.go:181] (0xc0001d5130) Data frame received for 3\nI0112 23:08:47.651690 1088 log.go:181] (0xc000b6e280) (3) Data frame handling\nI0112 
23:08:47.651699 1088 log.go:181] (0xc000b6e280) (3) Data frame sent\nI0112 23:08:47.655717 1088 log.go:181] (0xc0001d5130) Data frame received for 3\nI0112 23:08:47.655744 1088 log.go:181] (0xc000b6e280) (3) Data frame handling\nI0112 23:08:47.655768 1088 log.go:181] (0xc000b6e280) (3) Data frame sent\nI0112 23:08:47.656189 1088 log.go:181] (0xc0001d5130) Data frame received for 3\nI0112 23:08:47.656220 1088 log.go:181] (0xc000b6e280) (3) Data frame handling\nI0112 23:08:47.656237 1088 log.go:181] (0xc000b6e280) (3) Data frame sent\nI0112 23:08:47.656280 1088 log.go:181] (0xc0001d5130) Data frame received for 5\nI0112 23:08:47.656307 1088 log.go:181] (0xc0005ce280) (5) Data frame handling\nI0112 23:08:47.656322 1088 log.go:181] (0xc0005ce280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31000/\nI0112 23:08:47.660184 1088 log.go:181] (0xc0001d5130) Data frame received for 3\nI0112 23:08:47.660213 1088 log.go:181] (0xc000b6e280) (3) Data frame handling\nI0112 23:08:47.660232 1088 log.go:181] (0xc000b6e280) (3) Data frame sent\nI0112 23:08:47.660753 1088 log.go:181] (0xc0001d5130) Data frame received for 3\nI0112 23:08:47.660781 1088 log.go:181] (0xc000b6e280) (3) Data frame handling\nI0112 23:08:47.660794 1088 log.go:181] (0xc000b6e280) (3) Data frame sent\nI0112 23:08:47.660819 1088 log.go:181] (0xc0001d5130) Data frame received for 5\nI0112 23:08:47.660925 1088 log.go:181] (0xc0005ce280) (5) Data frame handling\nI0112 23:08:47.660946 1088 log.go:181] (0xc0005ce280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31000/\nI0112 23:08:47.665526 1088 log.go:181] (0xc0001d5130) Data frame received for 3\nI0112 23:08:47.665557 1088 log.go:181] (0xc000b6e280) (3) Data frame handling\nI0112 23:08:47.665577 1088 log.go:181] (0xc000b6e280) (3) Data frame sent\nI0112 23:08:47.665908 1088 log.go:181] (0xc0001d5130) Data frame received for 5\nI0112 23:08:47.665927 1088 log.go:181] (0xc0005ce280) (5) Data frame handling\nI0112 23:08:47.665934 1088 log.go:181] (0xc0005ce280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31000/\nI0112 23:08:47.665953 1088 log.go:181] (0xc0001d5130) Data frame received for 3\nI0112 23:08:47.665982 1088 log.go:181] (0xc000b6e280) (3) Data frame handling\nI0112 23:08:47.666009 1088 log.go:181] (0xc000b6e280) (3) Data frame sent\nI0112 23:08:47.669901 1088 log.go:181] (0xc0001d5130) Data frame received for 3\nI0112 23:08:47.669920 1088 log.go:181] (0xc000b6e280) (3) Data frame handling\nI0112 23:08:47.669937 1088 log.go:181] (0xc000b6e280) (3) Data frame sent\nI0112 23:08:47.670331 1088 log.go:181] (0xc0001d5130) Data frame received for 5\nI0112 23:08:47.670371 1088 log.go:181] (0xc0005ce280) (5) Data frame handling\nI0112 23:08:47.670406 1088 log.go:181] (0xc0005ce280) (5) Data frame sent\nI0112 23:08:47.670440 1088 log.go:181] (0xc0001d5130) Data frame received for 3\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31000/\nI0112 23:08:47.670477 1088 log.go:181] (0xc000b6e280) (3) Data frame handling\nI0112 23:08:47.670494 1088 log.go:181] (0xc000b6e280) (3) Data frame sent\nI0112 23:08:47.674346 1088 log.go:181] (0xc0001d5130) Data frame received for 3\nI0112 23:08:47.674366 1088 log.go:181] (0xc000b6e280) (3) Data frame handling\nI0112 23:08:47.674380 1088 log.go:181] (0xc000b6e280) (3) Data frame sent\nI0112 23:08:47.674626 1088 log.go:181] (0xc0001d5130) Data frame received for 5\nI0112 23:08:47.674639 1088 log.go:181] (0xc0005ce280) (5) Data frame 
handling\nI0112 23:08:47.674657 1088 log.go:181] (0xc0005ce280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31000/\nI0112 23:08:47.674685 1088 log.go:181] (0xc0001d5130) Data frame received for 3\nI0112 23:08:47.674704 1088 log.go:181] (0xc000b6e280) (3) Data frame handling\nI0112 23:08:47.674723 1088 log.go:181] (0xc000b6e280) (3) Data frame sent\nI0112 23:08:47.681483 1088 log.go:181] (0xc0001d5130) Data frame received for 3\nI0112 23:08:47.681510 1088 log.go:181] (0xc000b6e280) (3) Data frame handling\nI0112 23:08:47.681530 1088 log.go:181] (0xc000b6e280) (3) Data frame sent\nI0112 23:08:47.682103 1088 log.go:181] (0xc0001d5130) Data frame received for 5\nI0112 23:08:47.682147 1088 log.go:181] (0xc0005ce280) (5) Data frame handling\nI0112 23:08:47.682169 1088 log.go:181] (0xc0005ce280) (5) Data frame sent\nI0112 23:08:47.682181 1088 log.go:181] (0xc0001d5130) Data frame received for 5\nI0112 23:08:47.682190 1088 log.go:181] (0xc0005ce280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31000/\nI0112 23:08:47.682217 1088 log.go:181] (0xc0005ce280) (5) Data frame sent\nI0112 23:08:47.682244 1088 log.go:181] (0xc0001d5130) Data frame received for 3\nI0112 23:08:47.682256 1088 log.go:181] (0xc000b6e280) (3) Data frame handling\nI0112 23:08:47.682266 1088 log.go:181] (0xc000b6e280) (3) Data frame sent\nI0112 23:08:47.686694 1088 log.go:181] (0xc0001d5130) Data frame received for 3\nI0112 23:08:47.686740 1088 log.go:181] (0xc000b6e280) (3) Data frame handling\nI0112 23:08:47.686779 1088 log.go:181] (0xc000b6e280) (3) Data frame sent\nI0112 23:08:47.687090 1088 log.go:181] (0xc0001d5130) Data frame received for 5\nI0112 23:08:47.687121 1088 log.go:181] (0xc0005ce280) (5) Data frame handling\nI0112 23:08:47.687144 1088 log.go:181] (0xc0005ce280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31000/\nI0112 23:08:47.689447 1088 log.go:181] (0xc0001d5130) Data frame received for 3\nI0112 23:08:47.689477 1088 log.go:181] (0xc000b6e280) (3) Data frame handling\nI0112 23:08:47.689518 1088 log.go:181] (0xc000b6e280) (3) Data frame sent\nI0112 23:08:47.693153 1088 log.go:181] (0xc0001d5130) Data frame received for 3\nI0112 23:08:47.693172 1088 log.go:181] (0xc000b6e280) (3) Data frame handling\nI0112 23:08:47.693187 1088 log.go:181] (0xc000b6e280) (3) Data frame sent\nI0112 23:08:47.693709 1088 log.go:181] (0xc0001d5130) Data frame received for 3\nI0112 23:08:47.693736 1088 log.go:181] (0xc0001d5130) Data frame received for 5\nI0112 23:08:47.693775 1088 log.go:181] (0xc0005ce280) (5) Data frame handling\nI0112 23:08:47.693793 1088 log.go:181] (0xc0005ce280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31000/\nI0112 23:08:47.693809 1088 log.go:181] (0xc000b6e280) (3) Data frame handling\nI0112 23:08:47.693818 1088 log.go:181] (0xc000b6e280) (3) Data frame sent\nI0112 23:08:47.699662 1088 log.go:181] (0xc0001d5130) Data frame received for 3\nI0112 23:08:47.699675 1088 log.go:181] (0xc000b6e280) (3) Data frame handling\nI0112 23:08:47.699681 1088 log.go:181] (0xc000b6e280) (3) Data frame sent\nI0112 23:08:47.700694 1088 log.go:181] (0xc0001d5130) Data frame received for 3\nI0112 23:08:47.700704 1088 log.go:181] (0xc000b6e280) (3) Data frame handling\nI0112 23:08:47.700709 1088 log.go:181] (0xc000b6e280) (3) Data frame sent\nI0112 23:08:47.700746 1088 log.go:181] (0xc0001d5130) Data frame received for 5\nI0112 23:08:47.700766 1088 log.go:181] (0xc0005ce280) (5) 
Data frame handling\nI0112 23:08:47.700786 1088 log.go:181] (0xc0005ce280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31000/\nI0112 23:08:47.706159 1088 log.go:181] (0xc0001d5130) Data frame received for 3\nI0112 23:08:47.706176 1088 log.go:181] (0xc000b6e280) (3) Data frame handling\nI0112 23:08:47.706189 1088 log.go:181] (0xc000b6e280) (3) Data frame sent\nI0112 23:08:47.706933 1088 log.go:181] (0xc0001d5130) Data frame received for 3\nI0112 23:08:47.706952 1088 log.go:181] (0xc000b6e280) (3) Data frame handling\nI0112 23:08:47.706961 1088 log.go:181] (0xc000b6e280) (3) Data frame sent\nI0112 23:08:47.706971 1088 log.go:181] (0xc0001d5130) Data frame received for 5\nI0112 23:08:47.706979 1088 log.go:181] (0xc0005ce280) (5) Data frame handling\nI0112 23:08:47.706987 1088 log.go:181] (0xc0005ce280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31000/\nI0112 23:08:47.710383 1088 log.go:181] (0xc0001d5130) Data frame received for 3\nI0112 23:08:47.710406 1088 log.go:181] (0xc000b6e280) (3) Data frame handling\nI0112 23:08:47.710423 1088 log.go:181] (0xc000b6e280) (3) Data frame sent\nI0112 23:08:47.711067 1088 log.go:181] (0xc0001d5130) Data frame received for 5\nI0112 23:08:47.711090 1088 log.go:181] (0xc0005ce280) (5) Data frame handling\nI0112 23:08:47.711099 1088 log.go:181] (0xc0005ce280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31000/\nI0112 23:08:47.711122 1088 log.go:181] (0xc0001d5130) Data frame received for 3\nI0112 23:08:47.711155 1088 log.go:181] (0xc000b6e280) (3) Data frame handling\nI0112 23:08:47.711190 1088 log.go:181] (0xc000b6e280) (3) Data frame sent\nI0112 23:08:47.715873 1088 log.go:181] (0xc0001d5130) Data frame received for 3\nI0112 23:08:47.715888 1088 log.go:181] (0xc000b6e280) (3) Data frame handling\nI0112 23:08:47.715896 1088 log.go:181] (0xc000b6e280) (3) Data frame sent\nI0112 23:08:47.716593 1088 log.go:181] (0xc0001d5130) Data frame received for 3\nI0112 23:08:47.716622 1088 log.go:181] (0xc000b6e280) (3) Data frame handling\nI0112 23:08:47.716637 1088 log.go:181] (0xc000b6e280) (3) Data frame sent\nI0112 23:08:47.716664 1088 log.go:181] (0xc0001d5130) Data frame received for 5\nI0112 23:08:47.716675 1088 log.go:181] (0xc0005ce280) (5) Data frame handling\nI0112 23:08:47.716687 1088 log.go:181] (0xc0005ce280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31000/\nI0112 23:08:47.721264 1088 log.go:181] (0xc0001d5130) Data frame received for 3\nI0112 23:08:47.721306 1088 log.go:181] (0xc000b6e280) (3) Data frame handling\nI0112 23:08:47.721349 1088 log.go:181] (0xc000b6e280) (3) Data frame sent\nI0112 23:08:47.721847 1088 log.go:181] (0xc0001d5130) Data frame received for 3\nI0112 23:08:47.721896 1088 log.go:181] (0xc000b6e280) (3) Data frame handling\nI0112 23:08:47.721910 1088 log.go:181] (0xc000b6e280) (3) Data frame sent\nI0112 23:08:47.721928 1088 log.go:181] (0xc0001d5130) Data frame received for 5\nI0112 23:08:47.721940 1088 log.go:181] (0xc0005ce280) (5) Data frame handling\nI0112 23:08:47.721957 1088 log.go:181] (0xc0005ce280) (5) Data frame sent\nI0112 23:08:47.721980 1088 log.go:181] (0xc0001d5130) Data frame received for 5\nI0112 23:08:47.722003 1088 log.go:181] (0xc0005ce280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31000/\nI0112 23:08:47.722060 1088 log.go:181] (0xc0005ce280) (5) Data frame sent\nI0112 23:08:47.727299 1088 log.go:181] (0xc0001d5130) Data 
frame received for 3\nI0112 23:08:47.727318 1088 log.go:181] (0xc000b6e280) (3) Data frame handling\nI0112 23:08:47.727337 1088 log.go:181] (0xc000b6e280) (3) Data frame sent\nI0112 23:08:47.728038 1088 log.go:181] (0xc0001d5130) Data frame received for 5\nI0112 23:08:47.728056 1088 log.go:181] (0xc0005ce280) (5) Data frame handling\nI0112 23:08:47.728065 1088 log.go:181] (0xc0005ce280) (5) Data frame sent\nI0112 23:08:47.728073 1088 log.go:181] (0xc0001d5130) Data frame received for 5\nI0112 23:08:47.728079 1088 log.go:181] (0xc0005ce280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31000/\nI0112 23:08:47.728091 1088 log.go:181] (0xc0001d5130) Data frame received for 3\nI0112 23:08:47.728151 1088 log.go:181] (0xc000b6e280) (3) Data frame handling\nI0112 23:08:47.728178 1088 log.go:181] (0xc0005ce280) (5) Data frame sent\nI0112 23:08:47.728207 1088 log.go:181] (0xc000b6e280) (3) Data frame sent\nI0112 23:08:47.733135 1088 log.go:181] (0xc0001d5130) Data frame received for 3\nI0112 23:08:47.733159 1088 log.go:181] (0xc000b6e280) (3) Data frame handling\nI0112 23:08:47.733179 1088 log.go:181] (0xc000b6e280) (3) Data frame sent\nI0112 23:08:47.733982 1088 log.go:181] (0xc0001d5130) Data frame received for 3\nI0112 23:08:47.734009 1088 log.go:181] (0xc000b6e280) (3) Data frame handling\nI0112 23:08:47.734027 1088 log.go:181] (0xc000b6e280) (3) Data frame sent\nI0112 23:08:47.734052 1088 log.go:181] (0xc0001d5130) Data frame received for 5\nI0112 23:08:47.734064 1088 log.go:181] (0xc0005ce280) (5) Data frame handling\nI0112 23:08:47.734092 1088 log.go:181] (0xc0005ce280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31000/\nI0112 23:08:47.741925 1088 log.go:181] (0xc0001d5130) Data frame received for 3\nI0112 23:08:47.741946 1088 log.go:181] (0xc000b6e280) (3) Data frame handling\nI0112 23:08:47.741958 1088 log.go:181] (0xc000b6e280) (3) Data frame sent\nI0112 23:08:47.743328 1088 log.go:181] (0xc0001d5130) Data frame received for 5\nI0112 23:08:47.743370 1088 log.go:181] (0xc0005ce280) (5) Data frame handling\nI0112 23:08:47.743709 1088 log.go:181] (0xc0001d5130) Data frame received for 3\nI0112 23:08:47.743741 1088 log.go:181] (0xc000b6e280) (3) Data frame handling\nI0112 23:08:47.745429 1088 log.go:181] (0xc0001d5130) Data frame received for 1\nI0112 23:08:47.745473 1088 log.go:181] (0xc000b6e1e0) (1) Data frame handling\nI0112 23:08:47.745495 1088 log.go:181] (0xc000b6e1e0) (1) Data frame sent\nI0112 23:08:47.745526 1088 log.go:181] (0xc0001d5130) (0xc000b6e1e0) Stream removed, broadcasting: 1\nI0112 23:08:47.745562 1088 log.go:181] (0xc0001d5130) Go away received\nI0112 23:08:47.745930 1088 log.go:181] (0xc0001d5130) (0xc000b6e1e0) Stream removed, broadcasting: 1\nI0112 23:08:47.745949 1088 log.go:181] (0xc0001d5130) (0xc000b6e280) Stream removed, broadcasting: 3\nI0112 23:08:47.745959 1088 log.go:181] (0xc0001d5130) (0xc0005ce280) Stream removed, broadcasting: 5\n" Jan 12 23:08:47.752: INFO: stdout: "\naffinity-nodeport-h4pnm\naffinity-nodeport-h4pnm\naffinity-nodeport-h4pnm\naffinity-nodeport-h4pnm\naffinity-nodeport-h4pnm\naffinity-nodeport-h4pnm\naffinity-nodeport-h4pnm\naffinity-nodeport-h4pnm\naffinity-nodeport-h4pnm\naffinity-nodeport-h4pnm\naffinity-nodeport-h4pnm\naffinity-nodeport-h4pnm\naffinity-nodeport-h4pnm\naffinity-nodeport-h4pnm\naffinity-nodeport-h4pnm\naffinity-nodeport-h4pnm" Jan 12 23:08:47.752: INFO: Received response from host: affinity-nodeport-h4pnm Jan 12 23:08:47.752: INFO: Received 
response from host: affinity-nodeport-h4pnm Jan 12 23:08:47.752: INFO: Received response from host: affinity-nodeport-h4pnm Jan 12 23:08:47.752: INFO: Received response from host: affinity-nodeport-h4pnm Jan 12 23:08:47.752: INFO: Received response from host: affinity-nodeport-h4pnm Jan 12 23:08:47.752: INFO: Received response from host: affinity-nodeport-h4pnm Jan 12 23:08:47.752: INFO: Received response from host: affinity-nodeport-h4pnm Jan 12 23:08:47.752: INFO: Received response from host: affinity-nodeport-h4pnm Jan 12 23:08:47.752: INFO: Received response from host: affinity-nodeport-h4pnm Jan 12 23:08:47.752: INFO: Received response from host: affinity-nodeport-h4pnm Jan 12 23:08:47.752: INFO: Received response from host: affinity-nodeport-h4pnm Jan 12 23:08:47.752: INFO: Received response from host: affinity-nodeport-h4pnm Jan 12 23:08:47.752: INFO: Received response from host: affinity-nodeport-h4pnm Jan 12 23:08:47.752: INFO: Received response from host: affinity-nodeport-h4pnm Jan 12 23:08:47.752: INFO: Received response from host: affinity-nodeport-h4pnm Jan 12 23:08:47.752: INFO: Received response from host: affinity-nodeport-h4pnm Jan 12 23:08:47.752: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-8310, will wait for the garbage collector to delete the pods Jan 12 23:08:47.850: INFO: Deleting ReplicationController affinity-nodeport took: 7.208538ms Jan 12 23:08:48.451: INFO: Terminating ReplicationController affinity-nodeport pods took: 600.283541ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:09:50.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8310" for this suite. 
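[Editor's note] The repeated curl loop above returning a single backend (affinity-nodeport-h4pnm) is the session-affinity check. A sketch of an equivalent NodePort Service with ClientIP affinity; the selector, target port, and node IP are illustrative assumptions (only nodePort 31000 appears in the log):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: affinity-nodeport
    spec:
      type: NodePort
      sessionAffinity: ClientIP        # requests from one client IP stick to one backend pod
      selector:
        name: affinity-nodeport        # assumed pod label
      ports:
      - port: 80
        targetPort: 9376               # assumed container port
        nodePort: 31000
    EOF
    # Repeated requests from a single client should then all return the same pod name:
    for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://<node-ip>:31000/; done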
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:80.051 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":309,"completed":78,"skipped":1243,"failed":0} SSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:09:50.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 12 23:09:50.414: INFO: Waiting up to 5m0s for pod "busybox-user-65534-3e8db4d6-ad09-4e54-a222-63aafc55c55b" in namespace "security-context-test-5869" to be "Succeeded or Failed" Jan 12 23:09:50.417: INFO: Pod "busybox-user-65534-3e8db4d6-ad09-4e54-a222-63aafc55c55b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.48417ms Jan 12 23:09:52.588: INFO: Pod "busybox-user-65534-3e8db4d6-ad09-4e54-a222-63aafc55c55b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.174396515s Jan 12 23:09:54.593: INFO: Pod "busybox-user-65534-3e8db4d6-ad09-4e54-a222-63aafc55c55b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.179674034s Jan 12 23:09:54.593: INFO: Pod "busybox-user-65534-3e8db4d6-ad09-4e54-a222-63aafc55c55b" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:09:54.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5869" for this suite. 
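[Editor's note] A sketch of the pod spec behaviour verified above: with securityContext.runAsUser set, the container process runs as UID 65534. Pod name, image, and command are assumptions, not the suite's manifest:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox-user-65534
    spec:
      restartPolicy: Never
      containers:
      - name: busybox
        image: busybox:1.29
        command: ["sh", "-c", "id -u"]       # prints 65534 if runAsUser took effect
        securityContext:
          runAsUser: 65534
    EOF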
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":79,"skipped":1249,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] server version should find the server version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:09:54.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename server-version STEP: Waiting for a default service account to be provisioned in namespace [It] should find the server version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Request ServerVersion STEP: Confirm major version Jan 12 23:09:54.659: INFO: Major version: 1 STEP: Confirm minor version Jan 12 23:09:54.659: INFO: cleanMinorVersion: 20 Jan 12 23:09:54.659: INFO: Minor version: 20 [AfterEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:09:54.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "server-version-7167" for this suite. •{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":309,"completed":80,"skipped":1267,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:09:54.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name cm-test-opt-del-10f6f8ac-6417-4bb4-b8de-816d76011721 STEP: Creating configMap with name cm-test-opt-upd-d7bd4c32-b5cd-4c43-8b9f-c4293cf5008d STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-10f6f8ac-6417-4bb4-b8de-816d76011721 STEP: Updating configmap cm-test-opt-upd-d7bd4c32-b5cd-4c43-8b9f-c4293cf5008d STEP: Creating configMap with name cm-test-opt-create-394f9b1a-895c-4f10-97f6-4f4282d41962 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:11:31.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5721" for this suite. 
• [SLOW TEST:96.926 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":309,"completed":81,"skipped":1295,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:11:31.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 12 23:11:31.718: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Jan 12 23:11:33.768: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:11:34.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9332" for this suite. 
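[Editor's note] A rough reproduction of the quota scenario above: a quota allowing two pods, a ReplicationController asking for three, a ReplicaFailure condition surfacing on the RC, and the condition clearing once the RC is scaled within quota. Names and image are assumptions:

    kubectl create quota condition-test --hard=pods=2
    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: condition-test
    spec:
      replicas: 3                        # more than the quota allows
      selector:
        name: condition-test
      template:
        metadata:
          labels:
            name: condition-test
        spec:
          containers:
          - name: pause
            image: k8s.gcr.io/pause:3.2
    EOF
    kubectl get rc condition-test -o jsonpath='{.status.conditions}'   # expect a ReplicaFailure condition
    kubectl scale rc condition-test --replicas=2                        # within quota; the condition should clear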
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":309,"completed":82,"skipped":1309,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:11:34.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating Agnhost RC Jan 12 23:11:35.826: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7716 create -f -' Jan 12 23:11:36.602: INFO: stderr: "" Jan 12 23:11:36.602: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Jan 12 23:11:37.617: INFO: Selector matched 1 pods for map[app:agnhost] Jan 12 23:11:37.617: INFO: Found 0 / 1 Jan 12 23:11:38.607: INFO: Selector matched 1 pods for map[app:agnhost] Jan 12 23:11:38.607: INFO: Found 0 / 1 Jan 12 23:11:39.607: INFO: Selector matched 1 pods for map[app:agnhost] Jan 12 23:11:39.607: INFO: Found 0 / 1 Jan 12 23:11:40.619: INFO: Selector matched 1 pods for map[app:agnhost] Jan 12 23:11:40.619: INFO: Found 0 / 1 Jan 12 23:11:41.605: INFO: Selector matched 1 pods for map[app:agnhost] Jan 12 23:11:41.605: INFO: Found 1 / 1 Jan 12 23:11:41.605: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jan 12 23:11:41.607: INFO: Selector matched 1 pods for map[app:agnhost] Jan 12 23:11:41.608: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jan 12 23:11:41.608: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7716 patch pod agnhost-primary-4w6w6 -p {"metadata":{"annotations":{"x":"y"}}}' Jan 12 23:11:41.709: INFO: stderr: "" Jan 12 23:11:41.709: INFO: stdout: "pod/agnhost-primary-4w6w6 patched\n" STEP: checking annotations Jan 12 23:11:41.853: INFO: Selector matched 1 pods for map[app:agnhost] Jan 12 23:11:41.853: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:11:41.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7716" for this suite. 
• [SLOW TEST:6.907 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1466 should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":309,"completed":83,"skipped":1314,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:11:41.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 12 23:11:42.068: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-a96458d3-8354-4087-8e67-41510e73494b" in namespace "security-context-test-3479" to be "Succeeded or Failed" Jan 12 23:11:42.106: INFO: Pod "busybox-privileged-false-a96458d3-8354-4087-8e67-41510e73494b": Phase="Pending", Reason="", readiness=false. Elapsed: 37.968864ms Jan 12 23:11:44.108: INFO: Pod "busybox-privileged-false-a96458d3-8354-4087-8e67-41510e73494b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040451251s Jan 12 23:11:46.122: INFO: Pod "busybox-privileged-false-a96458d3-8354-4087-8e67-41510e73494b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054037007s Jan 12 23:11:46.122: INFO: Pod "busybox-privileged-false-a96458d3-8354-4087-8e67-41510e73494b" satisfied condition "Succeeded or Failed" Jan 12 23:11:46.127: INFO: Got logs for pod "busybox-privileged-false-a96458d3-8354-4087-8e67-41510e73494b": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:11:46.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3479" for this suite. 
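[Editor's note] A sketch of an unprivileged container like the one above: without privileged mode (and hence without NET_ADMIN), attempts to modify host networking fail, which is consistent with the "RTNETLINK answers: Operation not permitted" log captured by the test. Pod name, image, and command are assumptions:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox-privileged-false
    spec:
      restartPolicy: Never
      containers:
      - name: busybox
        image: busybox:1.29
        command: ["sh", "-c", "ip link add dummy0 type dummy || true"]   # expected: Operation not permitted
        securityContext:
          privileged: false
    EOF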
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":84,"skipped":1338,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:11:46.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir volume type on node default medium Jan 12 23:11:46.361: INFO: Waiting up to 5m0s for pod "pod-00c2770f-a05a-4612-9602-a887deecaeeb" in namespace "emptydir-727" to be "Succeeded or Failed" Jan 12 23:11:46.523: INFO: Pod "pod-00c2770f-a05a-4612-9602-a887deecaeeb": Phase="Pending", Reason="", readiness=false. Elapsed: 162.2409ms Jan 12 23:11:48.614: INFO: Pod "pod-00c2770f-a05a-4612-9602-a887deecaeeb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.252425559s Jan 12 23:11:50.619: INFO: Pod "pod-00c2770f-a05a-4612-9602-a887deecaeeb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.258200462s Jan 12 23:11:52.624: INFO: Pod "pod-00c2770f-a05a-4612-9602-a887deecaeeb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.262998391s STEP: Saw pod success Jan 12 23:11:52.624: INFO: Pod "pod-00c2770f-a05a-4612-9602-a887deecaeeb" satisfied condition "Succeeded or Failed" Jan 12 23:11:52.627: INFO: Trying to get logs from node leguer-worker2 pod pod-00c2770f-a05a-4612-9602-a887deecaeeb container test-container: STEP: delete the pod Jan 12 23:11:52.663: INFO: Waiting for pod pod-00c2770f-a05a-4612-9602-a887deecaeeb to disappear Jan 12 23:11:52.677: INFO: Pod pod-00c2770f-a05a-4612-9602-a887deecaeeb no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:11:52.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-727" for this suite. • [SLOW TEST:6.552 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":85,"skipped":1340,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:11:52.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:12:03.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9856" for this suite. • [SLOW TEST:11.196 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":309,"completed":86,"skipped":1384,"failed":0} SSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:12:03.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service in namespace services-1910 Jan 12 23:12:08.013: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-1910 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Jan 12 23:12:08.281: INFO: stderr: "I0112 23:12:08.173332 1139 log.go:181] (0xc000244fd0) (0xc000c3e1e0) Create stream\nI0112 23:12:08.173402 1139 log.go:181] (0xc000244fd0) (0xc000c3e1e0) Stream added, broadcasting: 1\nI0112 23:12:08.178117 1139 log.go:181] (0xc000244fd0) Reply frame received for 1\nI0112 23:12:08.178178 1139 log.go:181] (0xc000244fd0) (0xc000c3e280) Create stream\nI0112 23:12:08.178214 1139 log.go:181] (0xc000244fd0) (0xc000c3e280) Stream added, broadcasting: 3\nI0112 23:12:08.179621 1139 log.go:181] (0xc000244fd0) Reply frame received for 3\nI0112 23:12:08.179664 1139 log.go:181] (0xc000244fd0) (0xc000ad0000) Create stream\nI0112 23:12:08.179680 1139 log.go:181] (0xc000244fd0) (0xc000ad0000) Stream added, broadcasting: 5\nI0112 23:12:08.180709 1139 log.go:181] (0xc000244fd0) Reply frame received for 5\nI0112 23:12:08.252308 1139 log.go:181] (0xc000244fd0) Data frame received for 5\nI0112 23:12:08.252331 1139 log.go:181] (0xc000ad0000) (5) Data frame handling\nI0112 23:12:08.252344 1139 log.go:181] (0xc000ad0000) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0112 23:12:08.273231 1139 log.go:181] (0xc000244fd0) Data frame received for 3\nI0112 23:12:08.273247 1139 log.go:181] (0xc000c3e280) (3) Data frame handling\nI0112 23:12:08.273255 1139 log.go:181] (0xc000c3e280) (3) Data frame sent\nI0112 23:12:08.273866 1139 log.go:181] (0xc000244fd0) Data frame received for 5\nI0112 23:12:08.273883 1139 log.go:181] (0xc000ad0000) (5) Data frame handling\nI0112 23:12:08.274279 1139 log.go:181] (0xc000244fd0) Data frame received for 3\nI0112 23:12:08.274306 1139 log.go:181] (0xc000c3e280) (3) Data frame handling\nI0112 23:12:08.275875 1139 log.go:181] (0xc000244fd0) Data frame received for 1\nI0112 23:12:08.275905 1139 log.go:181] (0xc000c3e1e0) (1) Data frame handling\nI0112 23:12:08.275925 1139 log.go:181] (0xc000c3e1e0) (1) Data frame sent\nI0112 23:12:08.275971 1139 log.go:181] (0xc000244fd0) (0xc000c3e1e0) Stream removed, broadcasting: 1\nI0112 23:12:08.276005 1139 log.go:181] (0xc000244fd0) Go away received\nI0112 23:12:08.276385 1139 log.go:181] (0xc000244fd0) 
(0xc000c3e1e0) Stream removed, broadcasting: 1\nI0112 23:12:08.276410 1139 log.go:181] (0xc000244fd0) (0xc000c3e280) Stream removed, broadcasting: 3\nI0112 23:12:08.276422 1139 log.go:181] (0xc000244fd0) (0xc000ad0000) Stream removed, broadcasting: 5\n" Jan 12 23:12:08.282: INFO: stdout: "iptables" Jan 12 23:12:08.282: INFO: proxyMode: iptables Jan 12 23:12:08.326: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jan 12 23:12:08.342: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-1910 STEP: creating replication controller affinity-clusterip-timeout in namespace services-1910 I0112 23:12:08.409252 7 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-1910, replica count: 3 I0112 23:12:11.459675 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0112 23:12:14.459906 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 12 23:12:14.472: INFO: Creating new exec pod Jan 12 23:12:19.493: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-1910 exec execpod-affinity8b6xc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jan 12 23:12:22.092: INFO: stderr: "I0112 23:12:22.010365 1157 log.go:181] (0xc0000c0160) (0xc000988140) Create stream\nI0112 23:12:22.010417 1157 log.go:181] (0xc0000c0160) (0xc000988140) Stream added, broadcasting: 1\nI0112 23:12:22.012498 1157 log.go:181] (0xc0000c0160) Reply frame received for 1\nI0112 23:12:22.012574 1157 log.go:181] (0xc0000c0160) (0xc000988820) Create stream\nI0112 23:12:22.012614 1157 log.go:181] (0xc0000c0160) (0xc000988820) Stream added, broadcasting: 3\nI0112 23:12:22.014379 1157 log.go:181] (0xc0000c0160) Reply frame received for 3\nI0112 23:12:22.014413 1157 log.go:181] (0xc0000c0160) (0xc00081c460) Create stream\nI0112 23:12:22.014428 1157 log.go:181] (0xc0000c0160) (0xc00081c460) Stream added, broadcasting: 5\nI0112 23:12:22.015102 1157 log.go:181] (0xc0000c0160) Reply frame received for 5\nI0112 23:12:22.085773 1157 log.go:181] (0xc0000c0160) Data frame received for 5\nI0112 23:12:22.085805 1157 log.go:181] (0xc00081c460) (5) Data frame handling\nI0112 23:12:22.085823 1157 log.go:181] (0xc00081c460) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nI0112 23:12:22.086052 1157 log.go:181] (0xc0000c0160) Data frame received for 5\nI0112 23:12:22.086064 1157 log.go:181] (0xc00081c460) (5) Data frame handling\nI0112 23:12:22.086075 1157 log.go:181] (0xc00081c460) (5) Data frame sent\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI0112 23:12:22.086312 1157 log.go:181] (0xc0000c0160) Data frame received for 3\nI0112 23:12:22.086334 1157 log.go:181] (0xc000988820) (3) Data frame handling\nI0112 23:12:22.086667 1157 log.go:181] (0xc0000c0160) Data frame received for 5\nI0112 23:12:22.086691 1157 log.go:181] (0xc00081c460) (5) Data frame handling\nI0112 23:12:22.087819 1157 log.go:181] (0xc0000c0160) Data frame received for 1\nI0112 23:12:22.087837 1157 log.go:181] (0xc000988140) (1) Data frame handling\nI0112 23:12:22.087854 1157 log.go:181] (0xc000988140) (1) Data frame sent\nI0112 23:12:22.087864 1157 log.go:181] (0xc0000c0160) (0xc000988140) Stream removed, 
broadcasting: 1\nI0112 23:12:22.087940 1157 log.go:181] (0xc0000c0160) Go away received\nI0112 23:12:22.088164 1157 log.go:181] (0xc0000c0160) (0xc000988140) Stream removed, broadcasting: 1\nI0112 23:12:22.088180 1157 log.go:181] (0xc0000c0160) (0xc000988820) Stream removed, broadcasting: 3\nI0112 23:12:22.088188 1157 log.go:181] (0xc0000c0160) (0xc00081c460) Stream removed, broadcasting: 5\n" Jan 12 23:12:22.092: INFO: stdout: "" Jan 12 23:12:22.093: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-1910 exec execpod-affinity8b6xc -- /bin/sh -x -c nc -zv -t -w 2 10.96.49.223 80' Jan 12 23:12:23.047: INFO: stderr: "I0112 23:12:22.976657 1175 log.go:181] (0xc00003a420) (0xc0008243c0) Create stream\nI0112 23:12:22.976757 1175 log.go:181] (0xc00003a420) (0xc0008243c0) Stream added, broadcasting: 1\nI0112 23:12:22.980815 1175 log.go:181] (0xc00003a420) Reply frame received for 1\nI0112 23:12:22.980993 1175 log.go:181] (0xc00003a420) (0xc000824d20) Create stream\nI0112 23:12:22.981021 1175 log.go:181] (0xc00003a420) (0xc000824d20) Stream added, broadcasting: 3\nI0112 23:12:22.982361 1175 log.go:181] (0xc00003a420) Reply frame received for 3\nI0112 23:12:22.982400 1175 log.go:181] (0xc00003a420) (0xc000314460) Create stream\nI0112 23:12:22.982419 1175 log.go:181] (0xc00003a420) (0xc000314460) Stream added, broadcasting: 5\nI0112 23:12:22.983536 1175 log.go:181] (0xc00003a420) Reply frame received for 5\nI0112 23:12:23.039132 1175 log.go:181] (0xc00003a420) Data frame received for 5\nI0112 23:12:23.039178 1175 log.go:181] (0xc000314460) (5) Data frame handling\nI0112 23:12:23.039199 1175 log.go:181] (0xc000314460) (5) Data frame sent\nI0112 23:12:23.039213 1175 log.go:181] (0xc00003a420) Data frame received for 5\nI0112 23:12:23.039223 1175 log.go:181] (0xc000314460) (5) Data frame handling\nI0112 23:12:23.039238 1175 log.go:181] (0xc00003a420) Data frame received for 3\nI0112 23:12:23.039248 1175 log.go:181] (0xc000824d20) (3) Data frame handling\n+ nc -zv -t -w 2 10.96.49.223 80\nConnection to 10.96.49.223 80 port [tcp/http] succeeded!\nI0112 23:12:23.041015 1175 log.go:181] (0xc00003a420) Data frame received for 1\nI0112 23:12:23.041049 1175 log.go:181] (0xc0008243c0) (1) Data frame handling\nI0112 23:12:23.041076 1175 log.go:181] (0xc0008243c0) (1) Data frame sent\nI0112 23:12:23.041092 1175 log.go:181] (0xc00003a420) (0xc0008243c0) Stream removed, broadcasting: 1\nI0112 23:12:23.041215 1175 log.go:181] (0xc00003a420) Go away received\nI0112 23:12:23.041535 1175 log.go:181] (0xc00003a420) (0xc0008243c0) Stream removed, broadcasting: 1\nI0112 23:12:23.041560 1175 log.go:181] (0xc00003a420) (0xc000824d20) Stream removed, broadcasting: 3\nI0112 23:12:23.041573 1175 log.go:181] (0xc00003a420) (0xc000314460) Stream removed, broadcasting: 5\n" Jan 12 23:12:23.047: INFO: stdout: "" Jan 12 23:12:23.047: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-1910 exec execpod-affinity8b6xc -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.49.223:80/ ; done' Jan 12 23:12:23.346: INFO: stderr: "I0112 23:12:23.172174 1194 log.go:181] (0xc00003a0b0) (0xc000219900) Create stream\nI0112 23:12:23.172244 1194 log.go:181] (0xc00003a0b0) (0xc000219900) Stream added, broadcasting: 1\nI0112 23:12:23.174392 1194 log.go:181] (0xc00003a0b0) Reply frame received for 1\nI0112 23:12:23.174420 1194 log.go:181] (0xc00003a0b0) 
(0xc000219b80) Create stream\nI0112 23:12:23.174428 1194 log.go:181] (0xc00003a0b0) (0xc000219b80) Stream added, broadcasting: 3\nI0112 23:12:23.175248 1194 log.go:181] (0xc00003a0b0) Reply frame received for 3\nI0112 23:12:23.175285 1194 log.go:181] (0xc00003a0b0) (0xc000aba1e0) Create stream\nI0112 23:12:23.175294 1194 log.go:181] (0xc00003a0b0) (0xc000aba1e0) Stream added, broadcasting: 5\nI0112 23:12:23.175973 1194 log.go:181] (0xc00003a0b0) Reply frame received for 5\nI0112 23:12:23.241092 1194 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0112 23:12:23.241134 1194 log.go:181] (0xc000aba1e0) (5) Data frame handling\nI0112 23:12:23.241152 1194 log.go:181] (0xc000aba1e0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.49.223:80/\nI0112 23:12:23.241171 1194 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0112 23:12:23.241180 1194 log.go:181] (0xc000219b80) (3) Data frame handling\nI0112 23:12:23.241192 1194 log.go:181] (0xc000219b80) (3) Data frame sent\nI0112 23:12:23.244378 1194 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0112 23:12:23.244409 1194 log.go:181] (0xc000219b80) (3) Data frame handling\nI0112 23:12:23.244431 1194 log.go:181] (0xc000219b80) (3) Data frame sent\nI0112 23:12:23.245298 1194 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0112 23:12:23.245312 1194 log.go:181] (0xc000aba1e0) (5) Data frame handling\nI0112 23:12:23.245319 1194 log.go:181] (0xc000aba1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.49.223:80/\nI0112 23:12:23.245392 1194 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0112 23:12:23.245402 1194 log.go:181] (0xc000219b80) (3) Data frame handling\nI0112 23:12:23.245415 1194 log.go:181] (0xc000219b80) (3) Data frame sent\nI0112 23:12:23.251809 1194 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0112 23:12:23.251837 1194 log.go:181] (0xc000219b80) (3) Data frame handling\nI0112 23:12:23.251852 1194 log.go:181] (0xc000219b80) (3) Data frame sent\nI0112 23:12:23.252685 1194 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0112 23:12:23.252722 1194 log.go:181] (0xc000219b80) (3) Data frame handling\nI0112 23:12:23.252742 1194 log.go:181] (0xc000219b80) (3) Data frame sent\nI0112 23:12:23.252779 1194 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0112 23:12:23.252795 1194 log.go:181] (0xc000aba1e0) (5) Data frame handling\nI0112 23:12:23.252827 1194 log.go:181] (0xc000aba1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.49.223:80/\nI0112 23:12:23.258232 1194 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0112 23:12:23.258266 1194 log.go:181] (0xc000219b80) (3) Data frame handling\nI0112 23:12:23.258292 1194 log.go:181] (0xc000219b80) (3) Data frame sent\nI0112 23:12:23.259150 1194 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0112 23:12:23.259199 1194 log.go:181] (0xc000219b80) (3) Data frame handling\nI0112 23:12:23.259228 1194 log.go:181] (0xc000219b80) (3) Data frame sent\nI0112 23:12:23.259270 1194 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0112 23:12:23.259290 1194 log.go:181] (0xc000aba1e0) (5) Data frame handling\nI0112 23:12:23.259307 1194 log.go:181] (0xc000aba1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.49.223:80/\nI0112 23:12:23.266484 1194 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0112 23:12:23.266520 1194 log.go:181] (0xc000219b80) (3) Data frame handling\nI0112 23:12:23.266557 1194 log.go:181] (0xc000219b80) 
(3) Data frame sent\nI0112 23:12:23.267466 1194 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0112 23:12:23.267482 1194 log.go:181] (0xc000aba1e0) (5) Data frame handling\nI0112 23:12:23.267490 1194 log.go:181] (0xc000aba1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.49.223:80/\nI0112 23:12:23.267514 1194 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0112 23:12:23.267538 1194 log.go:181] (0xc000219b80) (3) Data frame handling\nI0112 23:12:23.267560 1194 log.go:181] (0xc000219b80) (3) Data frame sent\nI0112 23:12:23.274107 1194 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0112 23:12:23.274131 1194 log.go:181] (0xc000219b80) (3) Data frame handling\nI0112 23:12:23.274150 1194 log.go:181] (0xc000219b80) (3) Data frame sent\nI0112 23:12:23.274723 1194 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0112 23:12:23.274737 1194 log.go:181] (0xc000aba1e0) (5) Data frame handling\nI0112 23:12:23.274744 1194 log.go:181] (0xc000aba1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.49.223:80/\nI0112 23:12:23.274825 1194 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0112 23:12:23.274848 1194 log.go:181] (0xc000219b80) (3) Data frame handling\nI0112 23:12:23.274882 1194 log.go:181] (0xc000219b80) (3) Data frame sent\nI0112 23:12:23.278270 1194 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0112 23:12:23.278298 1194 log.go:181] (0xc000219b80) (3) Data frame handling\nI0112 23:12:23.278321 1194 log.go:181] (0xc000219b80) (3) Data frame sent\nI0112 23:12:23.279235 1194 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0112 23:12:23.279264 1194 log.go:181] (0xc000219b80) (3) Data frame handling\nI0112 23:12:23.279278 1194 log.go:181] (0xc000219b80) (3) Data frame sent\nI0112 23:12:23.279304 1194 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0112 23:12:23.279340 1194 log.go:181] (0xc000aba1e0) (5) Data frame handling\nI0112 23:12:23.279363 1194 log.go:181] (0xc000aba1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.49.223:80/\nI0112 23:12:23.284138 1194 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0112 23:12:23.284157 1194 log.go:181] (0xc000219b80) (3) Data frame handling\nI0112 23:12:23.284178 1194 log.go:181] (0xc000219b80) (3) Data frame sent\nI0112 23:12:23.285008 1194 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0112 23:12:23.285033 1194 log.go:181] (0xc000219b80) (3) Data frame handling\nI0112 23:12:23.285053 1194 log.go:181] (0xc000219b80) (3) Data frame sent\nI0112 23:12:23.285183 1194 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0112 23:12:23.285203 1194 log.go:181] (0xc000aba1e0) (5) Data frame handling\nI0112 23:12:23.285227 1194 log.go:181] (0xc000aba1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.49.223:80/\nI0112 23:12:23.289832 1194 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0112 23:12:23.289863 1194 log.go:181] (0xc000219b80) (3) Data frame handling\nI0112 23:12:23.289884 1194 log.go:181] (0xc000219b80) (3) Data frame sent\nI0112 23:12:23.290473 1194 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0112 23:12:23.290515 1194 log.go:181] (0xc000219b80) (3) Data frame handling\nI0112 23:12:23.290544 1194 log.go:181] (0xc000219b80) (3) Data frame sent\nI0112 23:12:23.290574 1194 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0112 23:12:23.290591 1194 log.go:181] (0xc000aba1e0) (5) Data frame handling\nI0112 23:12:23.290620 1194 log.go:181] (0xc000aba1e0) 
(5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.49.223:80/\nI0112 23:12:23.300731 1194 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0112 23:12:23.300750 1194 log.go:181] (0xc000219b80) (3) Data frame handling\nI0112 23:12:23.300762 1194 log.go:181] (0xc000219b80) (3) Data frame sent\nI0112 23:12:23.301500 1194 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0112 23:12:23.301544 1194 log.go:181] (0xc000219b80) (3) Data frame handling\nI0112 23:12:23.301566 1194 log.go:181] (0xc000219b80) (3) Data frame sent\nI0112 23:12:23.301601 1194 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0112 23:12:23.301620 1194 log.go:181] (0xc000aba1e0) (5) Data frame handling\nI0112 23:12:23.301640 1194 log.go:181] (0xc000aba1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.49.223:80/\nI0112 23:12:23.306549 1194 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0112 23:12:23.306568 1194 log.go:181] (0xc000219b80) (3) Data frame handling\nI0112 23:12:23.306579 1194 log.go:181] (0xc000219b80) (3) Data frame sent\nI0112 23:12:23.307010 1194 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0112 23:12:23.307027 1194 log.go:181] (0xc000219b80) (3) Data frame handling\nI0112 23:12:23.307034 1194 log.go:181] (0xc000219b80) (3) Data frame sent\nI0112 23:12:23.307045 1194 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0112 23:12:23.307050 1194 log.go:181] (0xc000aba1e0) (5) Data frame handling\nI0112 23:12:23.307057 1194 log.go:181] (0xc000aba1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.49.223:80/\nI0112 23:12:23.311390 1194 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0112 23:12:23.311414 1194 log.go:181] (0xc000219b80) (3) Data frame handling\nI0112 23:12:23.311436 1194 log.go:181] (0xc000219b80) (3) Data frame sent\nI0112 23:12:23.311867 1194 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0112 23:12:23.311886 1194 log.go:181] (0xc000aba1e0) (5) Data frame handling\nI0112 23:12:23.311902 1194 log.go:181] (0xc000aba1e0) (5) Data frame sent\n+ echo\n+ curl -qI0112 23:12:23.311923 1194 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0112 23:12:23.311948 1194 log.go:181] (0xc000aba1e0) (5) Data frame handling\nI0112 23:12:23.311973 1194 log.go:181] (0xc000aba1e0) (5) Data frame sent\nI0112 23:12:23.311995 1194 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0112 23:12:23.312012 1194 log.go:181] (0xc000219b80) (3) Data frame handling\nI0112 23:12:23.312025 1194 log.go:181] (0xc000219b80) (3) Data frame sent\n -s --connect-timeout 2 http://10.96.49.223:80/\nI0112 23:12:23.315417 1194 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0112 23:12:23.315442 1194 log.go:181] (0xc000219b80) (3) Data frame handling\nI0112 23:12:23.315465 1194 log.go:181] (0xc000219b80) (3) Data frame sent\nI0112 23:12:23.315802 1194 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0112 23:12:23.315827 1194 log.go:181] (0xc000aba1e0) (5) Data frame handling\nI0112 23:12:23.315837 1194 log.go:181] (0xc000aba1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.49.223:80/\nI0112 23:12:23.315851 1194 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0112 23:12:23.315868 1194 log.go:181] (0xc000219b80) (3) Data frame handling\nI0112 23:12:23.315881 1194 log.go:181] (0xc000219b80) (3) Data frame sent\nI0112 23:12:23.321425 1194 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0112 23:12:23.321494 1194 log.go:181] (0xc000219b80) (3) Data 
frame handling\nI0112 23:12:23.321518 1194 log.go:181] (0xc000219b80) (3) Data frame sent\nI0112 23:12:23.321773 1194 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0112 23:12:23.321798 1194 log.go:181] (0xc000aba1e0) (5) Data frame handling\nI0112 23:12:23.321807 1194 log.go:181] (0xc000aba1e0) (5) Data frame sent\nI0112 23:12:23.321814 1194 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0112 23:12:23.321820 1194 log.go:181] (0xc000aba1e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.49.223:80/\nI0112 23:12:23.321834 1194 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0112 23:12:23.321863 1194 log.go:181] (0xc000219b80) (3) Data frame handling\nI0112 23:12:23.321874 1194 log.go:181] (0xc000219b80) (3) Data frame sent\nI0112 23:12:23.321887 1194 log.go:181] (0xc000aba1e0) (5) Data frame sent\nI0112 23:12:23.325631 1194 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0112 23:12:23.325660 1194 log.go:181] (0xc000219b80) (3) Data frame handling\nI0112 23:12:23.325690 1194 log.go:181] (0xc000219b80) (3) Data frame sent\nI0112 23:12:23.326248 1194 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0112 23:12:23.326277 1194 log.go:181] (0xc000aba1e0) (5) Data frame handling\nI0112 23:12:23.326290 1194 log.go:181] (0xc000aba1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.49.223:80/\nI0112 23:12:23.326306 1194 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0112 23:12:23.326315 1194 log.go:181] (0xc000219b80) (3) Data frame handling\nI0112 23:12:23.326324 1194 log.go:181] (0xc000219b80) (3) Data frame sent\nI0112 23:12:23.330340 1194 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0112 23:12:23.330360 1194 log.go:181] (0xc000219b80) (3) Data frame handling\nI0112 23:12:23.330400 1194 log.go:181] (0xc000219b80) (3) Data frame sent\nI0112 23:12:23.331249 1194 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0112 23:12:23.331278 1194 log.go:181] (0xc000aba1e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.49.223:80/\nI0112 23:12:23.331301 1194 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0112 23:12:23.331339 1194 log.go:181] (0xc000219b80) (3) Data frame handling\nI0112 23:12:23.331363 1194 log.go:181] (0xc000219b80) (3) Data frame sent\nI0112 23:12:23.331391 1194 log.go:181] (0xc000aba1e0) (5) Data frame sent\nI0112 23:12:23.337232 1194 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0112 23:12:23.337264 1194 log.go:181] (0xc000219b80) (3) Data frame handling\nI0112 23:12:23.337294 1194 log.go:181] (0xc000219b80) (3) Data frame sent\nI0112 23:12:23.337983 1194 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0112 23:12:23.338005 1194 log.go:181] (0xc000219b80) (3) Data frame handling\nI0112 23:12:23.338042 1194 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0112 23:12:23.338064 1194 log.go:181] (0xc000aba1e0) (5) Data frame handling\nI0112 23:12:23.340026 1194 log.go:181] (0xc00003a0b0) Data frame received for 1\nI0112 23:12:23.340050 1194 log.go:181] (0xc000219900) (1) Data frame handling\nI0112 23:12:23.340061 1194 log.go:181] (0xc000219900) (1) Data frame sent\nI0112 23:12:23.340078 1194 log.go:181] (0xc00003a0b0) (0xc000219900) Stream removed, broadcasting: 1\nI0112 23:12:23.340102 1194 log.go:181] (0xc00003a0b0) Go away received\nI0112 23:12:23.340478 1194 log.go:181] (0xc00003a0b0) (0xc000219900) Stream removed, broadcasting: 1\nI0112 23:12:23.340507 1194 log.go:181] (0xc00003a0b0) (0xc000219b80) Stream removed, 
broadcasting: 3\nI0112 23:12:23.340533 1194 log.go:181] (0xc00003a0b0) (0xc000aba1e0) Stream removed, broadcasting: 5\n" Jan 12 23:12:23.347: INFO: stdout: "\naffinity-clusterip-timeout-5hszf\naffinity-clusterip-timeout-5hszf\naffinity-clusterip-timeout-5hszf\naffinity-clusterip-timeout-5hszf\naffinity-clusterip-timeout-5hszf\naffinity-clusterip-timeout-5hszf\naffinity-clusterip-timeout-5hszf\naffinity-clusterip-timeout-5hszf\naffinity-clusterip-timeout-5hszf\naffinity-clusterip-timeout-5hszf\naffinity-clusterip-timeout-5hszf\naffinity-clusterip-timeout-5hszf\naffinity-clusterip-timeout-5hszf\naffinity-clusterip-timeout-5hszf\naffinity-clusterip-timeout-5hszf\naffinity-clusterip-timeout-5hszf" Jan 12 23:12:23.347: INFO: Received response from host: affinity-clusterip-timeout-5hszf Jan 12 23:12:23.347: INFO: Received response from host: affinity-clusterip-timeout-5hszf Jan 12 23:12:23.347: INFO: Received response from host: affinity-clusterip-timeout-5hszf Jan 12 23:12:23.347: INFO: Received response from host: affinity-clusterip-timeout-5hszf Jan 12 23:12:23.347: INFO: Received response from host: affinity-clusterip-timeout-5hszf Jan 12 23:12:23.347: INFO: Received response from host: affinity-clusterip-timeout-5hszf Jan 12 23:12:23.347: INFO: Received response from host: affinity-clusterip-timeout-5hszf Jan 12 23:12:23.347: INFO: Received response from host: affinity-clusterip-timeout-5hszf Jan 12 23:12:23.347: INFO: Received response from host: affinity-clusterip-timeout-5hszf Jan 12 23:12:23.347: INFO: Received response from host: affinity-clusterip-timeout-5hszf Jan 12 23:12:23.347: INFO: Received response from host: affinity-clusterip-timeout-5hszf Jan 12 23:12:23.347: INFO: Received response from host: affinity-clusterip-timeout-5hszf Jan 12 23:12:23.347: INFO: Received response from host: affinity-clusterip-timeout-5hszf Jan 12 23:12:23.347: INFO: Received response from host: affinity-clusterip-timeout-5hszf Jan 12 23:12:23.347: INFO: Received response from host: affinity-clusterip-timeout-5hszf Jan 12 23:12:23.347: INFO: Received response from host: affinity-clusterip-timeout-5hszf Jan 12 23:12:23.347: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-1910 exec execpod-affinity8b6xc -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.96.49.223:80/' Jan 12 23:12:23.570: INFO: stderr: "I0112 23:12:23.495097 1212 log.go:181] (0xc00003a420) (0xc0007a54a0) Create stream\nI0112 23:12:23.495162 1212 log.go:181] (0xc00003a420) (0xc0007a54a0) Stream added, broadcasting: 1\nI0112 23:12:23.501141 1212 log.go:181] (0xc00003a420) Reply frame received for 1\nI0112 23:12:23.501229 1212 log.go:181] (0xc00003a420) (0xc0007a5720) Create stream\nI0112 23:12:23.501253 1212 log.go:181] (0xc00003a420) (0xc0007a5720) Stream added, broadcasting: 3\nI0112 23:12:23.502608 1212 log.go:181] (0xc00003a420) Reply frame received for 3\nI0112 23:12:23.502647 1212 log.go:181] (0xc00003a420) (0xc0008581e0) Create stream\nI0112 23:12:23.502660 1212 log.go:181] (0xc00003a420) (0xc0008581e0) Stream added, broadcasting: 5\nI0112 23:12:23.503565 1212 log.go:181] (0xc00003a420) Reply frame received for 5\nI0112 23:12:23.556746 1212 log.go:181] (0xc00003a420) Data frame received for 5\nI0112 23:12:23.556777 1212 log.go:181] (0xc0008581e0) (5) Data frame handling\nI0112 23:12:23.556797 1212 log.go:181] (0xc0008581e0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.96.49.223:80/\nI0112 23:12:23.562352 1212 log.go:181] 
(0xc00003a420) Data frame received for 3\nI0112 23:12:23.562397 1212 log.go:181] (0xc0007a5720) (3) Data frame handling\nI0112 23:12:23.562440 1212 log.go:181] (0xc0007a5720) (3) Data frame sent\nI0112 23:12:23.562821 1212 log.go:181] (0xc00003a420) Data frame received for 3\nI0112 23:12:23.562851 1212 log.go:181] (0xc0007a5720) (3) Data frame handling\nI0112 23:12:23.562875 1212 log.go:181] (0xc00003a420) Data frame received for 5\nI0112 23:12:23.562884 1212 log.go:181] (0xc0008581e0) (5) Data frame handling\nI0112 23:12:23.564377 1212 log.go:181] (0xc00003a420) Data frame received for 1\nI0112 23:12:23.564400 1212 log.go:181] (0xc0007a54a0) (1) Data frame handling\nI0112 23:12:23.564416 1212 log.go:181] (0xc0007a54a0) (1) Data frame sent\nI0112 23:12:23.564431 1212 log.go:181] (0xc00003a420) (0xc0007a54a0) Stream removed, broadcasting: 1\nI0112 23:12:23.564451 1212 log.go:181] (0xc00003a420) Go away received\nI0112 23:12:23.564918 1212 log.go:181] (0xc00003a420) (0xc0007a54a0) Stream removed, broadcasting: 1\nI0112 23:12:23.564941 1212 log.go:181] (0xc00003a420) (0xc0007a5720) Stream removed, broadcasting: 3\nI0112 23:12:23.564950 1212 log.go:181] (0xc00003a420) (0xc0008581e0) Stream removed, broadcasting: 5\n" Jan 12 23:12:23.570: INFO: stdout: "affinity-clusterip-timeout-5hszf" Jan 12 23:12:43.570: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-1910 exec execpod-affinity8b6xc -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.96.49.223:80/' Jan 12 23:12:43.824: INFO: stderr: "I0112 23:12:43.709482 1230 log.go:181] (0xc00018c420) (0xc000a080a0) Create stream\nI0112 23:12:43.709570 1230 log.go:181] (0xc00018c420) (0xc000a080a0) Stream added, broadcasting: 1\nI0112 23:12:43.711510 1230 log.go:181] (0xc00018c420) Reply frame received for 1\nI0112 23:12:43.711550 1230 log.go:181] (0xc00018c420) (0xc0003870e0) Create stream\nI0112 23:12:43.711561 1230 log.go:181] (0xc00018c420) (0xc0003870e0) Stream added, broadcasting: 3\nI0112 23:12:43.712655 1230 log.go:181] (0xc00018c420) Reply frame received for 3\nI0112 23:12:43.712714 1230 log.go:181] (0xc00018c420) (0xc000a08dc0) Create stream\nI0112 23:12:43.712745 1230 log.go:181] (0xc00018c420) (0xc000a08dc0) Stream added, broadcasting: 5\nI0112 23:12:43.713920 1230 log.go:181] (0xc00018c420) Reply frame received for 5\nI0112 23:12:43.809988 1230 log.go:181] (0xc00018c420) Data frame received for 5\nI0112 23:12:43.810015 1230 log.go:181] (0xc000a08dc0) (5) Data frame handling\nI0112 23:12:43.810029 1230 log.go:181] (0xc000a08dc0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.96.49.223:80/\nI0112 23:12:43.815913 1230 log.go:181] (0xc00018c420) Data frame received for 3\nI0112 23:12:43.815934 1230 log.go:181] (0xc0003870e0) (3) Data frame handling\nI0112 23:12:43.815947 1230 log.go:181] (0xc0003870e0) (3) Data frame sent\nI0112 23:12:43.816931 1230 log.go:181] (0xc00018c420) Data frame received for 3\nI0112 23:12:43.816966 1230 log.go:181] (0xc0003870e0) (3) Data frame handling\nI0112 23:12:43.817012 1230 log.go:181] (0xc00018c420) Data frame received for 5\nI0112 23:12:43.817042 1230 log.go:181] (0xc000a08dc0) (5) Data frame handling\nI0112 23:12:43.819075 1230 log.go:181] (0xc00018c420) Data frame received for 1\nI0112 23:12:43.819093 1230 log.go:181] (0xc000a080a0) (1) Data frame handling\nI0112 23:12:43.819120 1230 log.go:181] (0xc000a080a0) (1) Data frame sent\nI0112 23:12:43.819144 1230 log.go:181] (0xc00018c420) (0xc000a080a0) Stream 
removed, broadcasting: 1\nI0112 23:12:43.819439 1230 log.go:181] (0xc00018c420) Go away received\nI0112 23:12:43.819671 1230 log.go:181] (0xc00018c420) (0xc000a080a0) Stream removed, broadcasting: 1\nI0112 23:12:43.819691 1230 log.go:181] (0xc00018c420) (0xc0003870e0) Stream removed, broadcasting: 3\nI0112 23:12:43.819710 1230 log.go:181] (0xc00018c420) (0xc000a08dc0) Stream removed, broadcasting: 5\n" Jan 12 23:12:43.825: INFO: stdout: "affinity-clusterip-timeout-lpq76" Jan 12 23:12:43.825: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-1910, will wait for the garbage collector to delete the pods Jan 12 23:12:43.939: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 6.51217ms Jan 12 23:12:44.639: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 700.204605ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:13:50.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1910" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:106.341 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":309,"completed":87,"skipped":1389,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:13:50.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check if Kubernetes control plane services is included in cluster-info [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: validating cluster-info Jan 12 23:13:50.292: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-3972 cluster-info' Jan 12 23:13:50.414: INFO: stderr: "" Jan 12 23:13:50.414: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:34747\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 
12 23:13:50.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3972" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":309,"completed":88,"skipped":1394,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:13:50.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap configmap-1097/configmap-test-ed7d808a-b0b3-427d-8e51-127ba0764b03 STEP: Creating a pod to test consume configMaps Jan 12 23:13:50.527: INFO: Waiting up to 5m0s for pod "pod-configmaps-907ae82f-7733-4cf2-be9b-1fef668090ca" in namespace "configmap-1097" to be "Succeeded or Failed" Jan 12 23:13:50.543: INFO: Pod "pod-configmaps-907ae82f-7733-4cf2-be9b-1fef668090ca": Phase="Pending", Reason="", readiness=false. Elapsed: 16.171532ms Jan 12 23:13:52.591: INFO: Pod "pod-configmaps-907ae82f-7733-4cf2-be9b-1fef668090ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063912275s Jan 12 23:13:54.675: INFO: Pod "pod-configmaps-907ae82f-7733-4cf2-be9b-1fef668090ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.148155146s STEP: Saw pod success Jan 12 23:13:54.675: INFO: Pod "pod-configmaps-907ae82f-7733-4cf2-be9b-1fef668090ca" satisfied condition "Succeeded or Failed" Jan 12 23:13:54.678: INFO: Trying to get logs from node leguer-worker2 pod pod-configmaps-907ae82f-7733-4cf2-be9b-1fef668090ca container env-test: STEP: delete the pod Jan 12 23:13:54.815: INFO: Waiting for pod pod-configmaps-907ae82f-7733-4cf2-be9b-1fef668090ca to disappear Jan 12 23:13:54.823: INFO: Pod pod-configmaps-907ae82f-7733-4cf2-be9b-1fef668090ca no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:13:54.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1097" for this suite. 
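The ConfigMap entries just above exercise consuming ConfigMap data through a pod's environment and reading it back from the container. A minimal stand-alone sketch of the same pattern follows; the resource names, image tag, and envFrom wiring are illustrative assumptions, not the e2e suite's own pod spec.

# Create a ConfigMap and expose its keys to a pod as environment variables.
kubectl create configmap demo-config --from-literal=DATA_1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.29            # assumed image; the suite uses its own test image
    command: ["sh", "-c", "env | grep DATA_"]
    envFrom:
    - configMapRef:
        name: demo-config          # each key becomes an environment variable
EOF

# Once the pod has run to completion, the injected variable appears in its logs.
kubectl logs configmap-env-demo
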
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":309,"completed":89,"skipped":1407,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:13:54.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod busybox-ebc15e72-2363-4eed-aeca-e5e811c922ca in namespace container-probe-3138 Jan 12 23:13:58.937: INFO: Started pod busybox-ebc15e72-2363-4eed-aeca-e5e811c922ca in namespace container-probe-3138 STEP: checking the pod's current state and verifying that restartCount is present Jan 12 23:13:58.940: INFO: Initial restart count of pod busybox-ebc15e72-2363-4eed-aeca-e5e811c922ca is 0 Jan 12 23:14:49.073: INFO: Restart count of pod container-probe-3138/busybox-ebc15e72-2363-4eed-aeca-e5e811c922ca is now 1 (50.132863739s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:14:49.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3138" for this suite. 
• [SLOW TEST:54.339 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":309,"completed":90,"skipped":1415,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:14:49.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Jan 12 23:14:49.250: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9607 044eebd3-41fe-42c7-b208-70eefe66253a 421513 0 2021-01-12 23:14:49 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-01-12 23:14:49 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jan 12 23:14:49.251: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9607 044eebd3-41fe-42c7-b208-70eefe66253a 421513 0 2021-01-12 23:14:49 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-01-12 23:14:49 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Jan 12 23:14:59.259: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9607 044eebd3-41fe-42c7-b208-70eefe66253a 421544 0 2021-01-12 23:14:49 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-01-12 23:14:59 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 12 23:14:59.259: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9607 044eebd3-41fe-42c7-b208-70eefe66253a 421544 0 2021-01-12 23:14:49 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-01-12 23:14:59 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Jan 12 23:15:09.272: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9607 044eebd3-41fe-42c7-b208-70eefe66253a 421564 0 2021-01-12 23:14:49 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-01-12 23:14:59 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 12 23:15:09.272: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9607 044eebd3-41fe-42c7-b208-70eefe66253a 421564 0 2021-01-12 23:14:49 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-01-12 23:14:59 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Jan 12 23:15:19.282: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9607 044eebd3-41fe-42c7-b208-70eefe66253a 421584 0 2021-01-12 23:14:49 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-01-12 23:14:59 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 12 23:15:19.282: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9607 044eebd3-41fe-42c7-b208-70eefe66253a 421584 0 2021-01-12 23:14:49 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-01-12 23:14:59 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Jan 12 23:15:29.292: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9607 8baacd53-755f-4092-bb17-b4f9079e6fd5 421604 0 2021-01-12 23:15:29 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-01-12 23:15:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jan 12 23:15:29.292: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9607 8baacd53-755f-4092-bb17-b4f9079e6fd5 421604 0 2021-01-12 23:15:29 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-01-12 23:15:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jan 12 23:15:39.300: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9607 8baacd53-755f-4092-bb17-b4f9079e6fd5 421624 0 2021-01-12 23:15:29 +0000 UTC 
map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-01-12 23:15:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jan 12 23:15:39.300: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9607 8baacd53-755f-4092-bb17-b4f9079e6fd5 421624 0 2021-01-12 23:15:29 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-01-12 23:15:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:15:49.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9607" for this suite. • [SLOW TEST:60.141 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":309,"completed":91,"skipped":1464,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:15:49.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap configmap-7517/configmap-test-744f49af-ca48-4a8c-8c0a-86a26cff7bb4 STEP: Creating a pod to test consume configMaps Jan 12 23:15:49.445: INFO: Waiting up to 5m0s for pod "pod-configmaps-1d6f95be-8e17-4937-8159-862322ae00dd" in namespace "configmap-7517" to be "Succeeded or Failed" Jan 12 23:15:49.449: INFO: Pod "pod-configmaps-1d6f95be-8e17-4937-8159-862322ae00dd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.718551ms Jan 12 23:15:51.453: INFO: Pod "pod-configmaps-1d6f95be-8e17-4937-8159-862322ae00dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007888128s Jan 12 23:15:53.459: INFO: Pod "pod-configmaps-1d6f95be-8e17-4937-8159-862322ae00dd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013678375s STEP: Saw pod success Jan 12 23:15:53.459: INFO: Pod "pod-configmaps-1d6f95be-8e17-4937-8159-862322ae00dd" satisfied condition "Succeeded or Failed" Jan 12 23:15:53.462: INFO: Trying to get logs from node leguer-worker2 pod pod-configmaps-1d6f95be-8e17-4937-8159-862322ae00dd container env-test: STEP: delete the pod Jan 12 23:15:53.539: INFO: Waiting for pod pod-configmaps-1d6f95be-8e17-4937-8159-862322ae00dd to disappear Jan 12 23:15:53.551: INFO: Pod pod-configmaps-1d6f95be-8e17-4937-8159-862322ae00dd no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:15:53.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7517" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":309,"completed":92,"skipped":1478,"failed":0} ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:15:53.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Performing setup for networking test in namespace pod-network-test-4312 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 12 23:15:53.703: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jan 12 23:15:53.741: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 12 23:15:55.745: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 12 23:15:57.754: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 12 23:15:59.745: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 12 23:16:01.745: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 12 23:16:03.745: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 12 23:16:05.745: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 12 23:16:07.750: INFO: The status of Pod netserver-0 is Running (Ready = true) Jan 12 23:16:07.755: INFO: The status of Pod netserver-1 is Running (Ready = false) Jan 12 23:16:09.759: INFO: The status of Pod netserver-1 is Running (Ready = false) Jan 12 23:16:11.759: INFO: The status of Pod netserver-1 is Running (Ready = false) Jan 12 23:16:13.759: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jan 12 23:16:17.791: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Jan 12 23:16:17.791: INFO: Breadth first check of 10.244.2.233 on host 172.18.0.13... 
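The status polling above simply waits for each netserver host pod to report Ready before any connectivity checks run; the ExecWithOptions entries that follow then drive the actual UDP checks. Outside the framework, roughly the same wait could be expressed with kubectl wait (pod names and namespace taken from this log; the timeout is illustrative):

# Illustrative equivalent of the readiness polling seen above.
kubectl -n pod-network-test-4312 wait pod/netserver-0 pod/netserver-1 \
  --for=condition=Ready --timeout=5m
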
Jan 12 23:16:17.794: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.26:9080/dial?request=hostname&protocol=udp&host=10.244.2.233&port=8081&tries=1'] Namespace:pod-network-test-4312 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 12 23:16:17.794: INFO: >>> kubeConfig: /root/.kube/config I0112 23:16:17.830826 7 log.go:181] (0xc0063a24d0) (0xc002a3d0e0) Create stream I0112 23:16:17.830866 7 log.go:181] (0xc0063a24d0) (0xc002a3d0e0) Stream added, broadcasting: 1 I0112 23:16:17.833410 7 log.go:181] (0xc0063a24d0) Reply frame received for 1 I0112 23:16:17.833449 7 log.go:181] (0xc0063a24d0) (0xc0037720a0) Create stream I0112 23:16:17.833464 7 log.go:181] (0xc0063a24d0) (0xc0037720a0) Stream added, broadcasting: 3 I0112 23:16:17.834444 7 log.go:181] (0xc0063a24d0) Reply frame received for 3 I0112 23:16:17.834484 7 log.go:181] (0xc0063a24d0) (0xc0022c00a0) Create stream I0112 23:16:17.834498 7 log.go:181] (0xc0063a24d0) (0xc0022c00a0) Stream added, broadcasting: 5 I0112 23:16:17.835960 7 log.go:181] (0xc0063a24d0) Reply frame received for 5 I0112 23:16:17.924897 7 log.go:181] (0xc0063a24d0) Data frame received for 3 I0112 23:16:17.924926 7 log.go:181] (0xc0037720a0) (3) Data frame handling I0112 23:16:17.924935 7 log.go:181] (0xc0037720a0) (3) Data frame sent I0112 23:16:17.925627 7 log.go:181] (0xc0063a24d0) Data frame received for 5 I0112 23:16:17.925641 7 log.go:181] (0xc0022c00a0) (5) Data frame handling I0112 23:16:17.925793 7 log.go:181] (0xc0063a24d0) Data frame received for 3 I0112 23:16:17.925837 7 log.go:181] (0xc0037720a0) (3) Data frame handling I0112 23:16:17.927590 7 log.go:181] (0xc0063a24d0) Data frame received for 1 I0112 23:16:17.927602 7 log.go:181] (0xc002a3d0e0) (1) Data frame handling I0112 23:16:17.927611 7 log.go:181] (0xc002a3d0e0) (1) Data frame sent I0112 23:16:17.927622 7 log.go:181] (0xc0063a24d0) (0xc002a3d0e0) Stream removed, broadcasting: 1 I0112 23:16:17.927711 7 log.go:181] (0xc0063a24d0) (0xc002a3d0e0) Stream removed, broadcasting: 1 I0112 23:16:17.927722 7 log.go:181] (0xc0063a24d0) (0xc0037720a0) Stream removed, broadcasting: 3 I0112 23:16:17.927842 7 log.go:181] (0xc0063a24d0) Go away received I0112 23:16:17.927950 7 log.go:181] (0xc0063a24d0) (0xc0022c00a0) Stream removed, broadcasting: 5 Jan 12 23:16:17.928: INFO: Waiting for responses: map[] Jan 12 23:16:17.928: INFO: reached 10.244.2.233 after 0/1 tries Jan 12 23:16:17.928: INFO: Breadth first check of 10.244.1.25 on host 172.18.0.12... 
Jan 12 23:16:17.934: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.26:9080/dial?request=hostname&protocol=udp&host=10.244.1.25&port=8081&tries=1'] Namespace:pod-network-test-4312 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 12 23:16:17.934: INFO: >>> kubeConfig: /root/.kube/config I0112 23:16:17.958486 7 log.go:181] (0xc00003c420) (0xc003772960) Create stream I0112 23:16:17.958508 7 log.go:181] (0xc00003c420) (0xc003772960) Stream added, broadcasting: 1 I0112 23:16:17.961924 7 log.go:181] (0xc00003c420) Reply frame received for 1 I0112 23:16:17.961980 7 log.go:181] (0xc00003c420) (0xc003772a00) Create stream I0112 23:16:17.962052 7 log.go:181] (0xc00003c420) (0xc003772a00) Stream added, broadcasting: 3 I0112 23:16:17.963584 7 log.go:181] (0xc00003c420) Reply frame received for 3 I0112 23:16:17.963649 7 log.go:181] (0xc00003c420) (0xc0011ba5a0) Create stream I0112 23:16:17.963676 7 log.go:181] (0xc00003c420) (0xc0011ba5a0) Stream added, broadcasting: 5 I0112 23:16:17.964554 7 log.go:181] (0xc00003c420) Reply frame received for 5 I0112 23:16:18.039822 7 log.go:181] (0xc00003c420) Data frame received for 3 I0112 23:16:18.039844 7 log.go:181] (0xc003772a00) (3) Data frame handling I0112 23:16:18.039938 7 log.go:181] (0xc003772a00) (3) Data frame sent I0112 23:16:18.040563 7 log.go:181] (0xc00003c420) Data frame received for 3 I0112 23:16:18.040582 7 log.go:181] (0xc003772a00) (3) Data frame handling I0112 23:16:18.040818 7 log.go:181] (0xc00003c420) Data frame received for 5 I0112 23:16:18.040879 7 log.go:181] (0xc0011ba5a0) (5) Data frame handling I0112 23:16:18.042613 7 log.go:181] (0xc00003c420) Data frame received for 1 I0112 23:16:18.042638 7 log.go:181] (0xc003772960) (1) Data frame handling I0112 23:16:18.042655 7 log.go:181] (0xc003772960) (1) Data frame sent I0112 23:16:18.042671 7 log.go:181] (0xc00003c420) (0xc003772960) Stream removed, broadcasting: 1 I0112 23:16:18.042688 7 log.go:181] (0xc00003c420) Go away received I0112 23:16:18.042809 7 log.go:181] (0xc00003c420) (0xc003772960) Stream removed, broadcasting: 1 I0112 23:16:18.042822 7 log.go:181] (0xc00003c420) (0xc003772a00) Stream removed, broadcasting: 3 I0112 23:16:18.042827 7 log.go:181] (0xc00003c420) (0xc0011ba5a0) Stream removed, broadcasting: 5 Jan 12 23:16:18.042: INFO: Waiting for responses: map[] Jan 12 23:16:18.042: INFO: reached 10.244.1.25 after 0/1 tries Jan 12 23:16:18.042: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:16:18.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4312" for this suite. 
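Both breadth-first checks above run through a /dial helper endpoint: the framework execs curl inside test-container-pod, and the endpoint probes the target pod's UDP port 8081 and reports back the hostname that answered, which is why each check ends with "Waiting for responses: map[]" and "reached ... after 0/1 tries". A hand-run repeat of the first query follows; the pod name, IPs, and ports are copied from the log entries above and change from run to run, so treat them as illustrative.

# Re-issue the same /dial query from inside the test pod. 10.244.1.26:9080 is
# the HTTP endpoint the framework curled; 10.244.2.233:8081 is the UDP listener
# on the peer pod being probed (values lifted from the log above).
kubectl -n pod-network-test-4312 exec test-container-pod -- \
  curl -g -q -s 'http://10.244.1.26:9080/dial?request=hostname&protocol=udp&host=10.244.2.233&port=8081&tries=1'

# A successful probe returns a small JSON body listing the responding hostname(s).
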
• [SLOW TEST:24.495 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:27 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:30 should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":309,"completed":93,"skipped":1478,"failed":0} S ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:16:18.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 12 23:16:18.188: INFO: Creating deployment "test-recreate-deployment" Jan 12 23:16:18.192: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jan 12 23:16:18.240: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Jan 12 23:16:20.248: INFO: Waiting deployment "test-recreate-deployment" to complete Jan 12 23:16:20.251: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746090178, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746090178, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746090178, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746090178, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-786dd7c454\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 12 23:16:22.255: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jan 12 23:16:22.264: INFO: Updating deployment test-recreate-deployment Jan 12 23:16:22.265: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 Jan 12 23:16:22.936: INFO: Deployment "test-recreate-deployment": 
&Deployment{ObjectMeta:{test-recreate-deployment deployment-9246 3d5457a0-5596-4aa2-a454-b8047dfac76a 421817 2 2021-01-12 23:16:18 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-01-12 23:16:22 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-01-12 23:16:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005e98128 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-01-12 23:16:22 +0000 UTC,LastTransitionTime:2021-01-12 23:16:22 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-f79dd4667" is progressing.,LastUpdateTime:2021-01-12 23:16:22 +0000 UTC,LastTransitionTime:2021-01-12 23:16:18 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Jan 12 23:16:22.940: INFO: New ReplicaSet "test-recreate-deployment-f79dd4667" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-f79dd4667 deployment-9246 e6260463-b267-4ead-88e2-5f5390c5808b 421814 1 2021-01-12 23:16:22 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[deployment.kubernetes.io/desired-replicas:1 
deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 3d5457a0-5596-4aa2-a454-b8047dfac76a 0xc00427add0 0xc00427add1}] [] [{kube-controller-manager Update apps/v1 2021-01-12 23:16:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3d5457a0-5596-4aa2-a454-b8047dfac76a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: f79dd4667,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00427ae48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 12 23:16:22.940: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jan 12 23:16:22.940: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-786dd7c454 deployment-9246 010eb787-6039-48e9-9b7b-fd4b927e10cd 421805 2 2021-01-12 23:16:18 +0000 UTC map[name:sample-pod-3 pod-template-hash:786dd7c454] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 3d5457a0-5596-4aa2-a454-b8047dfac76a 0xc00427acd7 0xc00427acd8}] [] [{kube-controller-manager Update apps/v1 2021-01-12 23:16:22 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3d5457a0-5596-4aa2-a454-b8047dfac76a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 786dd7c454,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:786dd7c454] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00427ad68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 12 23:16:22.943: INFO: Pod "test-recreate-deployment-f79dd4667-sr4h5" is not available: &Pod{ObjectMeta:{test-recreate-deployment-f79dd4667-sr4h5 test-recreate-deployment-f79dd4667- deployment-9246 061c8820-0b8d-4d5b-804d-ed15e915d8b1 421816 0 2021-01-12 23:16:22 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [{apps/v1 ReplicaSet test-recreate-deployment-f79dd4667 e6260463-b267-4ead-88e2-5f5390c5808b 0xc005e98460 0xc005e98461}] [] [{kube-controller-manager Update v1 2021-01-12 23:16:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e6260463-b267-4ead-88e2-5f5390c5808b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-12 23:16:22 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h6584,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h6584,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h6584,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:16:22 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:16:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:16:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:16:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2021-01-12 23:16:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:16:22.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9246" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":309,"completed":94,"skipped":1479,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:16:22.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:16:30.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2266" for this suite. • [SLOW TEST:7.175 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":309,"completed":95,"skipped":1486,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:16:30.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name projected-configmap-test-volume-f5128964-af7e-49de-ae7a-55a30b477cdb STEP: Creating a pod to test consume configMaps Jan 12 23:16:30.260: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-71eef6f0-b24b-44c4-91d3-af66c5997834" in namespace "projected-3508" to be "Succeeded or Failed" Jan 12 23:16:30.271: INFO: Pod "pod-projected-configmaps-71eef6f0-b24b-44c4-91d3-af66c5997834": Phase="Pending", Reason="", readiness=false. Elapsed: 10.744147ms Jan 12 23:16:32.276: INFO: Pod "pod-projected-configmaps-71eef6f0-b24b-44c4-91d3-af66c5997834": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016035801s Jan 12 23:16:34.281: INFO: Pod "pod-projected-configmaps-71eef6f0-b24b-44c4-91d3-af66c5997834": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021207493s STEP: Saw pod success Jan 12 23:16:34.281: INFO: Pod "pod-projected-configmaps-71eef6f0-b24b-44c4-91d3-af66c5997834" satisfied condition "Succeeded or Failed" Jan 12 23:16:34.284: INFO: Trying to get logs from node leguer-worker pod pod-projected-configmaps-71eef6f0-b24b-44c4-91d3-af66c5997834 container agnhost-container: STEP: delete the pod Jan 12 23:16:34.354: INFO: Waiting for pod pod-projected-configmaps-71eef6f0-b24b-44c4-91d3-af66c5997834 to disappear Jan 12 23:16:34.359: INFO: Pod pod-projected-configmaps-71eef6f0-b24b-44c4-91d3-af66c5997834 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:16:34.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3508" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":309,"completed":96,"skipped":1503,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:16:34.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 12 23:16:34.419: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8207 run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod' Jan 12 23:16:34.520: INFO: stderr: "" Jan 12 23:16:34.520: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run Jan 12 23:16:34.520: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8207 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "docker.io/library/busybox:1.29"}]}} --dry-run=server' Jan 12 23:16:34.863: INFO: stderr: "" Jan 12 23:16:34.863: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/httpd:2.4.38-alpine Jan 12 23:16:34.865: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8207 delete pods e2e-test-httpd-pod' Jan 12 23:16:40.099: INFO: stderr: "" Jan 12 23:16:40.099: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:16:40.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8207" for this suite. 
• [SLOW TEST:5.751 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl server-side dry-run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:909 should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":309,"completed":97,"skipped":1542,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:16:40.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename discovery STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 STEP: Setting up server cert [It] should validate PreferredVersion for each APIGroup [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 12 23:16:40.491: INFO: Checking APIGroup: apiregistration.k8s.io Jan 12 23:16:40.492: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 Jan 12 23:16:40.492: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}] Jan 12 23:16:40.492: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 Jan 12 23:16:40.492: INFO: Checking APIGroup: apps Jan 12 23:16:40.493: INFO: PreferredVersion.GroupVersion: apps/v1 Jan 12 23:16:40.493: INFO: Versions found [{apps/v1 v1}] Jan 12 23:16:40.493: INFO: apps/v1 matches apps/v1 Jan 12 23:16:40.493: INFO: Checking APIGroup: events.k8s.io Jan 12 23:16:40.495: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 Jan 12 23:16:40.495: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] Jan 12 23:16:40.495: INFO: events.k8s.io/v1 matches events.k8s.io/v1 Jan 12 23:16:40.495: INFO: Checking APIGroup: authentication.k8s.io Jan 12 23:16:40.496: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 Jan 12 23:16:40.496: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}] Jan 12 23:16:40.496: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 Jan 12 23:16:40.496: INFO: Checking APIGroup: authorization.k8s.io Jan 12 23:16:40.498: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 Jan 12 23:16:40.498: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}] Jan 12 23:16:40.498: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 Jan 12 23:16:40.498: INFO: Checking APIGroup: autoscaling Jan 12 23:16:40.500: INFO: PreferredVersion.GroupVersion: autoscaling/v1 Jan 12 23:16:40.500: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 
v2beta2}] Jan 12 23:16:40.500: INFO: autoscaling/v1 matches autoscaling/v1 Jan 12 23:16:40.500: INFO: Checking APIGroup: batch Jan 12 23:16:40.501: INFO: PreferredVersion.GroupVersion: batch/v1 Jan 12 23:16:40.501: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] Jan 12 23:16:40.501: INFO: batch/v1 matches batch/v1 Jan 12 23:16:40.501: INFO: Checking APIGroup: certificates.k8s.io Jan 12 23:16:40.503: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 Jan 12 23:16:40.503: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}] Jan 12 23:16:40.503: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 Jan 12 23:16:40.503: INFO: Checking APIGroup: networking.k8s.io Jan 12 23:16:40.504: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 Jan 12 23:16:40.504: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}] Jan 12 23:16:40.504: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 Jan 12 23:16:40.504: INFO: Checking APIGroup: extensions Jan 12 23:16:40.505: INFO: PreferredVersion.GroupVersion: extensions/v1beta1 Jan 12 23:16:40.505: INFO: Versions found [{extensions/v1beta1 v1beta1}] Jan 12 23:16:40.505: INFO: extensions/v1beta1 matches extensions/v1beta1 Jan 12 23:16:40.505: INFO: Checking APIGroup: policy Jan 12 23:16:40.506: INFO: PreferredVersion.GroupVersion: policy/v1beta1 Jan 12 23:16:40.506: INFO: Versions found [{policy/v1beta1 v1beta1}] Jan 12 23:16:40.506: INFO: policy/v1beta1 matches policy/v1beta1 Jan 12 23:16:40.506: INFO: Checking APIGroup: rbac.authorization.k8s.io Jan 12 23:16:40.507: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 Jan 12 23:16:40.507: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}] Jan 12 23:16:40.507: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 Jan 12 23:16:40.507: INFO: Checking APIGroup: storage.k8s.io Jan 12 23:16:40.508: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 Jan 12 23:16:40.508: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] Jan 12 23:16:40.508: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 Jan 12 23:16:40.508: INFO: Checking APIGroup: admissionregistration.k8s.io Jan 12 23:16:40.509: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 Jan 12 23:16:40.509: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}] Jan 12 23:16:40.509: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 Jan 12 23:16:40.509: INFO: Checking APIGroup: apiextensions.k8s.io Jan 12 23:16:40.510: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 Jan 12 23:16:40.510: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}] Jan 12 23:16:40.510: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 Jan 12 23:16:40.510: INFO: Checking APIGroup: scheduling.k8s.io Jan 12 23:16:40.511: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 Jan 12 23:16:40.511: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}] Jan 12 23:16:40.511: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 Jan 12 23:16:40.511: INFO: Checking APIGroup: coordination.k8s.io Jan 12 23:16:40.511: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 Jan 12 23:16:40.511: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}] Jan 12 23:16:40.511: 
INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 Jan 12 23:16:40.511: INFO: Checking APIGroup: node.k8s.io Jan 12 23:16:40.512: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1 Jan 12 23:16:40.512: INFO: Versions found [{node.k8s.io/v1 v1} {node.k8s.io/v1beta1 v1beta1}] Jan 12 23:16:40.512: INFO: node.k8s.io/v1 matches node.k8s.io/v1 Jan 12 23:16:40.512: INFO: Checking APIGroup: discovery.k8s.io Jan 12 23:16:40.513: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1beta1 Jan 12 23:16:40.513: INFO: Versions found [{discovery.k8s.io/v1beta1 v1beta1}] Jan 12 23:16:40.513: INFO: discovery.k8s.io/v1beta1 matches discovery.k8s.io/v1beta1 Jan 12 23:16:40.513: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io Jan 12 23:16:40.514: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta1 Jan 12 23:16:40.514: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta1 v1beta1}] Jan 12 23:16:40.514: INFO: flowcontrol.apiserver.k8s.io/v1beta1 matches flowcontrol.apiserver.k8s.io/v1beta1 Jan 12 23:16:40.514: INFO: Checking APIGroup: pingcap.com Jan 12 23:16:40.515: INFO: PreferredVersion.GroupVersion: pingcap.com/v1alpha1 Jan 12 23:16:40.515: INFO: Versions found [{pingcap.com/v1alpha1 v1alpha1}] Jan 12 23:16:40.515: INFO: pingcap.com/v1alpha1 matches pingcap.com/v1alpha1 [AfterEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:16:40.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "discovery-5620" for this suite. •{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":309,"completed":98,"skipped":1585,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:16:40.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating secret secrets-9912/secret-test-c41eb804-b642-42bd-baa8-1ae696e7c4b2 STEP: Creating a pod to test consume secrets Jan 12 23:16:40.629: INFO: Waiting up to 5m0s for pod "pod-configmaps-31b9aa2a-f08b-464a-9c35-72839fda3a0b" in namespace "secrets-9912" to be "Succeeded or Failed" Jan 12 23:16:40.655: INFO: Pod "pod-configmaps-31b9aa2a-f08b-464a-9c35-72839fda3a0b": Phase="Pending", Reason="", readiness=false. Elapsed: 26.259773ms Jan 12 23:16:42.664: INFO: Pod "pod-configmaps-31b9aa2a-f08b-464a-9c35-72839fda3a0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035093487s Jan 12 23:16:44.670: INFO: Pod "pod-configmaps-31b9aa2a-f08b-464a-9c35-72839fda3a0b": Phase="Running", Reason="", readiness=true. Elapsed: 4.040619234s Jan 12 23:16:46.674: INFO: Pod "pod-configmaps-31b9aa2a-f08b-464a-9c35-72839fda3a0b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.045116908s STEP: Saw pod success Jan 12 23:16:46.674: INFO: Pod "pod-configmaps-31b9aa2a-f08b-464a-9c35-72839fda3a0b" satisfied condition "Succeeded or Failed" Jan 12 23:16:46.677: INFO: Trying to get logs from node leguer-worker2 pod pod-configmaps-31b9aa2a-f08b-464a-9c35-72839fda3a0b container env-test: STEP: delete the pod Jan 12 23:16:46.722: INFO: Waiting for pod pod-configmaps-31b9aa2a-f08b-464a-9c35-72839fda3a0b to disappear Jan 12 23:16:46.732: INFO: Pod pod-configmaps-31b9aa2a-f08b-464a-9c35-72839fda3a0b no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:16:46.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9912" for this suite. • [SLOW TEST:6.222 seconds] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:36 should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":309,"completed":99,"skipped":1621,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:16:46.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 12 23:16:47.856: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 12 23:16:49.883: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746090207, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746090207, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746090207, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746090207, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying 
the webhook service STEP: Verifying the service has paired with the endpoint Jan 12 23:16:52.948: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:17:05.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9001" for this suite. STEP: Destroying namespace "webhook-9001-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:18.553 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":309,"completed":100,"skipped":1657,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:17:05.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Performing setup for networking test in namespace pod-network-test-4147 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 12 23:17:05.371: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jan 12 23:17:05.455: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 12 23:17:07.503: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 12 23:17:09.459: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready 
= true) Jan 12 23:17:11.463: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 12 23:17:13.461: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 12 23:17:15.461: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 12 23:17:17.466: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 12 23:17:19.460: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 12 23:17:21.460: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 12 23:17:23.461: INFO: The status of Pod netserver-0 is Running (Ready = true) Jan 12 23:17:23.468: INFO: The status of Pod netserver-1 is Running (Ready = false) Jan 12 23:17:25.473: INFO: The status of Pod netserver-1 is Running (Ready = false) Jan 12 23:17:27.472: INFO: The status of Pod netserver-1 is Running (Ready = false) Jan 12 23:17:29.471: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jan 12 23:17:33.508: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Jan 12 23:17:33.508: INFO: Breadth first check of 10.244.2.236 on host 172.18.0.13... Jan 12 23:17:33.511: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.237:9080/dial?request=hostname&protocol=http&host=10.244.2.236&port=8080&tries=1'] Namespace:pod-network-test-4147 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 12 23:17:33.511: INFO: >>> kubeConfig: /root/.kube/config I0112 23:17:33.547686 7 log.go:181] (0xc001f4cb00) (0xc0040d2fa0) Create stream I0112 23:17:33.547718 7 log.go:181] (0xc001f4cb00) (0xc0040d2fa0) Stream added, broadcasting: 1 I0112 23:17:33.550190 7 log.go:181] (0xc001f4cb00) Reply frame received for 1 I0112 23:17:33.550236 7 log.go:181] (0xc001f4cb00) (0xc003f9ad20) Create stream I0112 23:17:33.550253 7 log.go:181] (0xc001f4cb00) (0xc003f9ad20) Stream added, broadcasting: 3 I0112 23:17:33.551298 7 log.go:181] (0xc001f4cb00) Reply frame received for 3 I0112 23:17:33.551373 7 log.go:181] (0xc001f4cb00) (0xc00109d2c0) Create stream I0112 23:17:33.551401 7 log.go:181] (0xc001f4cb00) (0xc00109d2c0) Stream added, broadcasting: 5 I0112 23:17:33.552335 7 log.go:181] (0xc001f4cb00) Reply frame received for 5 I0112 23:17:33.620331 7 log.go:181] (0xc001f4cb00) Data frame received for 3 I0112 23:17:33.620372 7 log.go:181] (0xc003f9ad20) (3) Data frame handling I0112 23:17:33.620420 7 log.go:181] (0xc003f9ad20) (3) Data frame sent I0112 23:17:33.621367 7 log.go:181] (0xc001f4cb00) Data frame received for 3 I0112 23:17:33.621416 7 log.go:181] (0xc003f9ad20) (3) Data frame handling I0112 23:17:33.621451 7 log.go:181] (0xc001f4cb00) Data frame received for 5 I0112 23:17:33.621471 7 log.go:181] (0xc00109d2c0) (5) Data frame handling I0112 23:17:33.622952 7 log.go:181] (0xc001f4cb00) Data frame received for 1 I0112 23:17:33.623000 7 log.go:181] (0xc0040d2fa0) (1) Data frame handling I0112 23:17:33.623020 7 log.go:181] (0xc0040d2fa0) (1) Data frame sent I0112 23:17:33.623030 7 log.go:181] (0xc001f4cb00) (0xc0040d2fa0) Stream removed, broadcasting: 1 I0112 23:17:33.623040 7 log.go:181] (0xc001f4cb00) Go away received I0112 23:17:33.623212 7 log.go:181] (0xc001f4cb00) (0xc0040d2fa0) Stream removed, broadcasting: 1 I0112 23:17:33.623238 7 log.go:181] (0xc001f4cb00) (0xc003f9ad20) Stream removed, broadcasting: 3 I0112 23:17:33.623251 7 log.go:181] (0xc001f4cb00) (0xc00109d2c0) Stream removed, broadcasting: 5 Jan 12 23:17:33.623: 
INFO: Waiting for responses: map[] Jan 12 23:17:33.623: INFO: reached 10.244.2.236 after 0/1 tries Jan 12 23:17:33.623: INFO: Breadth first check of 10.244.1.31 on host 172.18.0.12... Jan 12 23:17:33.627: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.237:9080/dial?request=hostname&protocol=http&host=10.244.1.31&port=8080&tries=1'] Namespace:pod-network-test-4147 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 12 23:17:33.627: INFO: >>> kubeConfig: /root/.kube/config I0112 23:17:33.662398 7 log.go:181] (0xc002dfedc0) (0xc003f9b220) Create stream I0112 23:17:33.662433 7 log.go:181] (0xc002dfedc0) (0xc003f9b220) Stream added, broadcasting: 1 I0112 23:17:33.664701 7 log.go:181] (0xc002dfedc0) Reply frame received for 1 I0112 23:17:33.664740 7 log.go:181] (0xc002dfedc0) (0xc003f9b2c0) Create stream I0112 23:17:33.664757 7 log.go:181] (0xc002dfedc0) (0xc003f9b2c0) Stream added, broadcasting: 3 I0112 23:17:33.665707 7 log.go:181] (0xc002dfedc0) Reply frame received for 3 I0112 23:17:33.665750 7 log.go:181] (0xc002dfedc0) (0xc004052fa0) Create stream I0112 23:17:33.665774 7 log.go:181] (0xc002dfedc0) (0xc004052fa0) Stream added, broadcasting: 5 I0112 23:17:33.666622 7 log.go:181] (0xc002dfedc0) Reply frame received for 5 I0112 23:17:33.732591 7 log.go:181] (0xc002dfedc0) Data frame received for 3 I0112 23:17:33.732619 7 log.go:181] (0xc003f9b2c0) (3) Data frame handling I0112 23:17:33.732640 7 log.go:181] (0xc003f9b2c0) (3) Data frame sent I0112 23:17:33.733668 7 log.go:181] (0xc002dfedc0) Data frame received for 3 I0112 23:17:33.733705 7 log.go:181] (0xc003f9b2c0) (3) Data frame handling I0112 23:17:33.733732 7 log.go:181] (0xc002dfedc0) Data frame received for 5 I0112 23:17:33.733745 7 log.go:181] (0xc004052fa0) (5) Data frame handling I0112 23:17:33.735424 7 log.go:181] (0xc002dfedc0) Data frame received for 1 I0112 23:17:33.735466 7 log.go:181] (0xc003f9b220) (1) Data frame handling I0112 23:17:33.735498 7 log.go:181] (0xc003f9b220) (1) Data frame sent I0112 23:17:33.735525 7 log.go:181] (0xc002dfedc0) (0xc003f9b220) Stream removed, broadcasting: 1 I0112 23:17:33.735565 7 log.go:181] (0xc002dfedc0) Go away received I0112 23:17:33.735623 7 log.go:181] (0xc002dfedc0) (0xc003f9b220) Stream removed, broadcasting: 1 I0112 23:17:33.735642 7 log.go:181] (0xc002dfedc0) (0xc003f9b2c0) Stream removed, broadcasting: 3 I0112 23:17:33.735654 7 log.go:181] (0xc002dfedc0) (0xc004052fa0) Stream removed, broadcasting: 5 Jan 12 23:17:33.735: INFO: Waiting for responses: map[] Jan 12 23:17:33.735: INFO: reached 10.244.1.31 after 0/1 tries Jan 12 23:17:33.735: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:17:33.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4147" for this suite. 
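The intra-pod check above works by exec'ing a curl from the test-container-pod against its local "dial" endpoint on :9080, which in turn contacts each netserver pod's hostname handler on :8080 and reports which backend answered. As a rough stand-in for one leg of that check, here is a small Go probe that fetches /hostname from a peer pod IP; it is not the framework's dial helper, and the IP and port below are taken from the log purely for illustration.

package main

import (
    "fmt"
    "io"
    "net/http"
    "time"
)

// checkPeer fetches /hostname from a peer pod's IP:port and returns what
// answered, assuming the peer serves an HTTP hostname endpoint the way the
// agnhost netserver pods in this test do.
func checkPeer(podIP string, port int) (string, error) {
    client := &http.Client{Timeout: 5 * time.Second}
    resp, err := client.Get(fmt.Sprintf("http://%s:%d/hostname", podIP, port))
    if err != nil {
        return "", err
    }
    defer resp.Body.Close()
    body, err := io.ReadAll(resp.Body)
    if err != nil {
        return "", err
    }
    return string(body), nil
}

func main() {
    // Would be run from inside a pod on the cluster network.
    host, err := checkPeer("10.244.2.236", 8080)
    if err != nil {
        panic(err)
    }
    fmt.Println("reached peer pod, hostname:", host)
}
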
• [SLOW TEST:28.445 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:27 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:30 should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":309,"completed":101,"skipped":1676,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:17:33.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 12 23:17:33.863: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4272c1cb-fb06-40cc-a581-8fef8eab3c7f" in namespace "projected-4204" to be "Succeeded or Failed" Jan 12 23:17:33.885: INFO: Pod "downwardapi-volume-4272c1cb-fb06-40cc-a581-8fef8eab3c7f": Phase="Pending", Reason="", readiness=false. Elapsed: 22.097233ms Jan 12 23:17:35.890: INFO: Pod "downwardapi-volume-4272c1cb-fb06-40cc-a581-8fef8eab3c7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026949416s Jan 12 23:17:37.893: INFO: Pod "downwardapi-volume-4272c1cb-fb06-40cc-a581-8fef8eab3c7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030673294s STEP: Saw pod success Jan 12 23:17:37.893: INFO: Pod "downwardapi-volume-4272c1cb-fb06-40cc-a581-8fef8eab3c7f" satisfied condition "Succeeded or Failed" Jan 12 23:17:37.896: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-4272c1cb-fb06-40cc-a581-8fef8eab3c7f container client-container: STEP: delete the pod Jan 12 23:17:37.926: INFO: Waiting for pod downwardapi-volume-4272c1cb-fb06-40cc-a581-8fef8eab3c7f to disappear Jan 12 23:17:37.936: INFO: Pod downwardapi-volume-4272c1cb-fb06-40cc-a581-8fef8eab3c7f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:17:37.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4204" for this suite. 
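The downward-API test that just ran relies on the rule that when a container declares no CPU limit, the value projected for limits.cpu falls back to the node's allocatable CPU. A sketch of such a pod spec is below; it only constructs the object to show the relevant fields, and the names, image, and mount path are illustrative rather than the framework's fixture.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // A projected downwardAPI volume exposing the container's effective CPU
    // limit as a file. The container sets no resources.limits, so the value
    // written to the file defaults to the node's allocatable CPU, which is
    // what the test asserts.
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            DownwardAPI: &corev1.DownwardAPIProjection{
                                Items: []corev1.DownwardAPIVolumeFile{{
                                    Path: "cpu_limit",
                                    ResourceFieldRef: &corev1.ResourceFieldSelector{
                                        ContainerName: "client-container",
                                        Resource:      "limits.cpu",
                                    },
                                }},
                            },
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:    "client-container",
                Image:   "docker.io/library/busybox:1.29",
                Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "podinfo",
                    MountPath: "/etc/podinfo",
                }},
            }},
        },
    }
    fmt.Println(pod.Name)
}
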
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":309,"completed":102,"skipped":1679,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:17:37.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating projection with configMap that has name projected-configmap-test-upd-ce46f914-5c11-49c7-b2ce-4737510fd631 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-ce46f914-5c11-49c7-b2ce-4737510fd631 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:17:46.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3674" for this suite. • [SLOW TEST:8.499 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":309,"completed":103,"skipped":1685,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:17:46.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:17:46.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-8154" for this suite. 
•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":309,"completed":104,"skipped":1698,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:17:46.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod test-webserver-38befdf6-dd5a-4d38-bc79-73d124a3e5bc in namespace container-probe-4986 Jan 12 23:17:50.686: INFO: Started pod test-webserver-38befdf6-dd5a-4d38-bc79-73d124a3e5bc in namespace container-probe-4986 STEP: checking the pod's current state and verifying that restartCount is present Jan 12 23:17:50.689: INFO: Initial restart count of pod test-webserver-38befdf6-dd5a-4d38-bc79-73d124a3e5bc is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:21:51.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4986" for this suite. • [SLOW TEST:244.751 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":309,"completed":105,"skipped":1760,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:21:51.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:22:19.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8055" for this suite. • [SLOW TEST:28.544 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":309,"completed":106,"skipped":1783,"failed":0} SS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:22:19.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 12 23:22:24.098: INFO: Waiting up to 5m0s for pod "client-envvars-4dd572ec-13c1-4b9d-8736-c898b581a71e" in namespace "pods-5611" to be "Succeeded or Failed" Jan 12 23:22:24.101: INFO: Pod "client-envvars-4dd572ec-13c1-4b9d-8736-c898b581a71e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.035272ms Jan 12 23:22:26.106: INFO: Pod "client-envvars-4dd572ec-13c1-4b9d-8736-c898b581a71e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007673847s Jan 12 23:22:28.111: INFO: Pod "client-envvars-4dd572ec-13c1-4b9d-8736-c898b581a71e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.0124737s STEP: Saw pod success Jan 12 23:22:28.111: INFO: Pod "client-envvars-4dd572ec-13c1-4b9d-8736-c898b581a71e" satisfied condition "Succeeded or Failed" Jan 12 23:22:28.135: INFO: Trying to get logs from node leguer-worker pod client-envvars-4dd572ec-13c1-4b9d-8736-c898b581a71e container env3cont: STEP: delete the pod Jan 12 23:22:28.183: INFO: Waiting for pod client-envvars-4dd572ec-13c1-4b9d-8736-c898b581a71e to disappear Jan 12 23:22:28.186: INFO: Pod client-envvars-4dd572ec-13c1-4b9d-8736-c898b581a71e no longer exists [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:22:28.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5611" for this suite. • [SLOW TEST:8.286 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":309,"completed":107,"skipped":1785,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:22:28.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0112 23:22:41.024529 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Jan 12 23:23:43.051: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. 
Jan 12 23:23:43.051: INFO: Deleting pod "simpletest-rc-to-be-deleted-6dspd" in namespace "gc-6954" Jan 12 23:23:43.113: INFO: Deleting pod "simpletest-rc-to-be-deleted-6rzwd" in namespace "gc-6954" Jan 12 23:23:43.155: INFO: Deleting pod "simpletest-rc-to-be-deleted-blxkg" in namespace "gc-6954" Jan 12 23:23:43.558: INFO: Deleting pod "simpletest-rc-to-be-deleted-dzwrc" in namespace "gc-6954" Jan 12 23:23:43.730: INFO: Deleting pod "simpletest-rc-to-be-deleted-f2jz8" in namespace "gc-6954" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:23:43.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6954" for this suite. • [SLOW TEST:76.021 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":309,"completed":108,"skipped":1786,"failed":0} SSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:23:44.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name secret-test-7f8f16b5-e9c6-47b4-9933-9940fb448185 STEP: Creating a pod to test consume secrets Jan 12 23:23:45.047: INFO: Waiting up to 5m0s for pod "pod-secrets-4d10f125-d9ae-46c1-97f2-cd78962852de" in namespace "secrets-1600" to be "Succeeded or Failed" Jan 12 23:23:45.184: INFO: Pod "pod-secrets-4d10f125-d9ae-46c1-97f2-cd78962852de": Phase="Pending", Reason="", readiness=false. Elapsed: 136.493187ms Jan 12 23:23:47.187: INFO: Pod "pod-secrets-4d10f125-d9ae-46c1-97f2-cd78962852de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.140106067s Jan 12 23:23:49.191: INFO: Pod "pod-secrets-4d10f125-d9ae-46c1-97f2-cd78962852de": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.143858123s STEP: Saw pod success Jan 12 23:23:49.191: INFO: Pod "pod-secrets-4d10f125-d9ae-46c1-97f2-cd78962852de" satisfied condition "Succeeded or Failed" Jan 12 23:23:49.194: INFO: Trying to get logs from node leguer-worker pod pod-secrets-4d10f125-d9ae-46c1-97f2-cd78962852de container secret-volume-test: STEP: delete the pod Jan 12 23:23:49.278: INFO: Waiting for pod pod-secrets-4d10f125-d9ae-46c1-97f2-cd78962852de to disappear Jan 12 23:23:49.283: INFO: Pod pod-secrets-4d10f125-d9ae-46c1-97f2-cd78962852de no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:23:49.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1600" for this suite. STEP: Destroying namespace "secret-namespace-9472" for this suite. • [SLOW TEST:5.160 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":309,"completed":109,"skipped":1789,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:23:49.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:23:49.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7477" for this suite. 
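[annotation] The patch-a-secret steps above (create, patch, delete via LabelSelector) roughly correspond to the following client-go sketch; the namespace, secret name, and patch body are illustrative assumptions, not values from the log:

// A minimal sketch: patch a Secret's labels, then delete by label selector.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ns := "secrets-demo" // hypothetical namespace

	// Strategic-merge patch that adds a label and rewrites a data key.
	patch := []byte(`{"metadata":{"labels":{"testsecret":"patched"}},"data":{"key":"dmFsdWUy"}}`)
	if _, err := cs.CoreV1().Secrets(ns).Patch(
		context.TODO(), "test-secret", types.StrategicMergePatchType, patch, metav1.PatchOptions{},
	); err != nil {
		panic(err)
	}

	// Delete every secret carrying the label the patch added.
	if err := cs.CoreV1().Secrets(ns).DeleteCollection(
		context.TODO(), metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: "testsecret=patched"},
	); err != nil {
		panic(err)
	}
}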
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":309,"completed":110,"skipped":1809,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:23:49.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create deployment with httpd image Jan 12 23:23:49.635: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7009 create -f -' Jan 12 23:23:53.723: INFO: stderr: "" Jan 12 23:23:53.723: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image Jan 12 23:23:53.723: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7009 diff -f -' Jan 12 23:23:54.261: INFO: rc: 1 Jan 12 23:23:54.261: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7009 delete -f -' Jan 12 23:23:54.386: INFO: stderr: "" Jan 12 23:23:54.386: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:23:54.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7009" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":309,"completed":111,"skipped":1815,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:23:54.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
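[annotation] The lifecycle-hook test beginning above creates a pod whose postStart hook performs an HTTPGet against the handler pod just created. A rough sketch of that pod shape; the image, port, path, and target IP are assumptions:

// Sketch of a pod with a postStart HTTP hook.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func podWithPostStartHTTPHook(targetIP string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-poststart-http-hook",
				Image: "k8s.gcr.io/pause:3.2", // assumed image
				Lifecycle: &corev1.Lifecycle{
					// corev1.Handler is the v1.20-era type name; newer API
					// releases rename it corev1.LifecycleHandler.
					PostStart: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Host: targetIP, // IP of the pod handling the HTTPGet hook request
							Path: "/echo?msg=poststart",
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
}

func main() { fmt.Printf("%+v\n", podWithPostStartHTTPHook("10.244.0.1")) }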
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jan 12 23:24:04.565: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 12 23:24:04.619: INFO: Pod pod-with-poststart-http-hook still exists Jan 12 23:24:06.619: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 12 23:24:06.623: INFO: Pod pod-with-poststart-http-hook still exists Jan 12 23:24:08.619: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 12 23:24:08.680: INFO: Pod pod-with-poststart-http-hook still exists Jan 12 23:24:10.619: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 12 23:24:10.632: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:24:10.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4914" for this suite. • [SLOW TEST:16.247 seconds] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:43 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":309,"completed":112,"skipped":1869,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:24:10.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 12 23:24:10.802: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6b9c4d49-dce8-4c7d-923b-56df6d720600" in namespace "projected-6066" to be "Succeeded or Failed" Jan 12 23:24:10.854: INFO: Pod "downwardapi-volume-6b9c4d49-dce8-4c7d-923b-56df6d720600": Phase="Pending", Reason="", readiness=false. 
Elapsed: 51.516132ms Jan 12 23:24:12.859: INFO: Pod "downwardapi-volume-6b9c4d49-dce8-4c7d-923b-56df6d720600": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056943061s Jan 12 23:24:14.864: INFO: Pod "downwardapi-volume-6b9c4d49-dce8-4c7d-923b-56df6d720600": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061338745s STEP: Saw pod success Jan 12 23:24:14.864: INFO: Pod "downwardapi-volume-6b9c4d49-dce8-4c7d-923b-56df6d720600" satisfied condition "Succeeded or Failed" Jan 12 23:24:14.866: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-6b9c4d49-dce8-4c7d-923b-56df6d720600 container client-container: STEP: delete the pod Jan 12 23:24:14.963: INFO: Waiting for pod downwardapi-volume-6b9c4d49-dce8-4c7d-923b-56df6d720600 to disappear Jan 12 23:24:14.969: INFO: Pod downwardapi-volume-6b9c4d49-dce8-4c7d-923b-56df6d720600 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:24:14.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6066" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":113,"skipped":1889,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:24:14.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jan 12 23:24:19.639: INFO: Successfully updated pod "pod-update-d1362a53-3148-4d21-9c39-ed209238e82d" STEP: verifying the updated pod is in kubernetes Jan 12 23:24:19.649: INFO: Pod update OK [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:24:19.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8385" for this suite. 
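[annotation] The "updating the pod" step above is, in essence, a read-modify-write of the pod object. A minimal sketch (not the e2e helper itself) using client-go's conflict-retry helper; the namespace, pod name, and label are assumptions:

// Update a pod's labels, retrying on optimistic-concurrency conflicts.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ns, name := "pods-demo", "pod-update-demo" // hypothetical

	// Re-GET on every attempt so the Update carries a fresh resourceVersion
	// if another writer got there first.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if pod.Labels == nil {
			pod.Labels = map[string]string{}
		}
		pod.Labels["time"] = "updated"
		_, err = cs.CoreV1().Pods(ns).Update(context.TODO(), pod, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		panic(err)
	}
}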
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":309,"completed":114,"skipped":1903,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:24:19.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 12 23:24:21.824: INFO: Deleting pod "var-expansion-dc2a53d2-6ca8-4ee5-854d-fa5e12a1fbb5" in namespace "var-expansion-8547" Jan 12 23:24:21.840: INFO: Wait up to 5m0s for pod "var-expansion-dc2a53d2-6ca8-4ee5-854d-fa5e12a1fbb5" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:24:51.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8547" for this suite. • [SLOW TEST:32.206 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":309,"completed":115,"skipped":1917,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:24:51.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:24:57.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9394" for this suite. 
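[annotation] The Kubelet hostAliases test above relies on spec.hostAliases entries that the kubelet appends to the container's /etc/hosts. A sketch of that pod shape; the pod name, image tag, IPs, and hostnames are assumptions:

// Busybox pod with extra /etc/hosts entries via hostAliases.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func busyboxWithHostAliases() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-host-aliases"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			HostAliases: []corev1.HostAlias{
				{IP: "127.0.0.1", Hostnames: []string{"foo.local", "bar.local"}},
			},
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox:1.29", // assumed image tag
				Command: []string{"sh", "-c", "cat /etc/hosts"},
			}},
		},
	}
}

func main() { fmt.Printf("%+v\n", busyboxWithHostAliases()) }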
• [SLOW TEST:6.135 seconds] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 when scheduling a busybox Pod with hostAliases /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:137 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":116,"skipped":1938,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:24:57.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 12 23:24:58.069: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b5e9a5d3-6aa7-401a-b4ed-b0a16221c2c1" in namespace "projected-3425" to be "Succeeded or Failed" Jan 12 23:24:58.080: INFO: Pod "downwardapi-volume-b5e9a5d3-6aa7-401a-b4ed-b0a16221c2c1": Phase="Pending", Reason="", readiness=false. Elapsed: 11.293825ms Jan 12 23:25:00.085: INFO: Pod "downwardapi-volume-b5e9a5d3-6aa7-401a-b4ed-b0a16221c2c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01593936s Jan 12 23:25:02.088: INFO: Pod "downwardapi-volume-b5e9a5d3-6aa7-401a-b4ed-b0a16221c2c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019333122s STEP: Saw pod success Jan 12 23:25:02.088: INFO: Pod "downwardapi-volume-b5e9a5d3-6aa7-401a-b4ed-b0a16221c2c1" satisfied condition "Succeeded or Failed" Jan 12 23:25:02.090: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-b5e9a5d3-6aa7-401a-b4ed-b0a16221c2c1 container client-container: STEP: delete the pod Jan 12 23:25:02.239: INFO: Waiting for pod downwardapi-volume-b5e9a5d3-6aa7-401a-b4ed-b0a16221c2c1 to disappear Jan 12 23:25:02.267: INFO: Pod downwardapi-volume-b5e9a5d3-6aa7-401a-b4ed-b0a16221c2c1 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:25:02.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3425" for this suite. 
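[annotation] The Projected downwardAPI test above exposes the container's memory limit through a resourceFieldRef; when the container declares no limit, the value falls back to the node's allocatable memory, which is what the test asserts. A sketch of that volume definition, assuming the container name seen in the log:

// Projected downwardAPI volume exposing limits.memory as a file.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func memoryLimitProjection() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container", // container name used in the log
								Resource:      "limits.memory",
							},
						}},
					},
				}},
			},
		},
	}
}

func main() { fmt.Printf("%+v\n", memoryLimitProjection()) }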
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":309,"completed":117,"skipped":1987,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:25:02.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Performing setup for networking test in namespace pod-network-test-327 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 12 23:25:02.395: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jan 12 23:25:02.443: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 12 23:25:04.449: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 12 23:25:06.449: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 12 23:25:08.446: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 12 23:25:10.448: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 12 23:25:12.448: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 12 23:25:14.449: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 12 23:25:16.448: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 12 23:25:18.449: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 12 23:25:20.448: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 12 23:25:22.448: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 12 23:25:24.448: INFO: The status of Pod netserver-0 is Running (Ready = true) Jan 12 23:25:24.455: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jan 12 23:25:30.507: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Jan 12 23:25:30.507: INFO: Going to poll 10.244.2.249 on port 8080 at least 0 times, with a maximum of 34 tries before failing Jan 12 23:25:30.510: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.249:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-327 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 12 23:25:30.510: INFO: >>> kubeConfig: /root/.kube/config I0112 23:25:30.556040 7 log.go:181] (0xc002dfe790) (0xc0022c1860) Create stream I0112 23:25:30.556078 7 log.go:181] (0xc002dfe790) (0xc0022c1860) Stream added, broadcasting: 1 I0112 23:25:30.559087 7 log.go:181] (0xc002dfe790) Reply frame received for 1 I0112 23:25:30.559133 7 log.go:181] (0xc002dfe790) (0xc00326dd60) Create stream I0112 23:25:30.559150 7 
log.go:181] (0xc002dfe790) (0xc00326dd60) Stream added, broadcasting: 3 I0112 23:25:30.561553 7 log.go:181] (0xc002dfe790) Reply frame received for 3 I0112 23:25:30.561605 7 log.go:181] (0xc002dfe790) (0xc0040d25a0) Create stream I0112 23:25:30.561634 7 log.go:181] (0xc002dfe790) (0xc0040d25a0) Stream added, broadcasting: 5 I0112 23:25:30.562835 7 log.go:181] (0xc002dfe790) Reply frame received for 5 I0112 23:25:30.657779 7 log.go:181] (0xc002dfe790) Data frame received for 5 I0112 23:25:30.657858 7 log.go:181] (0xc0040d25a0) (5) Data frame handling I0112 23:25:30.657894 7 log.go:181] (0xc002dfe790) Data frame received for 3 I0112 23:25:30.657922 7 log.go:181] (0xc00326dd60) (3) Data frame handling I0112 23:25:30.657945 7 log.go:181] (0xc00326dd60) (3) Data frame sent I0112 23:25:30.657961 7 log.go:181] (0xc002dfe790) Data frame received for 3 I0112 23:25:30.657966 7 log.go:181] (0xc00326dd60) (3) Data frame handling I0112 23:25:30.663600 7 log.go:181] (0xc002dfe790) Data frame received for 1 I0112 23:25:30.663619 7 log.go:181] (0xc0022c1860) (1) Data frame handling I0112 23:25:30.663629 7 log.go:181] (0xc0022c1860) (1) Data frame sent I0112 23:25:30.663638 7 log.go:181] (0xc002dfe790) (0xc0022c1860) Stream removed, broadcasting: 1 I0112 23:25:30.663651 7 log.go:181] (0xc002dfe790) Go away received I0112 23:25:30.663858 7 log.go:181] (0xc002dfe790) (0xc0022c1860) Stream removed, broadcasting: 1 I0112 23:25:30.663896 7 log.go:181] (0xc002dfe790) (0xc00326dd60) Stream removed, broadcasting: 3 I0112 23:25:30.663908 7 log.go:181] (0xc002dfe790) (0xc0040d25a0) Stream removed, broadcasting: 5 Jan 12 23:25:30.663: INFO: Found all 1 expected endpoints: [netserver-0] Jan 12 23:25:30.663: INFO: Going to poll 10.244.1.45 on port 8080 at least 0 times, with a maximum of 34 tries before failing Jan 12 23:25:30.667: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.45:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-327 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 12 23:25:30.667: INFO: >>> kubeConfig: /root/.kube/config I0112 23:25:30.692659 7 log.go:181] (0xc00003c630) (0xc0040d2960) Create stream I0112 23:25:30.692687 7 log.go:181] (0xc00003c630) (0xc0040d2960) Stream added, broadcasting: 1 I0112 23:25:30.695533 7 log.go:181] (0xc00003c630) Reply frame received for 1 I0112 23:25:30.695574 7 log.go:181] (0xc00003c630) (0xc0022c1a40) Create stream I0112 23:25:30.695588 7 log.go:181] (0xc00003c630) (0xc0022c1a40) Stream added, broadcasting: 3 I0112 23:25:30.696607 7 log.go:181] (0xc00003c630) Reply frame received for 3 I0112 23:25:30.696655 7 log.go:181] (0xc00003c630) (0xc001912be0) Create stream I0112 23:25:30.696672 7 log.go:181] (0xc00003c630) (0xc001912be0) Stream added, broadcasting: 5 I0112 23:25:30.697726 7 log.go:181] (0xc00003c630) Reply frame received for 5 I0112 23:25:30.775674 7 log.go:181] (0xc00003c630) Data frame received for 5 I0112 23:25:30.775733 7 log.go:181] (0xc001912be0) (5) Data frame handling I0112 23:25:30.775767 7 log.go:181] (0xc00003c630) Data frame received for 3 I0112 23:25:30.775784 7 log.go:181] (0xc0022c1a40) (3) Data frame handling I0112 23:25:30.775817 7 log.go:181] (0xc0022c1a40) (3) Data frame sent I0112 23:25:30.775838 7 log.go:181] (0xc00003c630) Data frame received for 3 I0112 23:25:30.775855 7 log.go:181] (0xc0022c1a40) (3) Data frame handling I0112 23:25:30.778146 7 log.go:181] 
(0xc00003c630) Data frame received for 1 I0112 23:25:30.778210 7 log.go:181] (0xc0040d2960) (1) Data frame handling I0112 23:25:30.778270 7 log.go:181] (0xc0040d2960) (1) Data frame sent I0112 23:25:30.778299 7 log.go:181] (0xc00003c630) (0xc0040d2960) Stream removed, broadcasting: 1 I0112 23:25:30.778321 7 log.go:181] (0xc00003c630) Go away received I0112 23:25:30.778432 7 log.go:181] (0xc00003c630) (0xc0040d2960) Stream removed, broadcasting: 1 I0112 23:25:30.778466 7 log.go:181] (0xc00003c630) (0xc0022c1a40) Stream removed, broadcasting: 3 I0112 23:25:30.778484 7 log.go:181] (0xc00003c630) (0xc001912be0) Stream removed, broadcasting: 5 Jan 12 23:25:30.778: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:25:30.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-327" for this suite. • [SLOW TEST:28.513 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:27 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:30 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":118,"skipped":2002,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:25:30.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 12 23:25:30.900: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-7765 I0112 23:25:30.917414 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-7765, replica count: 1 I0112 23:25:31.967831 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0112 23:25:32.968096 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0112 23:25:33.968332 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0112 23:25:34.968618 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 12 
23:25:36.146: INFO: Created: latency-svc-crcv9 Jan 12 23:25:36.765: INFO: Got endpoints: latency-svc-crcv9 [1.69621523s] Jan 12 23:25:36.867: INFO: Created: latency-svc-867tp Jan 12 23:25:36.911: INFO: Got endpoints: latency-svc-867tp [146.217263ms] Jan 12 23:25:37.002: INFO: Created: latency-svc-fjp5g Jan 12 23:25:37.011: INFO: Got endpoints: latency-svc-fjp5g [245.953446ms] Jan 12 23:25:37.035: INFO: Created: latency-svc-8z4fp Jan 12 23:25:37.053: INFO: Got endpoints: latency-svc-8z4fp [287.390077ms] Jan 12 23:25:37.154: INFO: Created: latency-svc-h45zh Jan 12 23:25:37.170: INFO: Got endpoints: latency-svc-h45zh [404.846101ms] Jan 12 23:25:37.205: INFO: Created: latency-svc-4rvn5 Jan 12 23:25:37.224: INFO: Got endpoints: latency-svc-4rvn5 [459.402717ms] Jan 12 23:25:37.247: INFO: Created: latency-svc-zfp22 Jan 12 23:25:37.292: INFO: Got endpoints: latency-svc-zfp22 [526.270317ms] Jan 12 23:25:37.317: INFO: Created: latency-svc-ztcgh Jan 12 23:25:37.326: INFO: Got endpoints: latency-svc-ztcgh [560.586091ms] Jan 12 23:25:37.347: INFO: Created: latency-svc-kmp7p Jan 12 23:25:37.379: INFO: Got endpoints: latency-svc-kmp7p [614.395756ms] Jan 12 23:25:37.451: INFO: Created: latency-svc-wj9rv Jan 12 23:25:37.491: INFO: Got endpoints: latency-svc-wj9rv [725.889359ms] Jan 12 23:25:37.562: INFO: Created: latency-svc-6p5hg Jan 12 23:25:37.605: INFO: Got endpoints: latency-svc-6p5hg [839.448396ms] Jan 12 23:25:37.708: INFO: Created: latency-svc-gnnvf Jan 12 23:25:37.715: INFO: Got endpoints: latency-svc-gnnvf [950.436317ms] Jan 12 23:25:37.744: INFO: Created: latency-svc-h22mb Jan 12 23:25:37.773: INFO: Got endpoints: latency-svc-h22mb [1.007705434s] Jan 12 23:25:37.844: INFO: Created: latency-svc-sxsvt Jan 12 23:25:37.865: INFO: Got endpoints: latency-svc-sxsvt [1.100103562s] Jan 12 23:25:37.865: INFO: Created: latency-svc-t9rhl Jan 12 23:25:37.878: INFO: Got endpoints: latency-svc-t9rhl [1.112731669s] Jan 12 23:25:37.917: INFO: Created: latency-svc-b7j6f Jan 12 23:25:37.980: INFO: Got endpoints: latency-svc-b7j6f [1.214878505s] Jan 12 23:25:38.001: INFO: Created: latency-svc-gqwz7 Jan 12 23:25:38.017: INFO: Got endpoints: latency-svc-gqwz7 [1.105783647s] Jan 12 23:25:38.119: INFO: Created: latency-svc-r5s58 Jan 12 23:25:38.125: INFO: Got endpoints: latency-svc-r5s58 [1.114094192s] Jan 12 23:25:38.147: INFO: Created: latency-svc-szpl7 Jan 12 23:25:38.161: INFO: Got endpoints: latency-svc-szpl7 [1.108586294s] Jan 12 23:25:38.245: INFO: Created: latency-svc-tc7dl Jan 12 23:25:38.271: INFO: Got endpoints: latency-svc-tc7dl [1.100164455s] Jan 12 23:25:38.297: INFO: Created: latency-svc-5s497 Jan 12 23:25:38.326: INFO: Got endpoints: latency-svc-5s497 [1.102015922s] Jan 12 23:25:38.406: INFO: Created: latency-svc-5nv6s Jan 12 23:25:38.423: INFO: Got endpoints: latency-svc-5nv6s [1.130812138s] Jan 12 23:25:38.444: INFO: Created: latency-svc-v6pch Jan 12 23:25:38.458: INFO: Got endpoints: latency-svc-v6pch [1.132288818s] Jan 12 23:25:38.475: INFO: Created: latency-svc-9fzsb Jan 12 23:25:38.507: INFO: Got endpoints: latency-svc-9fzsb [1.127872759s] Jan 12 23:25:38.523: INFO: Created: latency-svc-tmh6q Jan 12 23:25:38.539: INFO: Got endpoints: latency-svc-tmh6q [1.047759666s] Jan 12 23:25:38.561: INFO: Created: latency-svc-vrqld Jan 12 23:25:38.585: INFO: Got endpoints: latency-svc-vrqld [979.519735ms] Jan 12 23:25:38.646: INFO: Created: latency-svc-m7ddv Jan 12 23:25:38.663: INFO: Got endpoints: latency-svc-m7ddv [947.249696ms] Jan 12 23:25:38.664: INFO: Created: latency-svc-b6n6l Jan 12 23:25:38.690: 
INFO: Got endpoints: latency-svc-b6n6l [917.203283ms] Jan 12 23:25:38.721: INFO: Created: latency-svc-r5kqt Jan 12 23:25:38.825: INFO: Got endpoints: latency-svc-r5kqt [959.669551ms] Jan 12 23:25:38.867: INFO: Created: latency-svc-6r7x8 Jan 12 23:25:38.880: INFO: Got endpoints: latency-svc-6r7x8 [1.00210968s] Jan 12 23:25:38.967: INFO: Created: latency-svc-l4slr Jan 12 23:25:38.979: INFO: Got endpoints: latency-svc-l4slr [998.7558ms] Jan 12 23:25:39.024: INFO: Created: latency-svc-cspsk Jan 12 23:25:39.039: INFO: Got endpoints: latency-svc-cspsk [1.021828221s] Jan 12 23:25:39.299: INFO: Created: latency-svc-r4dlx Jan 12 23:25:39.366: INFO: Got endpoints: latency-svc-r4dlx [1.240570092s] Jan 12 23:25:39.533: INFO: Created: latency-svc-7qkmb Jan 12 23:25:39.563: INFO: Got endpoints: latency-svc-7qkmb [1.402079959s] Jan 12 23:25:39.623: INFO: Created: latency-svc-9n4gr Jan 12 23:25:39.657: INFO: Got endpoints: latency-svc-9n4gr [1.38677012s] Jan 12 23:25:39.723: INFO: Created: latency-svc-mlwjj Jan 12 23:25:39.730: INFO: Got endpoints: latency-svc-mlwjj [1.403505757s] Jan 12 23:25:39.993: INFO: Created: latency-svc-7hrs4 Jan 12 23:25:40.059: INFO: Got endpoints: latency-svc-7hrs4 [1.635844601s] Jan 12 23:25:40.157: INFO: Created: latency-svc-jlvdg Jan 12 23:25:40.424: INFO: Got endpoints: latency-svc-jlvdg [1.965548882s] Jan 12 23:25:40.426: INFO: Created: latency-svc-l2gzw Jan 12 23:25:40.487: INFO: Got endpoints: latency-svc-l2gzw [1.979913997s] Jan 12 23:25:40.583: INFO: Created: latency-svc-g2fx2 Jan 12 23:25:40.593: INFO: Got endpoints: latency-svc-g2fx2 [2.05383506s] Jan 12 23:25:40.617: INFO: Created: latency-svc-wpk42 Jan 12 23:25:40.838: INFO: Got endpoints: latency-svc-wpk42 [2.253255754s] Jan 12 23:25:40.841: INFO: Created: latency-svc-bzxjz Jan 12 23:25:40.848: INFO: Got endpoints: latency-svc-bzxjz [2.184898099s] Jan 12 23:25:40.990: INFO: Created: latency-svc-l5m55 Jan 12 23:25:41.011: INFO: Got endpoints: latency-svc-l5m55 [2.320402502s] Jan 12 23:25:41.194: INFO: Created: latency-svc-vnb7w Jan 12 23:25:41.220: INFO: Got endpoints: latency-svc-vnb7w [2.394275757s] Jan 12 23:25:41.395: INFO: Created: latency-svc-hx4vc Jan 12 23:25:41.405: INFO: Got endpoints: latency-svc-hx4vc [2.525067757s] Jan 12 23:25:41.503: INFO: Created: latency-svc-t4cq5 Jan 12 23:25:41.507: INFO: Got endpoints: latency-svc-t4cq5 [2.527654049s] Jan 12 23:25:41.533: INFO: Created: latency-svc-bczv4 Jan 12 23:25:41.551: INFO: Got endpoints: latency-svc-bczv4 [2.511991353s] Jan 12 23:25:41.634: INFO: Created: latency-svc-wdnl8 Jan 12 23:25:41.688: INFO: Got endpoints: latency-svc-wdnl8 [2.322261256s] Jan 12 23:25:41.785: INFO: Created: latency-svc-vmxhr Jan 12 23:25:41.797: INFO: Got endpoints: latency-svc-vmxhr [2.233279515s] Jan 12 23:25:41.897: INFO: Created: latency-svc-qc9w2 Jan 12 23:25:41.935: INFO: Got endpoints: latency-svc-qc9w2 [2.277662381s] Jan 12 23:25:41.935: INFO: Created: latency-svc-8s2cp Jan 12 23:25:41.968: INFO: Got endpoints: latency-svc-8s2cp [2.238372613s] Jan 12 23:25:42.035: INFO: Created: latency-svc-kst55 Jan 12 23:25:42.048: INFO: Got endpoints: latency-svc-kst55 [1.989043604s] Jan 12 23:25:42.078: INFO: Created: latency-svc-mwg4f Jan 12 23:25:42.094: INFO: Got endpoints: latency-svc-mwg4f [1.669848987s] Jan 12 23:25:42.130: INFO: Created: latency-svc-76fsv Jan 12 23:25:42.160: INFO: Got endpoints: latency-svc-76fsv [1.67288896s] Jan 12 23:25:42.178: INFO: Created: latency-svc-7s4r2 Jan 12 23:25:42.191: INFO: Got endpoints: latency-svc-7s4r2 [1.597817549s] Jan 12 23:25:42.208: 
INFO: Created: latency-svc-wj77b Jan 12 23:25:42.240: INFO: Got endpoints: latency-svc-wj77b [1.40179738s] Jan 12 23:25:42.287: INFO: Created: latency-svc-rqcr7 Jan 12 23:25:42.306: INFO: Created: latency-svc-76qqm Jan 12 23:25:42.307: INFO: Got endpoints: latency-svc-rqcr7 [1.458751988s] Jan 12 23:25:42.334: INFO: Got endpoints: latency-svc-76qqm [1.32309399s] Jan 12 23:25:42.430: INFO: Created: latency-svc-lqvwx Jan 12 23:25:42.438: INFO: Got endpoints: latency-svc-lqvwx [1.217992868s] Jan 12 23:25:42.462: INFO: Created: latency-svc-bppk4 Jan 12 23:25:42.480: INFO: Got endpoints: latency-svc-bppk4 [1.074386509s] Jan 12 23:25:42.498: INFO: Created: latency-svc-hbrfp Jan 12 23:25:42.509: INFO: Got endpoints: latency-svc-hbrfp [1.002776384s] Jan 12 23:25:42.528: INFO: Created: latency-svc-qzkr2 Jan 12 23:25:42.561: INFO: Got endpoints: latency-svc-qzkr2 [1.010025391s] Jan 12 23:25:42.564: INFO: Created: latency-svc-5hl7t Jan 12 23:25:42.582: INFO: Got endpoints: latency-svc-5hl7t [893.430676ms] Jan 12 23:25:42.616: INFO: Created: latency-svc-c6qpr Jan 12 23:25:42.627: INFO: Got endpoints: latency-svc-c6qpr [830.34953ms] Jan 12 23:25:42.694: INFO: Created: latency-svc-9x4tz Jan 12 23:25:42.699: INFO: Got endpoints: latency-svc-9x4tz [763.869721ms] Jan 12 23:25:42.720: INFO: Created: latency-svc-q8dtt Jan 12 23:25:42.750: INFO: Got endpoints: latency-svc-q8dtt [781.784065ms] Jan 12 23:25:42.838: INFO: Created: latency-svc-vgxm2 Jan 12 23:25:42.855: INFO: Got endpoints: latency-svc-vgxm2 [807.574788ms] Jan 12 23:25:42.918: INFO: Created: latency-svc-ftwj5 Jan 12 23:25:42.962: INFO: Got endpoints: latency-svc-ftwj5 [868.515336ms] Jan 12 23:25:43.119: INFO: Created: latency-svc-kqs2f Jan 12 23:25:43.186: INFO: Got endpoints: latency-svc-kqs2f [1.026125997s] Jan 12 23:25:43.187: INFO: Created: latency-svc-mcs7z Jan 12 23:25:43.286: INFO: Got endpoints: latency-svc-mcs7z [1.095254667s] Jan 12 23:25:43.300: INFO: Created: latency-svc-r5kwr Jan 12 23:25:43.318: INFO: Got endpoints: latency-svc-r5kwr [1.07849179s] Jan 12 23:25:43.430: INFO: Created: latency-svc-phm77 Jan 12 23:25:43.475: INFO: Got endpoints: latency-svc-phm77 [1.168428313s] Jan 12 23:25:43.510: INFO: Created: latency-svc-nzg4g Jan 12 23:25:43.528: INFO: Got endpoints: latency-svc-nzg4g [1.193878706s] Jan 12 23:25:43.574: INFO: Created: latency-svc-kvf9c Jan 12 23:25:43.601: INFO: Got endpoints: latency-svc-kvf9c [1.162872705s] Jan 12 23:25:43.646: INFO: Created: latency-svc-2h6s7 Jan 12 23:25:43.666: INFO: Got endpoints: latency-svc-2h6s7 [1.186322037s] Jan 12 23:25:43.758: INFO: Created: latency-svc-lcwpb Jan 12 23:25:43.776: INFO: Got endpoints: latency-svc-lcwpb [1.266730194s] Jan 12 23:25:43.811: INFO: Created: latency-svc-rlsxv Jan 12 23:25:43.843: INFO: Got endpoints: latency-svc-rlsxv [1.281723605s] Jan 12 23:25:43.883: INFO: Created: latency-svc-gft52 Jan 12 23:25:43.897: INFO: Got endpoints: latency-svc-gft52 [1.314951392s] Jan 12 23:25:43.975: INFO: Created: latency-svc-fjw6g Jan 12 23:25:43.993: INFO: Got endpoints: latency-svc-fjw6g [1.365900599s] Jan 12 23:25:43.993: INFO: Created: latency-svc-7p5p7 Jan 12 23:25:44.027: INFO: Got endpoints: latency-svc-7p5p7 [1.327577572s] Jan 12 23:25:44.056: INFO: Created: latency-svc-nmv6g Jan 12 23:25:44.065: INFO: Got endpoints: latency-svc-nmv6g [1.314556197s] Jan 12 23:25:44.129: INFO: Created: latency-svc-wjcgg Jan 12 23:25:44.145: INFO: Got endpoints: latency-svc-wjcgg [1.289733085s] Jan 12 23:25:44.191: INFO: Created: latency-svc-6kddn Jan 12 23:25:44.205: INFO: Got endpoints: 
latency-svc-6kddn [1.24258399s] Jan 12 23:25:44.270: INFO: Created: latency-svc-94cdf Jan 12 23:25:44.386: INFO: Got endpoints: latency-svc-94cdf [1.199733416s] Jan 12 23:25:44.447: INFO: Created: latency-svc-mf852 Jan 12 23:25:44.532: INFO: Got endpoints: latency-svc-mf852 [1.245671873s] Jan 12 23:25:44.573: INFO: Created: latency-svc-q52b4 Jan 12 23:25:44.588: INFO: Got endpoints: latency-svc-q52b4 [1.269818582s] Jan 12 23:25:44.665: INFO: Created: latency-svc-lpgzs Jan 12 23:25:44.681: INFO: Got endpoints: latency-svc-lpgzs [1.206247787s] Jan 12 23:25:44.707: INFO: Created: latency-svc-r4hdg Jan 12 23:25:44.723: INFO: Got endpoints: latency-svc-r4hdg [1.195188435s] Jan 12 23:25:44.795: INFO: Created: latency-svc-5tq24 Jan 12 23:25:44.807: INFO: Got endpoints: latency-svc-5tq24 [1.206696807s] Jan 12 23:25:44.856: INFO: Created: latency-svc-4zf6m Jan 12 23:25:44.914: INFO: Got endpoints: latency-svc-4zf6m [1.248399656s] Jan 12 23:25:44.941: INFO: Created: latency-svc-t5r4f Jan 12 23:25:44.957: INFO: Got endpoints: latency-svc-t5r4f [1.180521322s] Jan 12 23:25:44.986: INFO: Created: latency-svc-mfkln Jan 12 23:25:45.053: INFO: Got endpoints: latency-svc-mfkln [1.209496633s] Jan 12 23:25:45.071: INFO: Created: latency-svc-z7t9m Jan 12 23:25:45.085: INFO: Got endpoints: latency-svc-z7t9m [1.188286653s] Jan 12 23:25:45.108: INFO: Created: latency-svc-x22pn Jan 12 23:25:45.128: INFO: Got endpoints: latency-svc-x22pn [1.134719637s] Jan 12 23:25:45.191: INFO: Created: latency-svc-vtdhj Jan 12 23:25:45.217: INFO: Created: latency-svc-qhtj2 Jan 12 23:25:45.217: INFO: Got endpoints: latency-svc-vtdhj [1.190510455s] Jan 12 23:25:45.244: INFO: Got endpoints: latency-svc-qhtj2 [1.179288079s] Jan 12 23:25:45.274: INFO: Created: latency-svc-zxmlx Jan 12 23:25:45.327: INFO: Got endpoints: latency-svc-zxmlx [1.182164609s] Jan 12 23:25:45.353: INFO: Created: latency-svc-7dw7w Jan 12 23:25:45.397: INFO: Got endpoints: latency-svc-7dw7w [1.192439301s] Jan 12 23:25:45.427: INFO: Created: latency-svc-bjh5w Jan 12 23:25:45.478: INFO: Got endpoints: latency-svc-bjh5w [1.091592327s] Jan 12 23:25:45.490: INFO: Created: latency-svc-l6kct Jan 12 23:25:45.509: INFO: Got endpoints: latency-svc-l6kct [976.884323ms] Jan 12 23:25:45.539: INFO: Created: latency-svc-dsqtt Jan 12 23:25:45.544: INFO: Got endpoints: latency-svc-dsqtt [955.54005ms] Jan 12 23:25:45.577: INFO: Created: latency-svc-cfxd5 Jan 12 23:25:45.627: INFO: Got endpoints: latency-svc-cfxd5 [945.52539ms] Jan 12 23:25:45.655: INFO: Created: latency-svc-gvx8z Jan 12 23:25:45.682: INFO: Got endpoints: latency-svc-gvx8z [958.905996ms] Jan 12 23:25:45.724: INFO: Created: latency-svc-rkwtm Jan 12 23:25:45.799: INFO: Got endpoints: latency-svc-rkwtm [991.713006ms] Jan 12 23:25:45.828: INFO: Created: latency-svc-bgfxv Jan 12 23:25:45.844: INFO: Got endpoints: latency-svc-bgfxv [929.639665ms] Jan 12 23:25:45.883: INFO: Created: latency-svc-mp2h4 Jan 12 23:25:45.895: INFO: Got endpoints: latency-svc-mp2h4 [937.426732ms] Jan 12 23:25:45.945: INFO: Created: latency-svc-cgc5r Jan 12 23:25:45.955: INFO: Got endpoints: latency-svc-cgc5r [902.057383ms] Jan 12 23:25:45.976: INFO: Created: latency-svc-mrx2n Jan 12 23:25:46.006: INFO: Got endpoints: latency-svc-mrx2n [920.881899ms] Jan 12 23:25:46.032: INFO: Created: latency-svc-jm7zg Jan 12 23:25:46.088: INFO: Got endpoints: latency-svc-jm7zg [960.369074ms] Jan 12 23:25:46.091: INFO: Created: latency-svc-8cdlx Jan 12 23:25:46.101: INFO: Got endpoints: latency-svc-8cdlx [883.47629ms] Jan 12 23:25:46.122: INFO: Created: 
latency-svc-fz966 Jan 12 23:25:46.137: INFO: Got endpoints: latency-svc-fz966 [892.969934ms] Jan 12 23:25:46.157: INFO: Created: latency-svc-ccnr9 Jan 12 23:25:46.181: INFO: Got endpoints: latency-svc-ccnr9 [853.178991ms] Jan 12 23:25:46.244: INFO: Created: latency-svc-h9cjx Jan 12 23:25:46.266: INFO: Created: latency-svc-j6dvb Jan 12 23:25:46.266: INFO: Got endpoints: latency-svc-h9cjx [868.971907ms] Jan 12 23:25:46.281: INFO: Got endpoints: latency-svc-j6dvb [803.0213ms] Jan 12 23:25:46.302: INFO: Created: latency-svc-68nl4 Jan 12 23:25:46.317: INFO: Got endpoints: latency-svc-68nl4 [808.653628ms] Jan 12 23:25:46.339: INFO: Created: latency-svc-h8qsf Jan 12 23:25:46.376: INFO: Got endpoints: latency-svc-h8qsf [831.871583ms] Jan 12 23:25:46.380: INFO: Created: latency-svc-9qrhk Jan 12 23:25:46.419: INFO: Got endpoints: latency-svc-9qrhk [792.379442ms] Jan 12 23:25:46.450: INFO: Created: latency-svc-qk6pk Jan 12 23:25:46.464: INFO: Got endpoints: latency-svc-qk6pk [781.719354ms] Jan 12 23:25:46.519: INFO: Created: latency-svc-lnpbr Jan 12 23:25:46.535: INFO: Got endpoints: latency-svc-lnpbr [736.18314ms] Jan 12 23:25:46.554: INFO: Created: latency-svc-bmv9c Jan 12 23:25:46.571: INFO: Got endpoints: latency-svc-bmv9c [727.126067ms] Jan 12 23:25:46.603: INFO: Created: latency-svc-nkvnm Jan 12 23:25:46.651: INFO: Got endpoints: latency-svc-nkvnm [756.665369ms] Jan 12 23:25:46.667: INFO: Created: latency-svc-7smt9 Jan 12 23:25:46.676: INFO: Got endpoints: latency-svc-7smt9 [721.081078ms] Jan 12 23:25:46.690: INFO: Created: latency-svc-l22tm Jan 12 23:25:46.701: INFO: Got endpoints: latency-svc-l22tm [694.523516ms] Jan 12 23:25:46.714: INFO: Created: latency-svc-l547s Jan 12 23:25:46.724: INFO: Got endpoints: latency-svc-l547s [636.217118ms] Jan 12 23:25:46.842: INFO: Created: latency-svc-kpkp4 Jan 12 23:25:46.880: INFO: Got endpoints: latency-svc-kpkp4 [779.567669ms] Jan 12 23:25:46.981: INFO: Created: latency-svc-wvd8q Jan 12 23:25:47.041: INFO: Created: latency-svc-qn2lr Jan 12 23:25:47.041: INFO: Got endpoints: latency-svc-wvd8q [904.035003ms] Jan 12 23:25:47.915: INFO: Got endpoints: latency-svc-qn2lr [1.733974665s] Jan 12 23:25:47.936: INFO: Created: latency-svc-zpcqz Jan 12 23:25:48.891: INFO: Got endpoints: latency-svc-zpcqz [2.624716248s] Jan 12 23:25:48.961: INFO: Created: latency-svc-mr4j2 Jan 12 23:25:49.052: INFO: Got endpoints: latency-svc-mr4j2 [2.771365491s] Jan 12 23:25:49.093: INFO: Created: latency-svc-csvgv Jan 12 23:25:49.105: INFO: Got endpoints: latency-svc-csvgv [2.787568243s] Jan 12 23:25:49.139: INFO: Created: latency-svc-nk7g5 Jan 12 23:25:49.227: INFO: Got endpoints: latency-svc-nk7g5 [2.850906225s] Jan 12 23:25:49.229: INFO: Created: latency-svc-l7jcs Jan 12 23:25:49.256: INFO: Got endpoints: latency-svc-l7jcs [2.83600018s] Jan 12 23:25:49.309: INFO: Created: latency-svc-lnk6k Jan 12 23:25:49.376: INFO: Got endpoints: latency-svc-lnk6k [2.91202184s] Jan 12 23:25:49.403: INFO: Created: latency-svc-t2sg4 Jan 12 23:25:49.413: INFO: Got endpoints: latency-svc-t2sg4 [2.877872897s] Jan 12 23:25:49.465: INFO: Created: latency-svc-n4qsm Jan 12 23:25:49.473: INFO: Got endpoints: latency-svc-n4qsm [2.901932779s] Jan 12 23:25:49.514: INFO: Created: latency-svc-5cd6s Jan 12 23:25:49.539: INFO: Created: latency-svc-v6fpb Jan 12 23:25:49.539: INFO: Got endpoints: latency-svc-5cd6s [2.888105422s] Jan 12 23:25:49.567: INFO: Got endpoints: latency-svc-v6fpb [2.891304042s] Jan 12 23:25:49.601: INFO: Created: latency-svc-mzjcz Jan 12 23:25:49.659: INFO: Got endpoints: 
latency-svc-mzjcz [2.958608982s] Jan 12 23:25:49.685: INFO: Created: latency-svc-qld57 Jan 12 23:25:49.704: INFO: Got endpoints: latency-svc-qld57 [2.979014002s] Jan 12 23:25:49.790: INFO: Created: latency-svc-5rj4c Jan 12 23:25:49.807: INFO: Got endpoints: latency-svc-5rj4c [2.926130988s] Jan 12 23:25:49.808: INFO: Created: latency-svc-ztwwk Jan 12 23:25:49.829: INFO: Got endpoints: latency-svc-ztwwk [2.787325159s] Jan 12 23:25:49.859: INFO: Created: latency-svc-rm57x Jan 12 23:25:49.878: INFO: Got endpoints: latency-svc-rm57x [1.963168116s] Jan 12 23:25:49.921: INFO: Created: latency-svc-6wskt Jan 12 23:25:49.961: INFO: Got endpoints: latency-svc-6wskt [1.069542553s] Jan 12 23:25:49.962: INFO: Created: latency-svc-9xzhx Jan 12 23:25:50.058: INFO: Got endpoints: latency-svc-9xzhx [1.005955966s] Jan 12 23:25:50.075: INFO: Created: latency-svc-z582w Jan 12 23:25:50.103: INFO: Got endpoints: latency-svc-z582w [997.921344ms] Jan 12 23:25:50.122: INFO: Created: latency-svc-khpbx Jan 12 23:25:50.141: INFO: Got endpoints: latency-svc-khpbx [914.245387ms] Jan 12 23:25:50.191: INFO: Created: latency-svc-gfh7z Jan 12 23:25:50.215: INFO: Got endpoints: latency-svc-gfh7z [958.979836ms] Jan 12 23:25:50.215: INFO: Created: latency-svc-rlw8m Jan 12 23:25:50.245: INFO: Got endpoints: latency-svc-rlw8m [869.48184ms] Jan 12 23:25:50.275: INFO: Created: latency-svc-7b2tl Jan 12 23:25:50.288: INFO: Got endpoints: latency-svc-7b2tl [874.452817ms] Jan 12 23:25:50.328: INFO: Created: latency-svc-g25h8 Jan 12 23:25:50.350: INFO: Got endpoints: latency-svc-g25h8 [877.059831ms] Jan 12 23:25:50.387: INFO: Created: latency-svc-rvdq4 Jan 12 23:25:50.404: INFO: Got endpoints: latency-svc-rvdq4 [864.947638ms] Jan 12 23:25:50.425: INFO: Created: latency-svc-pn4sd Jan 12 23:25:50.459: INFO: Got endpoints: latency-svc-pn4sd [892.048988ms] Jan 12 23:25:50.479: INFO: Created: latency-svc-7n4gs Jan 12 23:25:50.489: INFO: Got endpoints: latency-svc-7n4gs [829.537362ms] Jan 12 23:25:50.503: INFO: Created: latency-svc-8rw95 Jan 12 23:25:50.521: INFO: Got endpoints: latency-svc-8rw95 [816.985422ms] Jan 12 23:25:50.549: INFO: Created: latency-svc-gcrqc Jan 12 23:25:50.591: INFO: Got endpoints: latency-svc-gcrqc [784.420984ms] Jan 12 23:25:50.602: INFO: Created: latency-svc-ps825 Jan 12 23:25:50.611: INFO: Got endpoints: latency-svc-ps825 [782.78303ms] Jan 12 23:25:50.627: INFO: Created: latency-svc-96bmq Jan 12 23:25:50.636: INFO: Got endpoints: latency-svc-96bmq [757.755519ms] Jan 12 23:25:50.650: INFO: Created: latency-svc-dg7pq Jan 12 23:25:50.660: INFO: Got endpoints: latency-svc-dg7pq [698.691604ms] Jan 12 23:25:50.682: INFO: Created: latency-svc-dhsxb Jan 12 23:25:51.160: INFO: Got endpoints: latency-svc-dhsxb [1.10194926s] Jan 12 23:25:51.164: INFO: Created: latency-svc-wlblh Jan 12 23:25:51.828: INFO: Got endpoints: latency-svc-wlblh [1.725047394s] Jan 12 23:25:51.830: INFO: Created: latency-svc-gdcnh Jan 12 23:25:51.861: INFO: Got endpoints: latency-svc-gdcnh [1.720261892s] Jan 12 23:25:51.935: INFO: Created: latency-svc-4jn6d Jan 12 23:25:51.950: INFO: Got endpoints: latency-svc-4jn6d [1.735515014s] Jan 12 23:25:51.993: INFO: Created: latency-svc-mfc4g Jan 12 23:25:52.028: INFO: Got endpoints: latency-svc-mfc4g [1.782575807s] Jan 12 23:25:52.066: INFO: Created: latency-svc-k85hc Jan 12 23:25:52.094: INFO: Got endpoints: latency-svc-k85hc [1.806243381s] Jan 12 23:25:52.121: INFO: Created: latency-svc-ps6pw Jan 12 23:25:52.191: INFO: Got endpoints: latency-svc-ps6pw [1.840245017s] Jan 12 23:25:52.212: INFO: Created: 
latency-svc-j8cbs Jan 12 23:25:52.225: INFO: Got endpoints: latency-svc-j8cbs [1.820778695s] Jan 12 23:25:52.251: INFO: Created: latency-svc-b7cp9 Jan 12 23:25:52.265: INFO: Got endpoints: latency-svc-b7cp9 [1.805242155s] Jan 12 23:25:52.281: INFO: Created: latency-svc-897rp Jan 12 23:25:52.310: INFO: Got endpoints: latency-svc-897rp [1.820778348s] Jan 12 23:25:52.329: INFO: Created: latency-svc-sw5n7 Jan 12 23:25:52.343: INFO: Got endpoints: latency-svc-sw5n7 [1.822076505s] Jan 12 23:25:52.361: INFO: Created: latency-svc-mbrrz Jan 12 23:25:52.391: INFO: Got endpoints: latency-svc-mbrrz [1.799950692s] Jan 12 23:25:52.442: INFO: Created: latency-svc-v42tr Jan 12 23:25:52.466: INFO: Got endpoints: latency-svc-v42tr [1.854800598s] Jan 12 23:25:52.467: INFO: Created: latency-svc-trr7k Jan 12 23:25:52.490: INFO: Got endpoints: latency-svc-trr7k [1.854823016s] Jan 12 23:25:52.521: INFO: Created: latency-svc-9vfzg Jan 12 23:25:52.537: INFO: Got endpoints: latency-svc-9vfzg [1.8777755s] Jan 12 23:25:52.579: INFO: Created: latency-svc-zckjb Jan 12 23:25:52.601: INFO: Got endpoints: latency-svc-zckjb [1.440212459s] Jan 12 23:25:52.625: INFO: Created: latency-svc-nbr82 Jan 12 23:25:52.639: INFO: Got endpoints: latency-svc-nbr82 [810.564512ms] Jan 12 23:25:52.679: INFO: Created: latency-svc-whntr Jan 12 23:25:52.723: INFO: Got endpoints: latency-svc-whntr [861.72798ms] Jan 12 23:25:52.736: INFO: Created: latency-svc-tm9bp Jan 12 23:25:52.753: INFO: Got endpoints: latency-svc-tm9bp [802.538494ms] Jan 12 23:25:52.772: INFO: Created: latency-svc-nv66s Jan 12 23:25:52.802: INFO: Got endpoints: latency-svc-nv66s [774.270653ms] Jan 12 23:25:52.855: INFO: Created: latency-svc-7jc7j Jan 12 23:25:52.870: INFO: Created: latency-svc-sz6bz Jan 12 23:25:52.871: INFO: Got endpoints: latency-svc-7jc7j [777.186952ms] Jan 12 23:25:52.888: INFO: Got endpoints: latency-svc-sz6bz [696.94389ms] Jan 12 23:25:52.907: INFO: Created: latency-svc-b5hjh Jan 12 23:25:52.930: INFO: Got endpoints: latency-svc-b5hjh [704.644437ms] Jan 12 23:25:53.003: INFO: Created: latency-svc-464d8 Jan 12 23:25:53.038: INFO: Got endpoints: latency-svc-464d8 [773.483142ms] Jan 12 23:25:53.067: INFO: Created: latency-svc-q895c Jan 12 23:25:53.112: INFO: Got endpoints: latency-svc-q895c [801.99104ms] Jan 12 23:25:53.135: INFO: Created: latency-svc-5npkb Jan 12 23:25:53.148: INFO: Got endpoints: latency-svc-5npkb [805.434289ms] Jan 12 23:25:53.171: INFO: Created: latency-svc-hgwnl Jan 12 23:25:53.184: INFO: Got endpoints: latency-svc-hgwnl [792.988649ms] Jan 12 23:25:53.210: INFO: Created: latency-svc-5bpff Jan 12 23:25:53.244: INFO: Got endpoints: latency-svc-5bpff [777.492713ms] Jan 12 23:25:53.264: INFO: Created: latency-svc-xllmg Jan 12 23:25:53.294: INFO: Got endpoints: latency-svc-xllmg [803.653125ms] Jan 12 23:25:53.324: INFO: Created: latency-svc-p6ps9 Jan 12 23:25:53.340: INFO: Got endpoints: latency-svc-p6ps9 [802.339823ms] Jan 12 23:25:53.383: INFO: Created: latency-svc-p8n28 Jan 12 23:25:53.388: INFO: Got endpoints: latency-svc-p8n28 [786.982462ms] Jan 12 23:25:53.412: INFO: Created: latency-svc-ztksw Jan 12 23:25:53.427: INFO: Got endpoints: latency-svc-ztksw [788.251823ms] Jan 12 23:25:53.447: INFO: Created: latency-svc-8tqc4 Jan 12 23:25:53.463: INFO: Got endpoints: latency-svc-8tqc4 [739.885378ms] Jan 12 23:25:53.507: INFO: Created: latency-svc-xzk6v Jan 12 23:25:53.528: INFO: Got endpoints: latency-svc-xzk6v [774.704572ms] Jan 12 23:25:53.529: INFO: Created: latency-svc-6gwbt Jan 12 23:25:53.541: INFO: Got endpoints: 
latency-svc-6gwbt [738.191249ms] Jan 12 23:25:53.558: INFO: Created: latency-svc-8v6zs Jan 12 23:25:53.571: INFO: Got endpoints: latency-svc-8v6zs [699.253473ms] Jan 12 23:25:53.588: INFO: Created: latency-svc-q8c4l Jan 12 23:25:53.602: INFO: Got endpoints: latency-svc-q8c4l [713.731446ms] Jan 12 23:25:53.645: INFO: Created: latency-svc-rv5cv Jan 12 23:25:53.662: INFO: Got endpoints: latency-svc-rv5cv [731.843573ms] Jan 12 23:25:53.687: INFO: Created: latency-svc-cvh42 Jan 12 23:25:53.700: INFO: Got endpoints: latency-svc-cvh42 [661.426942ms] Jan 12 23:25:53.720: INFO: Created: latency-svc-8wvjh Jan 12 23:25:53.735: INFO: Got endpoints: latency-svc-8wvjh [623.268815ms] Jan 12 23:25:53.777: INFO: Created: latency-svc-gbsnb Jan 12 23:25:53.783: INFO: Got endpoints: latency-svc-gbsnb [635.138813ms] Jan 12 23:25:53.805: INFO: Created: latency-svc-kg546 Jan 12 23:25:53.855: INFO: Got endpoints: latency-svc-kg546 [670.374786ms] Jan 12 23:25:53.921: INFO: Created: latency-svc-sm8dn Jan 12 23:25:54.020: INFO: Got endpoints: latency-svc-sm8dn [775.917679ms] Jan 12 23:25:54.020: INFO: Latencies: [146.217263ms 245.953446ms 287.390077ms 404.846101ms 459.402717ms 526.270317ms 560.586091ms 614.395756ms 623.268815ms 635.138813ms 636.217118ms 661.426942ms 670.374786ms 694.523516ms 696.94389ms 698.691604ms 699.253473ms 704.644437ms 713.731446ms 721.081078ms 725.889359ms 727.126067ms 731.843573ms 736.18314ms 738.191249ms 739.885378ms 756.665369ms 757.755519ms 763.869721ms 773.483142ms 774.270653ms 774.704572ms 775.917679ms 777.186952ms 777.492713ms 779.567669ms 781.719354ms 781.784065ms 782.78303ms 784.420984ms 786.982462ms 788.251823ms 792.379442ms 792.988649ms 801.99104ms 802.339823ms 802.538494ms 803.0213ms 803.653125ms 805.434289ms 807.574788ms 808.653628ms 810.564512ms 816.985422ms 829.537362ms 830.34953ms 831.871583ms 839.448396ms 853.178991ms 861.72798ms 864.947638ms 868.515336ms 868.971907ms 869.48184ms 874.452817ms 877.059831ms 883.47629ms 892.048988ms 892.969934ms 893.430676ms 902.057383ms 904.035003ms 914.245387ms 917.203283ms 920.881899ms 929.639665ms 937.426732ms 945.52539ms 947.249696ms 950.436317ms 955.54005ms 958.905996ms 958.979836ms 959.669551ms 960.369074ms 976.884323ms 979.519735ms 991.713006ms 997.921344ms 998.7558ms 1.00210968s 1.002776384s 1.005955966s 1.007705434s 1.010025391s 1.021828221s 1.026125997s 1.047759666s 1.069542553s 1.074386509s 1.07849179s 1.091592327s 1.095254667s 1.100103562s 1.100164455s 1.10194926s 1.102015922s 1.105783647s 1.108586294s 1.112731669s 1.114094192s 1.127872759s 1.130812138s 1.132288818s 1.134719637s 1.162872705s 1.168428313s 1.179288079s 1.180521322s 1.182164609s 1.186322037s 1.188286653s 1.190510455s 1.192439301s 1.193878706s 1.195188435s 1.199733416s 1.206247787s 1.206696807s 1.209496633s 1.214878505s 1.217992868s 1.240570092s 1.24258399s 1.245671873s 1.248399656s 1.266730194s 1.269818582s 1.281723605s 1.289733085s 1.314556197s 1.314951392s 1.32309399s 1.327577572s 1.365900599s 1.38677012s 1.40179738s 1.402079959s 1.403505757s 1.440212459s 1.458751988s 1.597817549s 1.635844601s 1.669848987s 1.67288896s 1.720261892s 1.725047394s 1.733974665s 1.735515014s 1.782575807s 1.799950692s 1.805242155s 1.806243381s 1.820778348s 1.820778695s 1.822076505s 1.840245017s 1.854800598s 1.854823016s 1.8777755s 1.963168116s 1.965548882s 1.979913997s 1.989043604s 2.05383506s 2.184898099s 2.233279515s 2.238372613s 2.253255754s 2.277662381s 2.320402502s 2.322261256s 2.394275757s 2.511991353s 2.525067757s 2.527654049s 2.624716248s 2.771365491s 2.787325159s 2.787568243s 
2.83600018s 2.850906225s 2.877872897s 2.888105422s 2.891304042s 2.901932779s 2.91202184s 2.926130988s 2.958608982s 2.979014002s] Jan 12 23:25:54.020: INFO: 50 %ile: 1.07849179s Jan 12 23:25:54.020: INFO: 90 %ile: 2.320402502s Jan 12 23:25:54.020: INFO: 99 %ile: 2.958608982s Jan 12 23:25:54.020: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:25:54.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-7765" for this suite. • [SLOW TEST:23.290 seconds] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":309,"completed":119,"skipped":2021,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:25:54.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
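For readers unfamiliar with how the percentile lines above are produced: the 50/90/99 %ile figures are rank-based percentiles taken over the 200 sorted "Got endpoints" latencies listed before them. The following is a minimal, illustrative Go sketch of that calculation; the exact indexing used by the e2e framework may differ, and the empty sample slice is a placeholder for the durations in the log.

// percentiles.go - illustrative sketch of deriving the reported latency percentiles.
package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns a rank-based percentile from an ascending-sorted slice.
// This mirrors the style of the reported values but is an assumed formula,
// not a copy of the framework's implementation.
func percentile(sorted []time.Duration, p float64) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	idx := int(p * float64(len(sorted)))
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}

func main() {
	samples := []time.Duration{ /* the 200 per-service latencies from the log */ }
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	for _, p := range []float64{0.50, 0.90, 0.99} {
		fmt.Printf("%2.0f %%ile: %v\n", p*100, percentile(samples, p))
	}
}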
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jan 12 23:26:04.505: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 12 23:26:04.591: INFO: Pod pod-with-poststart-exec-hook still exists Jan 12 23:26:06.591: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 12 23:26:06.647: INFO: Pod pod-with-poststart-exec-hook still exists Jan 12 23:26:08.591: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 12 23:26:08.671: INFO: Pod pod-with-poststart-exec-hook still exists Jan 12 23:26:10.591: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 12 23:26:10.599: INFO: Pod pod-with-poststart-exec-hook still exists Jan 12 23:26:12.591: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 12 23:26:12.597: INFO: Pod pod-with-poststart-exec-hook still exists Jan 12 23:26:14.592: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 12 23:26:14.603: INFO: Pod pod-with-poststart-exec-hook still exists Jan 12 23:26:16.591: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 12 23:26:16.606: INFO: Pod pod-with-poststart-exec-hook still exists Jan 12 23:26:18.591: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 12 23:26:18.777: INFO: Pod pod-with-poststart-exec-hook still exists Jan 12 23:26:20.591: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 12 23:26:20.597: INFO: Pod pod-with-poststart-exec-hook still exists Jan 12 23:26:22.591: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 12 23:26:22.608: INFO: Pod pod-with-poststart-exec-hook still exists Jan 12 23:26:24.591: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 12 23:26:24.627: INFO: Pod pod-with-poststart-exec-hook still exists Jan 12 23:26:26.591: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 12 23:26:26.595: INFO: Pod pod-with-poststart-exec-hook still exists Jan 12 23:26:28.591: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 12 23:26:28.595: INFO: Pod pod-with-poststart-exec-hook still exists Jan 12 23:26:30.591: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 12 23:26:30.596: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:26:30.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2619" for this suite. 
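Before the suite summary below, a note on what "create the pod with lifecycle hook" amounts to: a pod whose container declares a postStart exec handler. A minimal sketch using release-1.20-era client-go types (where the handler type is v1.Handler; later releases renamed it LifecycleHandler); the image, commands and namespace here are illustrative choices, not the test's actual fixture.

// poststart_sketch.go - illustrative pod with a postStart exec lifecycle hook.
package main

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-exec-hook"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
				Lifecycle: &v1.Lifecycle{
					// The hook runs inside the container right after it starts;
					// the kubelet will not mark the container Running until it returns.
					PostStart: &v1.Handler{
						Exec: &v1.ExecAction{Command: []string{"sh", "-c", "echo poststart ran"}},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}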
• [SLOW TEST:36.524 seconds] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:43 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":309,"completed":120,"skipped":2059,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:26:30.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 12 23:26:31.194: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 12 23:26:33.205: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746090791, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746090791, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746090791, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746090791, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 12 23:26:35.211: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746090791, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746090791, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746090791, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746090791, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 12 23:26:38.280: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:26:38.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8305" for this suite. STEP: Destroying namespace "webhook-8305-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:8.023 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":309,"completed":121,"skipped":2086,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:26:38.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:299 [It] should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a replication controller Jan 12 23:26:38.769: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1938 create -f -' Jan 12 23:26:39.246: INFO: stderr: "" Jan 12 23:26:39.246: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
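For context on the Update Demo steps that follow: the manifest piped into 'kubectl create -f -' above evidently describes a replication controller named update-demo-nautilus with two replicas of the nautilus image, selected by the name=update-demo label. A rough, inferred reconstruction follows; field values are taken from the log, not from the test's fixture file.

// updatedemo_rc_sketch.go - inferred shape of the update-demo replication controller.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	replicas := int32(2)
	rc := &v1.ReplicationController{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "ReplicationController"},
		ObjectMeta: metav1.ObjectMeta{Name: "update-demo-nautilus"},
		Spec: v1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: map[string]string{"name": "update-demo"},
			Template: &v1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "update-demo"}},
				Spec: v1.PodSpec{
					Containers: []v1.Container{{
						Name:  "update-demo",
						Image: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0",
					}},
				},
			},
		},
	}
	// Print a YAML form of the object, suitable for piping into `kubectl create -f -`.
	out, _ := yaml.Marshal(rc)
	fmt.Println(string(out))
}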
Jan 12 23:26:39.246: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1938 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 12 23:26:39.582: INFO: stderr: "" Jan 12 23:26:39.582: INFO: stdout: "update-demo-nautilus-2j9f7 update-demo-nautilus-kq274 " Jan 12 23:26:39.582: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1938 get pods update-demo-nautilus-2j9f7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 12 23:26:39.708: INFO: stderr: "" Jan 12 23:26:39.708: INFO: stdout: "" Jan 12 23:26:39.708: INFO: update-demo-nautilus-2j9f7 is created but not running Jan 12 23:26:44.708: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1938 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 12 23:26:44.816: INFO: stderr: "" Jan 12 23:26:44.817: INFO: stdout: "update-demo-nautilus-2j9f7 update-demo-nautilus-kq274 " Jan 12 23:26:44.817: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1938 get pods update-demo-nautilus-2j9f7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 12 23:26:44.916: INFO: stderr: "" Jan 12 23:26:44.916: INFO: stdout: "true" Jan 12 23:26:44.916: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1938 get pods update-demo-nautilus-2j9f7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 12 23:26:45.020: INFO: stderr: "" Jan 12 23:26:45.020: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 12 23:26:45.020: INFO: validating pod update-demo-nautilus-2j9f7 Jan 12 23:26:45.024: INFO: got data: { "image": "nautilus.jpg" } Jan 12 23:26:45.024: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 12 23:26:45.024: INFO: update-demo-nautilus-2j9f7 is verified up and running Jan 12 23:26:45.024: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1938 get pods update-demo-nautilus-kq274 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 12 23:26:45.116: INFO: stderr: "" Jan 12 23:26:45.116: INFO: stdout: "true" Jan 12 23:26:45.116: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1938 get pods update-demo-nautilus-kq274 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 12 23:26:45.207: INFO: stderr: "" Jan 12 23:26:45.207: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 12 23:26:45.207: INFO: validating pod update-demo-nautilus-kq274 Jan 12 23:26:45.212: INFO: got data: { "image": "nautilus.jpg" } Jan 12 23:26:45.212: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 12 23:26:45.212: INFO: update-demo-nautilus-kq274 is verified up and running STEP: scaling down the replication controller Jan 12 23:26:45.214: INFO: scanned /root for discovery docs: Jan 12 23:26:45.214: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1938 scale rc update-demo-nautilus --replicas=1 --timeout=5m' Jan 12 23:26:46.343: INFO: stderr: "" Jan 12 23:26:46.343: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 12 23:26:46.343: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1938 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 12 23:26:46.518: INFO: stderr: "" Jan 12 23:26:46.518: INFO: stdout: "update-demo-nautilus-2j9f7 update-demo-nautilus-kq274 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 12 23:26:51.519: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1938 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 12 23:26:51.626: INFO: stderr: "" Jan 12 23:26:51.626: INFO: stdout: "update-demo-nautilus-2j9f7 update-demo-nautilus-kq274 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 12 23:26:56.626: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1938 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 12 23:26:56.727: INFO: stderr: "" Jan 12 23:26:56.727: INFO: stdout: "update-demo-nautilus-2j9f7 update-demo-nautilus-kq274 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 12 23:27:01.727: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1938 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 12 23:27:01.832: INFO: stderr: "" Jan 12 23:27:01.832: INFO: stdout: "update-demo-nautilus-2j9f7 update-demo-nautilus-kq274 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 12 23:27:06.832: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1938 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 12 23:27:06.945: INFO: stderr: "" Jan 12 23:27:06.946: INFO: stdout: "update-demo-nautilus-2j9f7 update-demo-nautilus-kq274 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 12 23:27:11.946: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1938 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 12 23:27:12.063: INFO: stderr: "" Jan 12 23:27:12.063: INFO: stdout: 
"update-demo-nautilus-2j9f7 update-demo-nautilus-kq274 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 12 23:27:17.064: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1938 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 12 23:27:17.173: INFO: stderr: "" Jan 12 23:27:17.173: INFO: stdout: "update-demo-nautilus-2j9f7 update-demo-nautilus-kq274 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 12 23:27:22.174: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1938 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 12 23:27:22.298: INFO: stderr: "" Jan 12 23:27:22.298: INFO: stdout: "update-demo-nautilus-2j9f7 update-demo-nautilus-kq274 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 12 23:27:27.298: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1938 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 12 23:27:27.419: INFO: stderr: "" Jan 12 23:27:27.419: INFO: stdout: "update-demo-nautilus-2j9f7 update-demo-nautilus-kq274 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 12 23:27:32.420: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1938 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 12 23:27:32.511: INFO: stderr: "" Jan 12 23:27:32.511: INFO: stdout: "update-demo-nautilus-2j9f7 update-demo-nautilus-kq274 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 12 23:27:37.511: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1938 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 12 23:27:37.614: INFO: stderr: "" Jan 12 23:27:37.614: INFO: stdout: "update-demo-nautilus-2j9f7 update-demo-nautilus-kq274 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 12 23:27:42.615: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1938 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 12 23:27:42.716: INFO: stderr: "" Jan 12 23:27:42.716: INFO: stdout: "update-demo-nautilus-kq274 " Jan 12 23:27:42.716: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1938 get pods update-demo-nautilus-kq274 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 12 23:27:42.812: INFO: stderr: "" Jan 12 23:27:42.812: INFO: stdout: "true" Jan 12 23:27:42.812: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1938 get pods update-demo-nautilus-kq274 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 12 23:27:42.902: INFO: stderr: "" Jan 12 23:27:42.902: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 12 23:27:42.902: INFO: validating pod update-demo-nautilus-kq274 Jan 12 23:27:42.905: INFO: got data: { "image": "nautilus.jpg" } Jan 12 23:27:42.905: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 12 23:27:42.905: INFO: update-demo-nautilus-kq274 is verified up and running STEP: scaling up the replication controller Jan 12 23:27:42.908: INFO: scanned /root for discovery docs: Jan 12 23:27:42.908: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1938 scale rc update-demo-nautilus --replicas=2 --timeout=5m' Jan 12 23:27:44.053: INFO: stderr: "" Jan 12 23:27:44.053: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 12 23:27:44.053: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1938 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 12 23:27:44.161: INFO: stderr: "" Jan 12 23:27:44.161: INFO: stdout: "update-demo-nautilus-5pzwj update-demo-nautilus-kq274 " Jan 12 23:27:44.161: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1938 get pods update-demo-nautilus-5pzwj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 12 23:27:44.262: INFO: stderr: "" Jan 12 23:27:44.262: INFO: stdout: "" Jan 12 23:27:44.262: INFO: update-demo-nautilus-5pzwj is created but not running Jan 12 23:27:49.262: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1938 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 12 23:27:49.360: INFO: stderr: "" Jan 12 23:27:49.360: INFO: stdout: "update-demo-nautilus-5pzwj update-demo-nautilus-kq274 " Jan 12 23:27:49.361: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1938 get pods update-demo-nautilus-5pzwj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 12 23:27:49.460: INFO: stderr: "" Jan 12 23:27:49.460: INFO: stdout: "true" Jan 12 23:27:49.460: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1938 get pods update-demo-nautilus-5pzwj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 12 23:27:49.565: INFO: stderr: "" Jan 12 23:27:49.565: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 12 23:27:49.565: INFO: validating pod update-demo-nautilus-5pzwj Jan 12 23:27:49.569: INFO: got data: { "image": "nautilus.jpg" } Jan 12 23:27:49.569: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jan 12 23:27:49.570: INFO: update-demo-nautilus-5pzwj is verified up and running Jan 12 23:27:49.570: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1938 get pods update-demo-nautilus-kq274 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 12 23:27:49.658: INFO: stderr: "" Jan 12 23:27:49.658: INFO: stdout: "true" Jan 12 23:27:49.658: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1938 get pods update-demo-nautilus-kq274 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 12 23:27:49.748: INFO: stderr: "" Jan 12 23:27:49.748: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 12 23:27:49.748: INFO: validating pod update-demo-nautilus-kq274 Jan 12 23:27:49.751: INFO: got data: { "image": "nautilus.jpg" } Jan 12 23:27:49.751: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 12 23:27:49.751: INFO: update-demo-nautilus-kq274 is verified up and running STEP: using delete to clean up resources Jan 12 23:27:49.751: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1938 delete --grace-period=0 --force -f -' Jan 12 23:27:49.868: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 12 23:27:49.868: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 12 23:27:49.868: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1938 get rc,svc -l name=update-demo --no-headers' Jan 12 23:27:49.967: INFO: stderr: "No resources found in kubectl-1938 namespace.\n" Jan 12 23:27:49.967: INFO: stdout: "" Jan 12 23:27:49.967: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1938 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 12 23:27:50.067: INFO: stderr: "" Jan 12 23:27:50.067: INFO: stdout: "update-demo-nautilus-5pzwj\nupdate-demo-nautilus-kq274\n" Jan 12 23:27:50.567: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1938 get rc,svc -l name=update-demo --no-headers' Jan 12 23:27:50.782: INFO: stderr: "No resources found in kubectl-1938 namespace.\n" Jan 12 23:27:50.782: INFO: stdout: "" Jan 12 23:27:50.782: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1938 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 12 23:27:50.878: INFO: stderr: "" Jan 12 23:27:50.878: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:27:50.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "kubectl-1938" for this suite. • [SLOW TEST:72.324 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:297 should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":309,"completed":122,"skipped":2092,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:27:50.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:28:02.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3690" for this suite. • [SLOW TEST:11.242 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":309,"completed":123,"skipped":2097,"failed":0} [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:28:02.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace [It] should support CSR API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: getting /apis STEP: getting /apis/certificates.k8s.io STEP: getting /apis/certificates.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Jan 12 23:28:02.703: INFO: starting watch STEP: patching STEP: updating Jan 12 23:28:02.717: INFO: waiting for watch events with expected annotations Jan 12 23:28:02.717: INFO: saw patched and updated annotations STEP: getting /approval STEP: patching /approval STEP: updating /approval STEP: getting /status STEP: patching /status STEP: updating /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:28:03.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-724" for this suite. •{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":309,"completed":124,"skipped":2097,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:28:03.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 12 23:28:03.130: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:28:09.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1049" for this suite. 
• [SLOW TEST:6.563 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":309,"completed":125,"skipped":2130,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:28:09.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1554 [It] should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 12 23:28:09.687: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9495 run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod' Jan 12 23:28:09.798: INFO: stderr: "" Jan 12 23:28:09.798: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Jan 12 23:28:14.849: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9495 get pod e2e-test-httpd-pod -o json' Jan 12 23:28:14.951: INFO: stderr: "" Jan 12 23:28:14.952: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2021-01-12T23:28:09Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n 
\"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2021-01-12T23:28:09Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.1.52\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2021-01-12T23:28:13Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-9495\",\n \"resourceVersion\": \"426646\",\n \"uid\": \"312542e4-452d-4fa8-bcfe-6f99425d3df2\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-8vj8q\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"leguer-worker2\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-8vj8q\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-8vj8q\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-01-12T23:28:09Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-01-12T23:28:13Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-01-12T23:28:13Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-01-12T23:28:09Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://b5d29557a2e8ed1b84341af6a173bd8a6892d08027266ff9e37593cc09c6ab9b\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": 
{\n \"running\": {\n \"startedAt\": \"2021-01-12T23:28:12Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.12\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.52\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.52\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2021-01-12T23:28:09Z\"\n }\n}\n" STEP: replace the image in the pod Jan 12 23:28:14.952: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9495 replace -f -' Jan 12 23:28:15.351: INFO: stderr: "" Jan 12 23:28:15.351: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1558 Jan 12 23:28:15.393: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9495 delete pods e2e-test-httpd-pod' Jan 12 23:28:39.823: INFO: stderr: "" Jan 12 23:28:39.823: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:28:39.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9495" for this suite. • [SLOW TEST:30.251 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1551 should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":309,"completed":126,"skipped":2142,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:28:39.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0777 on tmpfs Jan 12 23:28:39.987: INFO: Waiting up to 5m0s for pod "pod-38749ede-78e1-49d3-a272-83cb89492576" in namespace "emptydir-1719" to be "Succeeded or Failed" Jan 12 23:28:40.009: INFO: Pod "pod-38749ede-78e1-49d3-a272-83cb89492576": Phase="Pending", Reason="", readiness=false. Elapsed: 21.465829ms Jan 12 23:28:42.014: INFO: Pod "pod-38749ede-78e1-49d3-a272-83cb89492576": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026400833s Jan 12 23:28:44.018: INFO: Pod "pod-38749ede-78e1-49d3-a272-83cb89492576": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.031206146s STEP: Saw pod success Jan 12 23:28:44.018: INFO: Pod "pod-38749ede-78e1-49d3-a272-83cb89492576" satisfied condition "Succeeded or Failed" Jan 12 23:28:44.021: INFO: Trying to get logs from node leguer-worker2 pod pod-38749ede-78e1-49d3-a272-83cb89492576 container test-container: STEP: delete the pod Jan 12 23:28:44.064: INFO: Waiting for pod pod-38749ede-78e1-49d3-a272-83cb89492576 to disappear Jan 12 23:28:44.078: INFO: Pod pod-38749ede-78e1-49d3-a272-83cb89492576 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:28:44.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1719" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":127,"skipped":2148,"failed":0} SSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:28:44.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod liveness-bb88ef20-10da-442c-ace2-c1c172853dbe in namespace container-probe-7618 Jan 12 23:28:48.197: INFO: Started pod liveness-bb88ef20-10da-442c-ace2-c1c172853dbe in namespace container-probe-7618 STEP: checking the pod's current state and verifying that restartCount is present Jan 12 23:28:48.200: INFO: Initial restart count of pod liveness-bb88ef20-10da-442c-ace2-c1c172853dbe is 0 Jan 12 23:29:08.259: INFO: Restart count of pod container-probe-7618/liveness-bb88ef20-10da-442c-ace2-c1c172853dbe is now 1 (20.058897114s elapsed) Jan 12 23:29:28.310: INFO: Restart count of pod container-probe-7618/liveness-bb88ef20-10da-442c-ace2-c1c172853dbe is now 2 (40.109754587s elapsed) Jan 12 23:29:48.360: INFO: Restart count of pod container-probe-7618/liveness-bb88ef20-10da-442c-ace2-c1c172853dbe is now 3 (1m0.160004928s elapsed) Jan 12 23:30:08.413: INFO: Restart count of pod container-probe-7618/liveness-bb88ef20-10da-442c-ace2-c1c172853dbe is now 4 (1m20.212810596s elapsed) Jan 12 23:31:22.611: INFO: Restart count of pod container-probe-7618/liveness-bb88ef20-10da-442c-ace2-c1c172853dbe is now 5 (2m34.41078096s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:31:22.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7618" for this suite. 
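The monotonically increasing restart count above is what you get from a liveness probe that starts failing partway through a container's life. A minimal illustrative pod of that shape follows; the image, file path and timings are example values, and the embedded v1.Handler field reflects the release-1.20 API (newer versions call it ProbeHandler).

// liveness_sketch.go - illustrative pod whose liveness probe eventually fails.
package main

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-example"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyAlways,
			Containers: []v1.Container{{
				Name:  "liveness",
				Image: "busybox",
				// Create a marker file, then remove it after 20s so the probe
				// starts failing and the kubelet restarts the container, bumping
				// restartCount each time.
				Command: []string{"sh", "-c", "touch /tmp/healthy; sleep 20; rm -f /tmp/healthy; sleep 600"},
				LivenessProbe: &v1.Probe{
					Handler: v1.Handler{
						Exec: &v1.ExecAction{Command: []string{"cat", "/tmp/healthy"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}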
• [SLOW TEST:158.580 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":309,"completed":128,"skipped":2151,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:31:22.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jan 12 23:31:31.476: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 12 23:31:31.482: INFO: Pod pod-with-prestop-http-hook still exists Jan 12 23:31:33.483: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 12 23:31:33.489: INFO: Pod pod-with-prestop-http-hook still exists Jan 12 23:31:35.483: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 12 23:31:35.489: INFO: Pod pod-with-prestop-http-hook still exists Jan 12 23:31:37.483: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 12 23:31:37.487: INFO: Pod pod-with-prestop-http-hook still exists Jan 12 23:31:39.483: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 12 23:31:39.489: INFO: Pod pod-with-prestop-http-hook still exists Jan 12 23:31:41.483: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 12 23:31:41.487: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:31:41.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1307" for this suite. 
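The prestop hook test above is the mirror image of the postStart example earlier: an HTTP GET hook fired against the handler pod just before the container is stopped. A small sketch of just the lifecycle stanza; the host, port and path are illustrative, not the test's handler-pod details.

// prestop_sketch.go - illustrative preStop HTTP GET lifecycle hook.
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	lc := &v1.Lifecycle{
		PreStop: &v1.Handler{
			HTTPGet: &v1.HTTPGetAction{
				Host: "10.244.1.1",         // illustrative handler-pod IP
				Port: intstr.FromInt(8080), // illustrative handler port
				Path: "/echo?msg=prestop",
			},
		},
	}
	// Print the stanza as JSON so the shape is easy to compare with a pod manifest.
	out, _ := json.MarshalIndent(lc, "", "  ")
	fmt.Println(string(out))
}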
• [SLOW TEST:18.861 seconds] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:43 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":309,"completed":129,"skipped":2167,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:31:41.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 12 23:31:41.587: INFO: Waiting up to 5m0s for pod "downwardapi-volume-45015067-693a-4d39-ad37-a30e2d4e6ecd" in namespace "projected-2599" to be "Succeeded or Failed" Jan 12 23:31:41.647: INFO: Pod "downwardapi-volume-45015067-693a-4d39-ad37-a30e2d4e6ecd": Phase="Pending", Reason="", readiness=false. Elapsed: 60.483321ms Jan 12 23:31:43.651: INFO: Pod "downwardapi-volume-45015067-693a-4d39-ad37-a30e2d4e6ecd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06441877s Jan 12 23:31:45.677: INFO: Pod "downwardapi-volume-45015067-693a-4d39-ad37-a30e2d4e6ecd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.090262462s STEP: Saw pod success Jan 12 23:31:45.677: INFO: Pod "downwardapi-volume-45015067-693a-4d39-ad37-a30e2d4e6ecd" satisfied condition "Succeeded or Failed" Jan 12 23:31:45.680: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-45015067-693a-4d39-ad37-a30e2d4e6ecd container client-container: STEP: delete the pod Jan 12 23:31:45.730: INFO: Waiting for pod downwardapi-volume-45015067-693a-4d39-ad37-a30e2d4e6ecd to disappear Jan 12 23:31:45.772: INFO: Pod downwardapi-volume-45015067-693a-4d39-ad37-a30e2d4e6ecd no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:31:45.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2599" for this suite. 
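The projected downwardAPI test above reads the container's memory limit from a file. The wiring behind that is a projected downwardAPI volume item whose resourceFieldRef points at limits.memory; a small illustrative sketch follows, with example names and an example 64Mi limit.

// downwardapi_sketch.go - illustrative projected downwardAPI volume exposing limits.memory.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	"sigs.k8s.io/yaml"
)

func main() {
	spec := v1.PodSpec{
		Containers: []v1.Container{{
			Name:    "client-container",
			Image:   "busybox",
			Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
			Resources: v1.ResourceRequirements{
				Limits: v1.ResourceList{v1.ResourceMemory: resource.MustParse("64Mi")},
			},
			VolumeMounts: []v1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
		}},
		Volumes: []v1.Volume{{
			Name: "podinfo",
			VolumeSource: v1.VolumeSource{
				Projected: &v1.ProjectedVolumeSource{
					Sources: []v1.VolumeProjection{{
						DownwardAPI: &v1.DownwardAPIProjection{
							Items: []v1.DownwardAPIVolumeFile{{
								Path: "memory_limit",
								// The kubelet writes the container's memory limit into this file.
								ResourceFieldRef: &v1.ResourceFieldSelector{
									ContainerName: "client-container",
									Resource:      "limits.memory",
								},
							}},
						},
					}},
				},
			},
		}},
	}
	out, _ := yaml.Marshal(spec)
	fmt.Println(string(out))
}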
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":309,"completed":130,"skipped":2200,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:31:45.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test substitution in container's args Jan 12 23:31:46.267: INFO: Waiting up to 5m0s for pod "var-expansion-9ecfc9cf-32d2-4477-9f38-15dc1661be0e" in namespace "var-expansion-1614" to be "Succeeded or Failed" Jan 12 23:31:46.280: INFO: Pod "var-expansion-9ecfc9cf-32d2-4477-9f38-15dc1661be0e": Phase="Pending", Reason="", readiness=false. Elapsed: 13.625139ms Jan 12 23:31:48.286: INFO: Pod "var-expansion-9ecfc9cf-32d2-4477-9f38-15dc1661be0e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019020221s Jan 12 23:31:50.290: INFO: Pod "var-expansion-9ecfc9cf-32d2-4477-9f38-15dc1661be0e": Phase="Running", Reason="", readiness=true. Elapsed: 4.023567584s Jan 12 23:31:52.296: INFO: Pod "var-expansion-9ecfc9cf-32d2-4477-9f38-15dc1661be0e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.029073794s STEP: Saw pod success Jan 12 23:31:52.296: INFO: Pod "var-expansion-9ecfc9cf-32d2-4477-9f38-15dc1661be0e" satisfied condition "Succeeded or Failed" Jan 12 23:31:52.299: INFO: Trying to get logs from node leguer-worker2 pod var-expansion-9ecfc9cf-32d2-4477-9f38-15dc1661be0e container dapi-container: STEP: delete the pod Jan 12 23:31:52.318: INFO: Waiting for pod var-expansion-9ecfc9cf-32d2-4477-9f38-15dc1661be0e to disappear Jan 12 23:31:52.372: INFO: Pod var-expansion-9ecfc9cf-32d2-4477-9f38-15dc1661be0e no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:31:52.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1614" for this suite. 
• [SLOW TEST:6.504 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":309,"completed":131,"skipped":2227,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:31:52.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should support --unix-socket=/path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Starting the proxy Jan 12 23:31:52.440: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9723 proxy --unix-socket=/tmp/kubectl-proxy-unix669371163/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:31:52.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9723" for this suite. 
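------------------------------
The proxy case above starts kubectl proxy --unix-socket=<tmp path> and then retrieves /api/ through that socket. A small Go sketch of the client side (illustrative only; the socket path is hypothetical, the real test uses a temporary directory):

package main

import (
    "context"
    "fmt"
    "io"
    "net"
    "net/http"
)

func main() {
    socket := "/tmp/kubectl-proxy.sock" // hypothetical path
    client := &http.Client{
        Transport: &http.Transport{
            // Pin every connection to the proxy's unix socket.
            DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
                var d net.Dialer
                return d.DialContext(ctx, "unix", socket)
            },
        },
    }
    // The host part of the URL is ignored once the dialer is pinned to the socket.
    resp, err := client.Get("http://localhost/api/")
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    body, _ := io.ReadAll(resp.Body)
    fmt.Println(string(body))
}

The proxy forwards the request to the API server using the kubeconfig credentials, so the client itself needs no auth; retrieving /api/ successfully is the whole assertion.
------------------------------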
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":309,"completed":132,"skipped":2260,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:31:52.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 12 23:31:52.586: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cf2dc553-134d-4d33-a883-7b430a1654c3" in namespace "projected-8376" to be "Succeeded or Failed" Jan 12 23:31:52.591: INFO: Pod "downwardapi-volume-cf2dc553-134d-4d33-a883-7b430a1654c3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.969828ms Jan 12 23:31:54.597: INFO: Pod "downwardapi-volume-cf2dc553-134d-4d33-a883-7b430a1654c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010379346s Jan 12 23:31:56.601: INFO: Pod "downwardapi-volume-cf2dc553-134d-4d33-a883-7b430a1654c3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014639719s Jan 12 23:31:58.610: INFO: Pod "downwardapi-volume-cf2dc553-134d-4d33-a883-7b430a1654c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.023771083s STEP: Saw pod success Jan 12 23:31:58.610: INFO: Pod "downwardapi-volume-cf2dc553-134d-4d33-a883-7b430a1654c3" satisfied condition "Succeeded or Failed" Jan 12 23:31:58.613: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-cf2dc553-134d-4d33-a883-7b430a1654c3 container client-container: STEP: delete the pod Jan 12 23:31:58.722: INFO: Waiting for pod downwardapi-volume-cf2dc553-134d-4d33-a883-7b430a1654c3 to disappear Jan 12 23:31:58.741: INFO: Pod downwardapi-volume-cf2dc553-134d-4d33-a883-7b430a1654c3 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:31:58.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8376" for this suite. 
• [SLOW TEST:6.236 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":309,"completed":133,"skipped":2274,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:31:58.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-test-volume-d651f25a-9bd2-465b-b196-ecdaeabe7c62 STEP: Creating a pod to test consume configMaps Jan 12 23:31:58.887: INFO: Waiting up to 5m0s for pod "pod-configmaps-3a855504-d3a8-4a18-9ede-798d23b95b8d" in namespace "configmap-3696" to be "Succeeded or Failed" Jan 12 23:31:58.904: INFO: Pod "pod-configmaps-3a855504-d3a8-4a18-9ede-798d23b95b8d": Phase="Pending", Reason="", readiness=false. Elapsed: 17.059725ms Jan 12 23:32:00.909: INFO: Pod "pod-configmaps-3a855504-d3a8-4a18-9ede-798d23b95b8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021968164s Jan 12 23:32:02.941: INFO: Pod "pod-configmaps-3a855504-d3a8-4a18-9ede-798d23b95b8d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054620552s STEP: Saw pod success Jan 12 23:32:02.941: INFO: Pod "pod-configmaps-3a855504-d3a8-4a18-9ede-798d23b95b8d" satisfied condition "Succeeded or Failed" Jan 12 23:32:02.945: INFO: Trying to get logs from node leguer-worker pod pod-configmaps-3a855504-d3a8-4a18-9ede-798d23b95b8d container agnhost-container: STEP: delete the pod Jan 12 23:32:02.980: INFO: Waiting for pod pod-configmaps-3a855504-d3a8-4a18-9ede-798d23b95b8d to disappear Jan 12 23:32:03.029: INFO: Pod pod-configmaps-3a855504-d3a8-4a18-9ede-798d23b95b8d no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:32:03.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3696" for this suite. 
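------------------------------
The ConfigMap-as-non-root case amounts to mounting a configMap volume into a pod whose securityContext forces a non-root UID. The sketch below is illustrative only; the UID, key name and image are assumptions rather than the suite's values.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    nonRootUID := int64(1000) // assumed UID; the suite picks its own
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-nonroot-example"},
        Spec: corev1.PodSpec{
            RestartPolicy:   corev1.RestartPolicyNever,
            SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRootUID},
            Containers: []corev1.Container{{
                Name:    "agnhost-container",
                Image:   "k8s.gcr.io/e2e-test-images/agnhost:2.21",
                Command: []string{"sh", "-c", "cat /etc/configmap-volume/data-1"}, // assumed key
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "configmap-volume",
                    MountPath: "/etc/configmap-volume",
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "configmap-volume",
                VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
                    },
                },
            }},
        },
    }
    fmt.Printf("%+v\n", pod)
}

Files projected from a ConfigMap default to mode 0644, so the non-root process can still read them; that readability is the property being checked.
------------------------------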
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":309,"completed":134,"skipped":2292,"failed":0} SSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:32:03.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name secret-test-da642614-24ff-45c2-9be9-0710155f939a STEP: Creating a pod to test consume secrets Jan 12 23:32:03.421: INFO: Waiting up to 5m0s for pod "pod-secrets-888a7da6-eaea-400d-8ddf-b364002f36ff" in namespace "secrets-3714" to be "Succeeded or Failed" Jan 12 23:32:03.443: INFO: Pod "pod-secrets-888a7da6-eaea-400d-8ddf-b364002f36ff": Phase="Pending", Reason="", readiness=false. Elapsed: 22.167692ms Jan 12 23:32:05.462: INFO: Pod "pod-secrets-888a7da6-eaea-400d-8ddf-b364002f36ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040916476s Jan 12 23:32:07.486: INFO: Pod "pod-secrets-888a7da6-eaea-400d-8ddf-b364002f36ff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065146873s Jan 12 23:32:09.492: INFO: Pod "pod-secrets-888a7da6-eaea-400d-8ddf-b364002f36ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.070704004s STEP: Saw pod success Jan 12 23:32:09.492: INFO: Pod "pod-secrets-888a7da6-eaea-400d-8ddf-b364002f36ff" satisfied condition "Succeeded or Failed" Jan 12 23:32:09.502: INFO: Trying to get logs from node leguer-worker2 pod pod-secrets-888a7da6-eaea-400d-8ddf-b364002f36ff container secret-volume-test: STEP: delete the pod Jan 12 23:32:09.602: INFO: Waiting for pod pod-secrets-888a7da6-eaea-400d-8ddf-b364002f36ff to disappear Jan 12 23:32:09.652: INFO: Pod pod-secrets-888a7da6-eaea-400d-8ddf-b364002f36ff no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:32:09.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3714" for this suite. 
• [SLOW TEST:6.641 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":309,"completed":135,"skipped":2298,"failed":0} SSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:32:09.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name cm-test-opt-del-bb6e9593-4b37-4651-9ad6-02bf5493dfcf STEP: Creating configMap with name cm-test-opt-upd-264d879e-2b08-4a48-a70f-0c84ec834477 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-bb6e9593-4b37-4651-9ad6-02bf5493dfcf STEP: Updating configmap cm-test-opt-upd-264d879e-2b08-4a48-a70f-0c84ec834477 STEP: Creating configMap with name cm-test-opt-create-2fcc1f25-7756-41f5-96bd-ff3699a38d0f STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:32:21.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9135" for this suite. 
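------------------------------
The "optional updates" case wires three optional configMap volumes into one long-running pod, then deletes one ConfigMap, updates another and creates the third, waiting for the mounted files to catch up. A sketch of the pod side (illustrative; the volume names reuse the cm-test-opt-* prefixes from the log, everything else is assumed):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    optional := true
    cmVolume := func(volName, cmName string) corev1.Volume {
        return corev1.Volume{
            Name: volName,
            VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
                    // Optional: the pod starts even if the ConfigMap is missing,
                    // and the kubelet projects it in once it appears.
                    Optional: &optional,
                },
            },
        }
    }
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-optional-example"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:    "createcm-volume-test",
                Image:   "k8s.gcr.io/e2e-test-images/agnhost:2.21",
                Command: []string{"sh", "-c", "while true; do cat /etc/cm-volumes/create/data-1 2>/dev/null; sleep 5; done"},
                VolumeMounts: []corev1.VolumeMount{
                    {Name: "delcm-volume", MountPath: "/etc/cm-volumes/delete"},
                    {Name: "updcm-volume", MountPath: "/etc/cm-volumes/update"},
                    {Name: "createcm-volume", MountPath: "/etc/cm-volumes/create"},
                },
            }},
            Volumes: []corev1.Volume{
                cmVolume("delcm-volume", "cm-test-opt-del"),
                cmVolume("updcm-volume", "cm-test-opt-upd"),
                cmVolume("createcm-volume", "cm-test-opt-create"),
            },
        },
    }
    fmt.Printf("%+v\n", pod)
}

Because Optional is set, the pod is admitted while cm-test-opt-create does not yet exist; the kubelet re-projects the volumes on its sync loop, which is the "waiting to observe update in volume" step above.
------------------------------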
• [SLOW TEST:12.297 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":309,"completed":136,"skipped":2304,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:32:21.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 12 23:32:22.070: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jan 12 23:32:25.630: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2486 --namespace=crd-publish-openapi-2486 create -f -' Jan 12 23:32:32.091: INFO: stderr: "" Jan 12 23:32:32.092: INFO: stdout: "e2e-test-crd-publish-openapi-3858-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jan 12 23:32:32.092: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2486 --namespace=crd-publish-openapi-2486 delete e2e-test-crd-publish-openapi-3858-crds test-cr' Jan 12 23:32:32.270: INFO: stderr: "" Jan 12 23:32:32.270: INFO: stdout: "e2e-test-crd-publish-openapi-3858-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Jan 12 23:32:32.270: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2486 --namespace=crd-publish-openapi-2486 apply -f -' Jan 12 23:32:32.648: INFO: stderr: "" Jan 12 23:32:32.648: INFO: stdout: "e2e-test-crd-publish-openapi-3858-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jan 12 23:32:32.648: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2486 --namespace=crd-publish-openapi-2486 delete e2e-test-crd-publish-openapi-3858-crds test-cr' Jan 12 23:32:32.770: INFO: stderr: "" Jan 12 23:32:32.770: INFO: stdout: "e2e-test-crd-publish-openapi-3858-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jan 12 23:32:32.770: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config 
--namespace=crd-publish-openapi-2486 explain e2e-test-crd-publish-openapi-3858-crds' Jan 12 23:32:33.036: INFO: stderr: "" Jan 12 23:32:33.037: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3858-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:32:36.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2486" for this suite. • [SLOW TEST:14.634 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":309,"completed":137,"skipped":2317,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:32:36.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 12 23:32:40.819: INFO: Deleting pod "var-expansion-20c126c6-2684-48d3-bb81-66825d472df9" in namespace "var-expansion-4462" Jan 12 23:32:40.824: INFO: Wait up to 5m0s for pod "var-expansion-20c126c6-2684-48d3-bb81-66825d472df9" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:32:50.935: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4462" for this suite. • [SLOW TEST:14.333 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":309,"completed":138,"skipped":2328,"failed":0} SSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:32:50.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name s-test-opt-del-cf098c8f-aa54-4ab3-bf3a-7b00dd299058 STEP: Creating secret with name s-test-opt-upd-a362c361-9b94-426a-8114-82e838661906 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-cf098c8f-aa54-4ab3-bf3a-7b00dd299058 STEP: Updating secret s-test-opt-upd-a362c361-9b94-426a-8114-82e838661906 STEP: Creating secret with name s-test-opt-create-caa31680-0c52-40d3-b453-71ba1c3883b1 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:33:01.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8065" for this suite. 
• [SLOW TEST:10.280 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":309,"completed":139,"skipped":2332,"failed":0} SS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:33:01.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-5454 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-5454 STEP: creating replication controller externalsvc in namespace services-5454 I0112 23:33:01.469749 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-5454, replica count: 2 I0112 23:33:04.520256 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0112 23:33:07.520470 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Jan 12 23:33:07.702: INFO: Creating new exec pod Jan 12 23:33:13.803: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-5454 exec execpodcnbsc -- /bin/sh -x -c nslookup clusterip-service.services-5454.svc.cluster.local' Jan 12 23:33:14.060: INFO: stderr: "I0112 23:33:13.953681 2203 log.go:181] (0xc000b9b130) (0xc00025e820) Create stream\nI0112 23:33:13.953746 2203 log.go:181] (0xc000b9b130) (0xc00025e820) Stream added, broadcasting: 1\nI0112 23:33:13.956474 2203 log.go:181] (0xc000b9b130) Reply frame received for 1\nI0112 23:33:13.956551 2203 log.go:181] (0xc000b9b130) (0xc000b94000) Create stream\nI0112 23:33:13.956568 2203 log.go:181] (0xc000b9b130) (0xc000b94000) Stream added, broadcasting: 3\nI0112 23:33:13.958011 2203 log.go:181] (0xc000b9b130) Reply frame received for 3\nI0112 23:33:13.958064 2203 log.go:181] (0xc000b9b130) (0xc0005bc000) Create stream\nI0112 23:33:13.958094 2203 log.go:181] (0xc000b9b130) (0xc0005bc000) Stream added, broadcasting: 5\nI0112 23:33:13.959090 2203 log.go:181] (0xc000b9b130) Reply frame received for 5\nI0112 
23:33:14.031982 2203 log.go:181] (0xc000b9b130) Data frame received for 5\nI0112 23:33:14.032012 2203 log.go:181] (0xc0005bc000) (5) Data frame handling\nI0112 23:33:14.032031 2203 log.go:181] (0xc0005bc000) (5) Data frame sent\n+ nslookup clusterip-service.services-5454.svc.cluster.local\nI0112 23:33:14.050348 2203 log.go:181] (0xc000b9b130) Data frame received for 3\nI0112 23:33:14.050375 2203 log.go:181] (0xc000b94000) (3) Data frame handling\nI0112 23:33:14.050391 2203 log.go:181] (0xc000b94000) (3) Data frame sent\nI0112 23:33:14.051156 2203 log.go:181] (0xc000b9b130) Data frame received for 3\nI0112 23:33:14.051185 2203 log.go:181] (0xc000b94000) (3) Data frame handling\nI0112 23:33:14.051209 2203 log.go:181] (0xc000b94000) (3) Data frame sent\nI0112 23:33:14.051674 2203 log.go:181] (0xc000b9b130) Data frame received for 5\nI0112 23:33:14.051717 2203 log.go:181] (0xc0005bc000) (5) Data frame handling\nI0112 23:33:14.051750 2203 log.go:181] (0xc000b9b130) Data frame received for 3\nI0112 23:33:14.051770 2203 log.go:181] (0xc000b94000) (3) Data frame handling\nI0112 23:33:14.053653 2203 log.go:181] (0xc000b9b130) Data frame received for 1\nI0112 23:33:14.053683 2203 log.go:181] (0xc00025e820) (1) Data frame handling\nI0112 23:33:14.053698 2203 log.go:181] (0xc00025e820) (1) Data frame sent\nI0112 23:33:14.053714 2203 log.go:181] (0xc000b9b130) (0xc00025e820) Stream removed, broadcasting: 1\nI0112 23:33:14.053739 2203 log.go:181] (0xc000b9b130) Go away received\nI0112 23:33:14.055924 2203 log.go:181] (0xc000b9b130) (0xc00025e820) Stream removed, broadcasting: 1\nI0112 23:33:14.056175 2203 log.go:181] (0xc000b9b130) (0xc000b94000) Stream removed, broadcasting: 3\nI0112 23:33:14.056192 2203 log.go:181] (0xc000b9b130) (0xc0005bc000) Stream removed, broadcasting: 5\n" Jan 12 23:33:14.060: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-5454.svc.cluster.local\tcanonical name = externalsvc.services-5454.svc.cluster.local.\nName:\texternalsvc.services-5454.svc.cluster.local\nAddress: 10.96.255.147\n\n" STEP: deleting ReplicationController externalsvc in namespace services-5454, will wait for the garbage collector to delete the pods Jan 12 23:33:14.117: INFO: Deleting ReplicationController externalsvc took: 4.696299ms Jan 12 23:33:14.718: INFO: Terminating ReplicationController externalsvc pods took: 600.244595ms Jan 12 23:33:40.043: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:33:40.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5454" for this suite. 
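------------------------------
The Services case above first creates a plain ClusterIP service, then mutates it into type ExternalName pointing at another in-cluster service's DNS name, and uses nslookup from an exec pod to confirm the cluster DNS now answers with a CNAME. An illustrative sketch of the two states of the Service object (field values taken from the log where available, otherwise assumed):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // State 1: the ClusterIP service the test creates first.
    svc := corev1.Service{
        ObjectMeta: metav1.ObjectMeta{Name: "clusterip-service", Namespace: "services-5454"},
        Spec: corev1.ServiceSpec{
            Type:     corev1.ServiceTypeClusterIP,
            Selector: map[string]string{"name": "externalsvc"}, // assumed selector
            Ports:    []corev1.ServicePort{{Port: 80}},
        },
    }
    // State 2: flipped to ExternalName. Cluster DNS then answers lookups of
    // clusterip-service.services-5454.svc.cluster.local with the CNAME shown
    // in the nslookup output above.
    svc.Spec.Type = corev1.ServiceTypeExternalName
    svc.Spec.ExternalName = "externalsvc.services-5454.svc.cluster.local"
    svc.Spec.ClusterIP = "" // an ExternalName service is pure DNS and holds no virtual IP
    svc.Spec.Selector = nil
    svc.Spec.Ports = nil
    fmt.Printf("%+v\n", svc)
}

Switching to ExternalName requires dropping the allocated clusterIP; the selector and ports are no longer meaningful and are cleared here as well.
------------------------------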
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:38.893 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":309,"completed":140,"skipped":2334,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:33:40.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test substitution in volume subpath Jan 12 23:33:40.235: INFO: Waiting up to 5m0s for pod "var-expansion-2377e09f-997f-473f-ad61-9e3fb4b6a692" in namespace "var-expansion-6325" to be "Succeeded or Failed" Jan 12 23:33:40.251: INFO: Pod "var-expansion-2377e09f-997f-473f-ad61-9e3fb4b6a692": Phase="Pending", Reason="", readiness=false. Elapsed: 16.258802ms Jan 12 23:33:42.256: INFO: Pod "var-expansion-2377e09f-997f-473f-ad61-9e3fb4b6a692": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020864302s Jan 12 23:33:44.261: INFO: Pod "var-expansion-2377e09f-997f-473f-ad61-9e3fb4b6a692": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026060466s STEP: Saw pod success Jan 12 23:33:44.261: INFO: Pod "var-expansion-2377e09f-997f-473f-ad61-9e3fb4b6a692" satisfied condition "Succeeded or Failed" Jan 12 23:33:44.265: INFO: Trying to get logs from node leguer-worker pod var-expansion-2377e09f-997f-473f-ad61-9e3fb4b6a692 container dapi-container: STEP: delete the pod Jan 12 23:33:44.411: INFO: Waiting for pod var-expansion-2377e09f-997f-473f-ad61-9e3fb4b6a692 to disappear Jan 12 23:33:44.517: INFO: Pod var-expansion-2377e09f-997f-473f-ad61-9e3fb4b6a692 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:33:44.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6325" for this suite. 
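------------------------------
The volume-subpath case above exercises subPathExpr expansion on a volumeMount: the expression is expanded from the container's environment, so the container sees a per-pod subdirectory of the shared volume. Sketch below is illustrative; the volume name, mount path and command are assumptions.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-subpath-example"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "dapi-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "ls -la /volume_mount"},
                Env: []corev1.EnvVar{{
                    Name: "POD_NAME",
                    ValueFrom: &corev1.EnvVarSource{
                        FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                    },
                }},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "workdir1",
                    MountPath: "/volume_mount",
                    // Expanded from the container's env, so each pod lands in its
                    // own subdirectory of the emptyDir.
                    SubPathExpr: "$(POD_NAME)",
                }},
            }},
            Volumes: []corev1.Volume{{
                Name:         "workdir1",
                VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
            }},
        },
    }
    fmt.Printf("%+v\n", pod)
}

Only $(VAR) references to declared env vars are expanded in subPathExpr; other shell-like syntax is not, which is why the backtick variant in the [Slow] case earlier is expected to fail.
------------------------------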
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":309,"completed":141,"skipped":2364,"failed":0} ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:33:44.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-test-volume-map-33fbf9f7-1dc9-438d-a82a-4a670fe36694 STEP: Creating a pod to test consume configMaps Jan 12 23:33:44.597: INFO: Waiting up to 5m0s for pod "pod-configmaps-8b04f889-d070-4983-ba31-27b04d4f8522" in namespace "configmap-3876" to be "Succeeded or Failed" Jan 12 23:33:44.648: INFO: Pod "pod-configmaps-8b04f889-d070-4983-ba31-27b04d4f8522": Phase="Pending", Reason="", readiness=false. Elapsed: 51.508896ms Jan 12 23:33:46.653: INFO: Pod "pod-configmaps-8b04f889-d070-4983-ba31-27b04d4f8522": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056416326s Jan 12 23:33:48.659: INFO: Pod "pod-configmaps-8b04f889-d070-4983-ba31-27b04d4f8522": Phase="Running", Reason="", readiness=true. Elapsed: 4.061858323s Jan 12 23:33:50.664: INFO: Pod "pod-configmaps-8b04f889-d070-4983-ba31-27b04d4f8522": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.067228479s STEP: Saw pod success Jan 12 23:33:50.664: INFO: Pod "pod-configmaps-8b04f889-d070-4983-ba31-27b04d4f8522" satisfied condition "Succeeded or Failed" Jan 12 23:33:50.667: INFO: Trying to get logs from node leguer-worker pod pod-configmaps-8b04f889-d070-4983-ba31-27b04d4f8522 container agnhost-container: STEP: delete the pod Jan 12 23:33:50.734: INFO: Waiting for pod pod-configmaps-8b04f889-d070-4983-ba31-27b04d4f8522 to disappear Jan 12 23:33:50.738: INFO: Pod pod-configmaps-8b04f889-d070-4983-ba31-27b04d4f8522 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:33:50.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3876" for this suite. 
• [SLOW TEST:6.220 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":142,"skipped":2364,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:33:50.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating the pod Jan 12 23:33:55.406: INFO: Successfully updated pod "labelsupdated79659dc-9f82-4f95-ad7c-9eb4e2cce6ca" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:33:57.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4663" for this suite. 
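------------------------------
The "update labels on modification" case relies on the kubelet re-projecting downward-API volume files when pod metadata changes. A sketch of such a pod (illustrative; label keys, image and paths are assumptions):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            Name:   "labelsupdate-example",
            Labels: map[string]string{"key1": "value1"},
        },
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:    "client-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path: "labels",
                            // metadata.labels is re-projected by the kubelet when
                            // the pod's labels are patched.
                            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
                        }},
                    },
                },
            }},
        },
    }
    fmt.Printf("%+v\n", pod)
}

After the test patches the pod's labels ("Successfully updated pod" above), the mounted labels file is rewritten on the kubelet's next sync without restarting the container.
------------------------------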
• [SLOW TEST:6.710 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":309,"completed":143,"skipped":2397,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:33:57.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [k8s.io] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:33:57.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-3058" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":309,"completed":144,"skipped":2409,"failed":0} SS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:33:57.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 12 23:33:57.965: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Jan 12 23:33:57.985: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:33:57.990: INFO: Number of nodes with available pods: 0 Jan 12 23:33:57.990: INFO: Node leguer-worker is running more than one daemon pod Jan 12 23:33:58.996: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:33:59.000: INFO: Number of nodes with available pods: 0 Jan 12 23:33:59.000: INFO: Node leguer-worker is running more than one daemon pod Jan 12 23:33:59.997: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:00.000: INFO: Number of nodes with available pods: 0 Jan 12 23:34:00.000: INFO: Node leguer-worker is running more than one daemon pod Jan 12 23:34:00.996: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:01.000: INFO: Number of nodes with available pods: 0 Jan 12 23:34:01.000: INFO: Node leguer-worker is running more than one daemon pod Jan 12 23:34:01.996: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:02.000: INFO: Number of nodes with available pods: 0 Jan 12 23:34:02.000: INFO: Node leguer-worker is running more than one daemon pod Jan 12 23:34:02.998: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:03.003: INFO: Number of nodes with available pods: 2 Jan 12 23:34:03.003: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Jan 12 23:34:03.213: INFO: Wrong image for pod: daemon-set-ch2ll. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:03.213: INFO: Wrong image for pod: daemon-set-vjxrb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:03.219: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:04.235: INFO: Wrong image for pod: daemon-set-ch2ll. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:04.235: INFO: Wrong image for pod: daemon-set-vjxrb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:04.238: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:05.446: INFO: Wrong image for pod: daemon-set-ch2ll. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:05.446: INFO: Pod daemon-set-ch2ll is not available Jan 12 23:34:05.446: INFO: Wrong image for pod: daemon-set-vjxrb. 
Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:05.450: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:06.224: INFO: Wrong image for pod: daemon-set-ch2ll. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:06.224: INFO: Pod daemon-set-ch2ll is not available Jan 12 23:34:06.224: INFO: Wrong image for pod: daemon-set-vjxrb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:06.229: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:07.225: INFO: Wrong image for pod: daemon-set-ch2ll. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:07.225: INFO: Pod daemon-set-ch2ll is not available Jan 12 23:34:07.225: INFO: Wrong image for pod: daemon-set-vjxrb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:07.229: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:08.225: INFO: Wrong image for pod: daemon-set-ch2ll. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:08.225: INFO: Pod daemon-set-ch2ll is not available Jan 12 23:34:08.225: INFO: Wrong image for pod: daemon-set-vjxrb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:08.230: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:09.224: INFO: Wrong image for pod: daemon-set-ch2ll. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:09.225: INFO: Pod daemon-set-ch2ll is not available Jan 12 23:34:09.225: INFO: Wrong image for pod: daemon-set-vjxrb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:09.229: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:10.222: INFO: Pod daemon-set-nx9s9 is not available Jan 12 23:34:10.222: INFO: Wrong image for pod: daemon-set-vjxrb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:10.248: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:11.224: INFO: Pod daemon-set-nx9s9 is not available Jan 12 23:34:11.224: INFO: Wrong image for pod: daemon-set-vjxrb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. 
Jan 12 23:34:11.228: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:12.259: INFO: Pod daemon-set-nx9s9 is not available Jan 12 23:34:12.259: INFO: Wrong image for pod: daemon-set-vjxrb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:12.263: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:13.254: INFO: Wrong image for pod: daemon-set-vjxrb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:13.284: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:14.224: INFO: Wrong image for pod: daemon-set-vjxrb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:14.228: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:15.226: INFO: Wrong image for pod: daemon-set-vjxrb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:15.226: INFO: Pod daemon-set-vjxrb is not available Jan 12 23:34:15.231: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:16.224: INFO: Wrong image for pod: daemon-set-vjxrb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:16.224: INFO: Pod daemon-set-vjxrb is not available Jan 12 23:34:16.227: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:17.225: INFO: Wrong image for pod: daemon-set-vjxrb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:17.225: INFO: Pod daemon-set-vjxrb is not available Jan 12 23:34:17.229: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:18.225: INFO: Wrong image for pod: daemon-set-vjxrb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:18.225: INFO: Pod daemon-set-vjxrb is not available Jan 12 23:34:18.229: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:19.225: INFO: Wrong image for pod: daemon-set-vjxrb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:19.225: INFO: Pod daemon-set-vjxrb is not available Jan 12 23:34:19.230: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:20.225: INFO: Wrong image for pod: daemon-set-vjxrb. 
Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:20.225: INFO: Pod daemon-set-vjxrb is not available Jan 12 23:34:20.230: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:21.225: INFO: Wrong image for pod: daemon-set-vjxrb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:21.225: INFO: Pod daemon-set-vjxrb is not available Jan 12 23:34:21.230: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:22.224: INFO: Wrong image for pod: daemon-set-vjxrb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:22.224: INFO: Pod daemon-set-vjxrb is not available Jan 12 23:34:22.228: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:23.225: INFO: Wrong image for pod: daemon-set-vjxrb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:23.225: INFO: Pod daemon-set-vjxrb is not available Jan 12 23:34:23.230: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:24.225: INFO: Wrong image for pod: daemon-set-vjxrb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:24.225: INFO: Pod daemon-set-vjxrb is not available Jan 12 23:34:24.230: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:25.226: INFO: Wrong image for pod: daemon-set-vjxrb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:25.226: INFO: Pod daemon-set-vjxrb is not available Jan 12 23:34:25.230: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:26.225: INFO: Wrong image for pod: daemon-set-vjxrb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:26.225: INFO: Pod daemon-set-vjxrb is not available Jan 12 23:34:26.229: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:27.225: INFO: Wrong image for pod: daemon-set-vjxrb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:27.225: INFO: Pod daemon-set-vjxrb is not available Jan 12 23:34:27.230: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:28.224: INFO: Wrong image for pod: daemon-set-vjxrb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. 
Jan 12 23:34:28.224: INFO: Pod daemon-set-vjxrb is not available Jan 12 23:34:28.229: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:29.225: INFO: Wrong image for pod: daemon-set-vjxrb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:29.225: INFO: Pod daemon-set-vjxrb is not available Jan 12 23:34:29.230: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:30.225: INFO: Wrong image for pod: daemon-set-vjxrb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:30.225: INFO: Pod daemon-set-vjxrb is not available Jan 12 23:34:30.230: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:31.226: INFO: Wrong image for pod: daemon-set-vjxrb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:31.226: INFO: Pod daemon-set-vjxrb is not available Jan 12 23:34:31.230: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:32.224: INFO: Wrong image for pod: daemon-set-vjxrb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:32.224: INFO: Pod daemon-set-vjxrb is not available Jan 12 23:34:32.228: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:33.225: INFO: Wrong image for pod: daemon-set-vjxrb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:33.225: INFO: Pod daemon-set-vjxrb is not available Jan 12 23:34:33.230: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:34.224: INFO: Wrong image for pod: daemon-set-vjxrb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:34.224: INFO: Pod daemon-set-vjxrb is not available Jan 12 23:34:34.228: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:35.224: INFO: Wrong image for pod: daemon-set-vjxrb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:35.224: INFO: Pod daemon-set-vjxrb is not available Jan 12 23:34:35.228: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:36.225: INFO: Wrong image for pod: daemon-set-vjxrb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. 
Jan 12 23:34:36.225: INFO: Pod daemon-set-vjxrb is not available Jan 12 23:34:36.229: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:37.224: INFO: Wrong image for pod: daemon-set-vjxrb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:37.224: INFO: Pod daemon-set-vjxrb is not available Jan 12 23:34:37.228: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:38.224: INFO: Wrong image for pod: daemon-set-vjxrb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:38.224: INFO: Pod daemon-set-vjxrb is not available Jan 12 23:34:38.228: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:39.225: INFO: Wrong image for pod: daemon-set-vjxrb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:39.225: INFO: Pod daemon-set-vjxrb is not available Jan 12 23:34:39.230: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:40.225: INFO: Wrong image for pod: daemon-set-vjxrb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:40.225: INFO: Pod daemon-set-vjxrb is not available Jan 12 23:34:40.228: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:41.225: INFO: Wrong image for pod: daemon-set-vjxrb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:41.225: INFO: Pod daemon-set-vjxrb is not available Jan 12 23:34:41.229: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:42.224: INFO: Wrong image for pod: daemon-set-vjxrb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:42.224: INFO: Pod daemon-set-vjxrb is not available Jan 12 23:34:42.227: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:43.225: INFO: Wrong image for pod: daemon-set-vjxrb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:43.225: INFO: Pod daemon-set-vjxrb is not available Jan 12 23:34:43.230: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:44.225: INFO: Wrong image for pod: daemon-set-vjxrb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. 
Jan 12 23:34:44.225: INFO: Pod daemon-set-vjxrb is not available Jan 12 23:34:44.229: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:45.227: INFO: Wrong image for pod: daemon-set-vjxrb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:45.227: INFO: Pod daemon-set-vjxrb is not available Jan 12 23:34:45.231: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:46.224: INFO: Wrong image for pod: daemon-set-vjxrb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:46.224: INFO: Pod daemon-set-vjxrb is not available Jan 12 23:34:46.228: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:47.226: INFO: Wrong image for pod: daemon-set-vjxrb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:47.226: INFO: Pod daemon-set-vjxrb is not available Jan 12 23:34:47.231: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:48.223: INFO: Wrong image for pod: daemon-set-vjxrb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:48.223: INFO: Pod daemon-set-vjxrb is not available Jan 12 23:34:48.242: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:49.226: INFO: Wrong image for pod: daemon-set-vjxrb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 12 23:34:49.226: INFO: Pod daemon-set-vjxrb is not available Jan 12 23:34:49.231: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:50.272: INFO: Pod daemon-set-xgvfs is not available Jan 12 23:34:50.286: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Jan 12 23:34:50.291: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:50.294: INFO: Number of nodes with available pods: 1 Jan 12 23:34:50.294: INFO: Node leguer-worker is running more than one daemon pod Jan 12 23:34:51.301: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:51.304: INFO: Number of nodes with available pods: 1 Jan 12 23:34:51.304: INFO: Node leguer-worker is running more than one daemon pod Jan 12 23:34:52.299: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:52.303: INFO: Number of nodes with available pods: 1 Jan 12 23:34:52.303: INFO: Node leguer-worker is running more than one daemon pod Jan 12 23:34:53.326: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:34:53.330: INFO: Number of nodes with available pods: 2 Jan 12 23:34:53.330: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5352, will wait for the garbage collector to delete the pods Jan 12 23:34:53.408: INFO: Deleting DaemonSet.extensions daemon-set took: 7.083717ms Jan 12 23:34:54.008: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.303445ms Jan 12 23:35:50.213: INFO: Number of nodes with available pods: 0 Jan 12 23:35:50.213: INFO: Number of running nodes: 0, number of available pods: 0 Jan 12 23:35:50.216: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"428251"},"items":null} Jan 12 23:35:50.218: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"428251"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:35:50.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5352" for this suite. 
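
The long polling block above is the suite waiting for a RollingUpdate DaemonSet to replace the old docker.io/library/httpd:2.4.38-alpine pod with the new k8s.gcr.io/e2e-test-images/agnhost:2.21 image on every schedulable node. A minimal client-go sketch of that kind of image update (not the framework's own code; the kubeconfig path and error handling are assumptions, the namespace and images come from the log):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig location, matching the suite's /root/.kube/config.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        ctx := context.Background()
        // Fetch the DaemonSet and swap the container image; with the
        // RollingUpdate strategy the controller then replaces pods node by node,
        // which is what the "Wrong image for pod" polling above is waiting out.
        ds, err := client.AppsV1().DaemonSets("daemonsets-5352").Get(ctx, "daemon-set", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        ds.Spec.Template.Spec.Containers[0].Image = "k8s.gcr.io/e2e-test-images/agnhost:2.21"
        if _, err := client.AppsV1().DaemonSets("daemonsets-5352").Update(ctx, ds, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("daemon-set image updated; controller will roll the pods")
    }
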
• [SLOW TEST:112.410 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":309,"completed":145,"skipped":2411,"failed":0} S ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:35:50.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-test-upd-96ba96bf-0639-4d48-acc2-8c2d0aa7f4e4 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:35:56.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2921" for this suite. 
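
The ConfigMap spec above projects both text and binary keys from a single ConfigMap into a pod volume and checks that each is reflected. A hedged client-go sketch of creating such an object (the object name, key names, and byte values are illustrative; only the namespace is taken from the log):

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        cm := &corev1.ConfigMap{
            ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd-example"},
            // Text keys go in Data, arbitrary bytes in BinaryData; both can be
            // mounted from the same volume and read back inside the pod.
            Data:       map[string]string{"data-1": "value-1"},
            BinaryData: map[string][]byte{"dump": {0xff, 0xfe, 0xfd}},
        }
        if _, err := client.CoreV1().ConfigMaps("configmap-2921").Create(context.Background(), cm, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }
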
• [SLOW TEST:6.187 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":309,"completed":146,"skipped":2412,"failed":0} [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:35:56.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Jan 12 23:35:56.557: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:36:50.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-554" for this suite. 
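
The pod-lifecycle spec above sets up a watch, submits a pod, and then verifies that both the creation and the graceful deletion arrive as watch notifications. A rough client-go equivalent of the watch step (the label selector is a placeholder, not the spec's actual label; the namespace is from the log):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/watch"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Watch pods carrying a (placeholder) test label in the namespace the
        // suite created for this spec.
        w, err := client.CoreV1().Pods("pods-554").Watch(context.Background(),
            metav1.ListOptions{LabelSelector: "test=watch-example"})
        if err != nil {
            panic(err)
        }
        defer w.Stop()

        // Print ADDED / MODIFIED / DELETED notifications as the pod is submitted
        // and later removed, mirroring what the spec asserts on.
        for ev := range w.ResultChan() {
            fmt.Println(ev.Type)
            if ev.Type == watch.Deleted {
                return
            }
        }
    }
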
• [SLOW TEST:53.759 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":309,"completed":147,"skipped":2412,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-instrumentation] Events API should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:36:50.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Create set of events STEP: get a list of Events with a label in the current namespace STEP: delete a list of events Jan 12 23:36:50.311: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:36:50.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-3528" for this suite. 
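
The events.k8s.io spec above creates a labelled set of events and removes them with a single DeleteCollection call, then checks that a list with the same selector comes back empty. A minimal sketch with the events/v1 client (the label selector is a placeholder; the namespace is the one from the log):

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Delete every event in the namespace that carries the test label; a
        // follow-up List with the same selector should then return no items.
        err = client.EventsV1().Events("events-3528").DeleteCollection(
            context.Background(),
            metav1.DeleteOptions{},
            metav1.ListOptions{LabelSelector: "testevent-set=true"},
        )
        if err != nil {
            panic(err)
        }
    }
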
•{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":309,"completed":148,"skipped":2428,"failed":0} SSSS ------------------------------ [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:36:50.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a test event STEP: listing events in all namespaces STEP: listing events in test namespace STEP: listing events with field selection filtering on source STEP: listing events with field selection filtering on reportingController STEP: getting the test event STEP: patching the test event STEP: getting the test event STEP: updating the test event STEP: getting the test event STEP: deleting the test event STEP: listing events in all namespaces STEP: listing events in test namespace [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:36:50.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-8338" for this suite. •{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":309,"completed":149,"skipped":2432,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:36:50.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should get a host IP [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating pod Jan 12 23:36:54.618: INFO: Pod pod-hostip-24d04fee-6e3d-43a5-9dd8-75d3dcb2e101 has hostIP: 172.18.0.13 [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:36:54.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8649" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":309,"completed":150,"skipped":2490,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:36:54.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should support proxy with --port 0 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: starting the proxy server Jan 12 23:36:54.687: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-6641 proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:36:54.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6641" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":309,"completed":151,"skipped":2496,"failed":0} ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:36:54.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 [It] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 12 23:36:54.953: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Jan 12 23:36:54.959: INFO: Number of nodes with available pods: 0 Jan 12 23:36:54.959: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Jan 12 23:36:55.038: INFO: Number of nodes with available pods: 0 Jan 12 23:36:55.038: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:36:56.043: INFO: Number of nodes with available pods: 0 Jan 12 23:36:56.043: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:36:57.042: INFO: Number of nodes with available pods: 0 Jan 12 23:36:57.042: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:36:58.044: INFO: Number of nodes with available pods: 0 Jan 12 23:36:58.044: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:36:59.043: INFO: Number of nodes with available pods: 1 Jan 12 23:36:59.043: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Jan 12 23:36:59.080: INFO: Number of nodes with available pods: 1 Jan 12 23:36:59.080: INFO: Number of running nodes: 0, number of available pods: 1 Jan 12 23:37:00.969: INFO: Number of nodes with available pods: 0 Jan 12 23:37:00.969: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jan 12 23:37:01.394: INFO: Number of nodes with available pods: 0 Jan 12 23:37:01.394: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:37:02.399: INFO: Number of nodes with available pods: 0 Jan 12 23:37:02.399: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:37:03.399: INFO: Number of nodes with available pods: 0 Jan 12 23:37:03.399: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:37:04.399: INFO: Number of nodes with available pods: 0 Jan 12 23:37:04.399: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:37:05.399: INFO: Number of nodes with available pods: 0 Jan 12 23:37:05.399: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:37:06.399: INFO: Number of nodes with available pods: 0 Jan 12 23:37:06.399: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:37:07.398: INFO: Number of nodes with available pods: 0 Jan 12 23:37:07.398: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:37:08.399: INFO: Number of nodes with available pods: 0 Jan 12 23:37:08.399: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:37:09.401: INFO: Number of nodes with available pods: 0 Jan 12 23:37:09.401: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:37:10.399: INFO: Number of nodes with available pods: 0 Jan 12 23:37:10.399: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:37:11.398: INFO: Number of nodes with available pods: 0 Jan 12 23:37:11.398: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:37:12.400: INFO: Number of nodes with available pods: 0 Jan 12 23:37:12.400: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:37:13.400: INFO: Number of nodes with available pods: 0 Jan 12 23:37:13.400: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:37:14.400: INFO: Number of nodes with available pods: 1 Jan 12 23:37:14.400: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace 
daemonsets-9005, will wait for the garbage collector to delete the pods Jan 12 23:37:14.464: INFO: Deleting DaemonSet.extensions daemon-set took: 6.701454ms Jan 12 23:37:15.064: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.32273ms Jan 12 23:37:19.876: INFO: Number of nodes with available pods: 0 Jan 12 23:37:19.876: INFO: Number of running nodes: 0, number of available pods: 0 Jan 12 23:37:19.879: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"428637"},"items":null} Jan 12 23:37:19.881: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"428637"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:37:19.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9005" for this suite. • [SLOW TEST:25.119 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":309,"completed":152,"skipped":2496,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:37:19.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name projected-configmap-test-volume-map-69739f02-ee30-457a-a340-70182fbde96d STEP: Creating a pod to test consume configMaps Jan 12 23:37:20.015: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fe787810-e5dd-402d-988d-b446d4dbcf7e" in namespace "projected-339" to be "Succeeded or Failed" Jan 12 23:37:20.031: INFO: Pod "pod-projected-configmaps-fe787810-e5dd-402d-988d-b446d4dbcf7e": Phase="Pending", Reason="", readiness=false. Elapsed: 15.943208ms Jan 12 23:37:22.036: INFO: Pod "pod-projected-configmaps-fe787810-e5dd-402d-988d-b446d4dbcf7e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020583171s Jan 12 23:37:24.041: INFO: Pod "pod-projected-configmaps-fe787810-e5dd-402d-988d-b446d4dbcf7e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.02549853s STEP: Saw pod success Jan 12 23:37:24.041: INFO: Pod "pod-projected-configmaps-fe787810-e5dd-402d-988d-b446d4dbcf7e" satisfied condition "Succeeded or Failed" Jan 12 23:37:24.044: INFO: Trying to get logs from node leguer-worker2 pod pod-projected-configmaps-fe787810-e5dd-402d-988d-b446d4dbcf7e container agnhost-container: STEP: delete the pod Jan 12 23:37:24.101: INFO: Waiting for pod pod-projected-configmaps-fe787810-e5dd-402d-988d-b446d4dbcf7e to disappear Jan 12 23:37:24.113: INFO: Pod pod-projected-configmaps-fe787810-e5dd-402d-988d-b446d4dbcf7e no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:37:24.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-339" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":309,"completed":153,"skipped":2506,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:37:24.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:37:24.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-1640" for this suite. 
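
The sig-api-machinery Events spec above exercises the core/v1 events endpoints: create, list, patch, fetch, update, delete. A hedged sketch of just the patch step (the event name and patch body are illustrative; only the namespace comes from the log):

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Strategic-merge patch that rewrites the event's message field, the
        // same kind of in-place update the spec verifies before deleting it.
        patch := []byte(`{"message":"patched by example"}`)
        _, err = client.CoreV1().Events("events-1640").Patch(
            context.Background(),
            "example-test-event", // illustrative name, not from the log
            types.StrategicMergePatchType,
            patch,
            metav1.PatchOptions{},
        )
        if err != nil {
            panic(err)
        }
    }
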
•{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":309,"completed":154,"skipped":2528,"failed":0} SSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:37:24.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 12 23:37:24.388: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:37:28.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3114" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":309,"completed":155,"skipped":2533,"failed":0} ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:37:28.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test env composition Jan 12 23:37:28.727: INFO: Waiting up to 5m0s for pod "var-expansion-8e89240a-9f81-4f4e-aa92-68e1d70181a3" in namespace "var-expansion-5704" to be "Succeeded or Failed" Jan 12 23:37:28.794: INFO: Pod "var-expansion-8e89240a-9f81-4f4e-aa92-68e1d70181a3": Phase="Pending", Reason="", readiness=false. Elapsed: 66.837903ms Jan 12 23:37:30.798: INFO: Pod "var-expansion-8e89240a-9f81-4f4e-aa92-68e1d70181a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070919144s Jan 12 23:37:32.802: INFO: Pod "var-expansion-8e89240a-9f81-4f4e-aa92-68e1d70181a3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.075129057s STEP: Saw pod success Jan 12 23:37:32.802: INFO: Pod "var-expansion-8e89240a-9f81-4f4e-aa92-68e1d70181a3" satisfied condition "Succeeded or Failed" Jan 12 23:37:32.805: INFO: Trying to get logs from node leguer-worker2 pod var-expansion-8e89240a-9f81-4f4e-aa92-68e1d70181a3 container dapi-container: STEP: delete the pod Jan 12 23:37:33.030: INFO: Waiting for pod var-expansion-8e89240a-9f81-4f4e-aa92-68e1d70181a3 to disappear Jan 12 23:37:33.105: INFO: Pod var-expansion-8e89240a-9f81-4f4e-aa92-68e1d70181a3 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:37:33.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5704" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":309,"completed":156,"skipped":2533,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:37:33.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 [It] should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Jan 12 23:37:33.294: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:37:33.300: INFO: Number of nodes with available pods: 0 Jan 12 23:37:33.300: INFO: Node leguer-worker is running more than one daemon pod Jan 12 23:37:34.307: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:37:34.310: INFO: Number of nodes with available pods: 0 Jan 12 23:37:34.311: INFO: Node leguer-worker is running more than one daemon pod Jan 12 23:37:35.306: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:37:35.310: INFO: Number of nodes with available pods: 0 Jan 12 23:37:35.310: INFO: Node leguer-worker is running more than one daemon pod Jan 12 23:37:36.382: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:37:36.386: INFO: Number of nodes with available pods: 0 Jan 12 23:37:36.386: INFO: Node leguer-worker is running more than one daemon pod Jan 12 23:37:37.316: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:37:37.320: INFO: Number of nodes with available pods: 0 Jan 12 23:37:37.320: INFO: Node leguer-worker is running more than one daemon pod Jan 12 23:37:38.310: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:37:38.320: INFO: Number of nodes with available pods: 1 Jan 12 23:37:38.320: INFO: Node leguer-worker is running more than one daemon pod Jan 12 23:37:39.303: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:37:39.305: INFO: Number of nodes with available pods: 2 Jan 12 23:37:39.305: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Jan 12 23:37:39.382: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:37:39.386: INFO: Number of nodes with available pods: 1 Jan 12 23:37:39.386: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:37:40.392: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:37:40.395: INFO: Number of nodes with available pods: 1 Jan 12 23:37:40.395: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:37:41.391: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:37:41.394: INFO: Number of nodes with available pods: 1 Jan 12 23:37:41.394: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:37:42.392: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:37:42.396: INFO: Number of nodes with available pods: 1 Jan 12 23:37:42.396: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:37:43.392: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:37:43.395: INFO: Number of nodes with available pods: 1 Jan 12 23:37:43.395: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:37:44.392: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:37:44.396: INFO: Number of nodes with available pods: 1 Jan 12 23:37:44.396: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:37:45.390: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:37:45.394: INFO: Number of nodes with available pods: 1 Jan 12 23:37:45.394: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:37:46.392: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:37:46.395: INFO: Number of nodes with available pods: 1 Jan 12 23:37:46.395: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:37:47.405: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:37:47.408: INFO: Number of nodes with available pods: 1 Jan 12 23:37:47.408: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:37:48.391: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:37:48.394: INFO: Number of nodes with available pods: 1 Jan 12 23:37:48.394: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:37:49.391: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:37:49.395: INFO: Number of nodes with available pods: 1 Jan 12 23:37:49.395: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:37:50.392: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:37:50.395: INFO: Number of nodes with available pods: 1 Jan 12 23:37:50.395: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:37:51.390: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:37:51.394: INFO: Number of nodes with available pods: 1 Jan 12 23:37:51.394: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:37:52.392: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:37:52.395: INFO: Number of nodes with available pods: 1 Jan 12 23:37:52.395: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:37:53.392: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:37:53.396: INFO: Number of nodes with available pods: 1 Jan 12 23:37:53.396: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:37:54.392: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:37:54.396: INFO: Number of nodes with available pods: 1 Jan 12 23:37:54.396: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:37:55.391: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:37:55.395: INFO: Number of nodes with available pods: 1 Jan 12 23:37:55.395: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:37:56.392: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:37:56.396: INFO: Number of nodes with available pods: 1 Jan 12 23:37:56.396: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:37:57.391: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:37:57.410: INFO: Number of nodes with available pods: 1 Jan 12 23:37:57.410: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:37:58.391: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:37:58.394: INFO: Number of nodes with available pods: 1 Jan 12 23:37:58.394: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:37:59.393: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 
23:37:59.402: INFO: Number of nodes with available pods: 1 Jan 12 23:37:59.402: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:38:00.392: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:38:00.396: INFO: Number of nodes with available pods: 1 Jan 12 23:38:00.396: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:38:01.390: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:38:01.394: INFO: Number of nodes with available pods: 1 Jan 12 23:38:01.394: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:38:02.392: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:38:02.396: INFO: Number of nodes with available pods: 1 Jan 12 23:38:02.396: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:38:03.392: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:38:03.396: INFO: Number of nodes with available pods: 1 Jan 12 23:38:03.396: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:38:04.390: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:38:04.394: INFO: Number of nodes with available pods: 1 Jan 12 23:38:04.394: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:38:05.391: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:38:05.395: INFO: Number of nodes with available pods: 1 Jan 12 23:38:05.395: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:38:06.390: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:38:06.393: INFO: Number of nodes with available pods: 1 Jan 12 23:38:06.393: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:38:07.417: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:38:07.421: INFO: Number of nodes with available pods: 1 Jan 12 23:38:07.421: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:38:08.392: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:38:08.395: INFO: Number of nodes with available pods: 1 Jan 12 23:38:08.395: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:38:09.393: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:38:09.397: INFO: Number of nodes with available pods: 1 Jan 12 23:38:09.397: INFO: Node leguer-worker2 is 
running more than one daemon pod Jan 12 23:38:10.392: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:38:10.396: INFO: Number of nodes with available pods: 1 Jan 12 23:38:10.396: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:38:11.390: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:38:11.393: INFO: Number of nodes with available pods: 1 Jan 12 23:38:11.393: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:38:12.391: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:38:12.395: INFO: Number of nodes with available pods: 1 Jan 12 23:38:12.395: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:38:13.392: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:38:13.397: INFO: Number of nodes with available pods: 1 Jan 12 23:38:13.397: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:38:14.392: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:38:14.396: INFO: Number of nodes with available pods: 1 Jan 12 23:38:14.396: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:38:15.392: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:38:15.398: INFO: Number of nodes with available pods: 1 Jan 12 23:38:15.398: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:38:16.391: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:38:16.395: INFO: Number of nodes with available pods: 1 Jan 12 23:38:16.395: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:38:17.391: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:38:17.395: INFO: Number of nodes with available pods: 1 Jan 12 23:38:17.395: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:38:18.392: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:38:18.396: INFO: Number of nodes with available pods: 1 Jan 12 23:38:18.396: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:38:19.391: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:38:19.394: INFO: Number of nodes with available pods: 1 Jan 12 23:38:19.394: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:38:20.406: INFO: DaemonSet pods can't tolerate node 
leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:38:20.410: INFO: Number of nodes with available pods: 1 Jan 12 23:38:20.410: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:38:21.390: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:38:21.393: INFO: Number of nodes with available pods: 1 Jan 12 23:38:21.393: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:38:22.391: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:38:22.395: INFO: Number of nodes with available pods: 1 Jan 12 23:38:22.395: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:38:23.392: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:38:23.395: INFO: Number of nodes with available pods: 1 Jan 12 23:38:23.396: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:38:24.391: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:38:24.394: INFO: Number of nodes with available pods: 1 Jan 12 23:38:24.394: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:38:25.392: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:38:25.395: INFO: Number of nodes with available pods: 1 Jan 12 23:38:25.395: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:38:26.391: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:38:26.394: INFO: Number of nodes with available pods: 1 Jan 12 23:38:26.394: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:38:27.391: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:38:27.395: INFO: Number of nodes with available pods: 1 Jan 12 23:38:27.395: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:38:28.406: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:38:28.410: INFO: Number of nodes with available pods: 1 Jan 12 23:38:28.410: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:38:29.391: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:38:29.395: INFO: Number of nodes with available pods: 1 Jan 12 23:38:29.395: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:38:30.391: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip 
checking this node Jan 12 23:38:30.395: INFO: Number of nodes with available pods: 1 Jan 12 23:38:30.395: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:38:31.390: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:38:31.393: INFO: Number of nodes with available pods: 1 Jan 12 23:38:31.393: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:38:32.390: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:38:32.393: INFO: Number of nodes with available pods: 1 Jan 12 23:38:32.393: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:38:33.392: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:38:33.396: INFO: Number of nodes with available pods: 1 Jan 12 23:38:33.396: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:38:34.393: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:38:34.397: INFO: Number of nodes with available pods: 1 Jan 12 23:38:34.397: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:38:35.393: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:38:35.397: INFO: Number of nodes with available pods: 1 Jan 12 23:38:35.397: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:38:36.409: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:38:36.412: INFO: Number of nodes with available pods: 1 Jan 12 23:38:36.413: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:38:37.399: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:38:37.402: INFO: Number of nodes with available pods: 1 Jan 12 23:38:37.402: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:38:38.417: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:38:38.428: INFO: Number of nodes with available pods: 1 Jan 12 23:38:38.428: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:38:39.392: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:38:39.396: INFO: Number of nodes with available pods: 1 Jan 12 23:38:39.396: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:38:40.392: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:38:40.396: INFO: Number of nodes with available pods: 1 Jan 12 23:38:40.396: INFO: 
Node leguer-worker2 is running more than one daemon pod Jan 12 23:38:41.535: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:38:41.539: INFO: Number of nodes with available pods: 1 Jan 12 23:38:41.539: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:38:42.391: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:38:42.394: INFO: Number of nodes with available pods: 1 Jan 12 23:38:42.394: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:38:43.393: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:38:43.397: INFO: Number of nodes with available pods: 1 Jan 12 23:38:43.397: INFO: Node leguer-worker2 is running more than one daemon pod Jan 12 23:38:44.391: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 12 23:38:44.395: INFO: Number of nodes with available pods: 2 Jan 12 23:38:44.395: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-507, will wait for the garbage collector to delete the pods Jan 12 23:38:44.459: INFO: Deleting DaemonSet.extensions daemon-set took: 7.83315ms Jan 12 23:38:45.059: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.303434ms Jan 12 23:39:50.162: INFO: Number of nodes with available pods: 0 Jan 12 23:39:50.162: INFO: Number of running nodes: 0, number of available pods: 0 Jan 12 23:39:50.165: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"429105"},"items":null} Jan 12 23:39:50.168: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"429105"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:39:50.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-507" for this suite. • [SLOW TEST:137.073 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":309,"completed":157,"skipped":2565,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:39:50.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:40:03.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7713" for this suite. • [SLOW TEST:13.270 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":309,"completed":158,"skipped":2568,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:40:03.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jan 12 23:40:04.102: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Jan 12 23:40:06.115: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746091604, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746091604, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746091604, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746091604, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-7d6697c5b7\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 12 23:40:09.185: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 12 23:40:09.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:40:10.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-6405" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.068 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":309,"completed":159,"skipped":2574,"failed":0} SSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:40:10.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:40:10.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6839" for this suite. 
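[Note] The kubelet test above schedules a busybox pod whose command always fails and then verifies the pod can still be deleted. A minimal sketch of that flow with client-go follows (namespace, pod name, and image tag are illustrative assumptions, not values taken from this run):

package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err)
    }
    ctx := context.TODO()
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "bin-false-pod"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "bin-false",
                Image:   "busybox",
                Command: []string{"/bin/false"}, // always exits non-zero
            }},
        },
    }
    if _, err := client.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
    // Even though the container never succeeds, deleting the pod must work.
    if err := client.CoreV1().Pods("default").Delete(ctx, "bin-false-pod", metav1.DeleteOptions{}); err != nil {
        panic(err)
    }
}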
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":309,"completed":160,"skipped":2578,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:40:10.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 12 23:40:10.819: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jan 12 23:40:14.374: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5786 --namespace=crd-publish-openapi-5786 create -f -' Jan 12 23:40:18.813: INFO: stderr: "" Jan 12 23:40:18.813: INFO: stdout: "e2e-test-crd-publish-openapi-8369-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jan 12 23:40:18.813: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5786 --namespace=crd-publish-openapi-5786 delete e2e-test-crd-publish-openapi-8369-crds test-cr' Jan 12 23:40:18.943: INFO: stderr: "" Jan 12 23:40:18.943: INFO: stdout: "e2e-test-crd-publish-openapi-8369-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Jan 12 23:40:18.943: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5786 --namespace=crd-publish-openapi-5786 apply -f -' Jan 12 23:40:19.294: INFO: stderr: "" Jan 12 23:40:19.294: INFO: stdout: "e2e-test-crd-publish-openapi-8369-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jan 12 23:40:19.294: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5786 --namespace=crd-publish-openapi-5786 delete e2e-test-crd-publish-openapi-8369-crds test-cr' Jan 12 23:40:19.433: INFO: stderr: "" Jan 12 23:40:19.433: INFO: stdout: "e2e-test-crd-publish-openapi-8369-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Jan 12 23:40:19.433: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5786 explain e2e-test-crd-publish-openapi-8369-crds' Jan 12 23:40:19.710: INFO: stderr: "" Jan 12 23:40:19.710: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8369-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:40:23.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5786" for this suite. • [SLOW TEST:12.618 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":309,"completed":161,"skipped":2606,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:40:23.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-1035 [It] should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating statefulset ss in namespace statefulset-1035 Jan 12 23:40:23.423: INFO: Found 0 stateful pods, waiting for 1 Jan 12 23:40:33.428: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jan 12 23:40:33.442: INFO: Deleting all statefulset in ns statefulset-1035 Jan 12 23:40:33.448: INFO: Scaling statefulset ss to 0 Jan 12 23:40:53.577: INFO: Waiting for statefulset status.replicas updated to 0 Jan 12 23:40:53.580: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:40:53.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1035" for this suite. 
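[Note] The "getting scale subresource" / "updating a scale subresource" steps above exercise the StatefulSet /scale endpoint rather than updating the StatefulSet object directly. A minimal client-go sketch of that flow follows (namespace and name are placeholders; "ss" simply mirrors the name used in this run):

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err)
    }
    ctx := context.TODO()
    // Read the scale subresource (an autoscaling/v1 Scale object).
    scale, err := client.AppsV1().StatefulSets("default").GetScale(ctx, "ss", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Println("current replicas:", scale.Spec.Replicas)
    // Change spec.replicas through the subresource only; the StatefulSet
    // spec is updated by the apiserver on our behalf.
    scale.Spec.Replicas = 2
    if _, err := client.AppsV1().StatefulSets("default").UpdateScale(ctx, "ss", scale, metav1.UpdateOptions{}); err != nil {
        panic(err)
    }
}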
• [SLOW TEST:30.303 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":309,"completed":162,"skipped":2619,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Events should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:40:53.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Create set of events Jan 12 23:40:53.692: INFO: created test-event-1 Jan 12 23:40:53.707: INFO: created test-event-2 Jan 12 23:40:53.712: INFO: created test-event-3 STEP: get a list of Events with a label in the current namespace STEP: delete collection of events Jan 12 23:40:53.718: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity Jan 12 23:40:53.739: INFO: requesting list of events to confirm quantity [AfterEach] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:40:53.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-4343" for this suite. 
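[Note] The Events test above creates a labelled set of events and removes them with a single DeleteCollection call filtered by that label. A minimal sketch of the same pattern follows (the label key/value, namespace, and involved-object reference are assumptions for illustration, not the test's actual values):

package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err)
    }
    ctx := context.TODO()
    ns := "default"
    for _, name := range []string{"test-event-1", "test-event-2", "test-event-3"} {
        ev := &corev1.Event{
            ObjectMeta:     metav1.ObjectMeta{Name: name, Labels: map[string]string{"testevent-set": "true"}},
            InvolvedObject: corev1.ObjectReference{Kind: "Pod", Namespace: ns, Name: "placeholder"},
            Reason:         "Testing",
            Message:        "created for the DeleteCollection example",
            Type:           corev1.EventTypeNormal,
        }
        if _, err := client.CoreV1().Events(ns).Create(ctx, ev, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }
    // One call deletes every event carrying the label.
    err = client.CoreV1().Events(ns).DeleteCollection(ctx, metav1.DeleteOptions{},
        metav1.ListOptions{LabelSelector: "testevent-set=true"})
    if err != nil {
        panic(err)
    }
}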
•{"msg":"PASSED [sig-api-machinery] Events should delete a collection of events [Conformance]","total":309,"completed":163,"skipped":2664,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:40:53.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:299 [It] should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a replication controller Jan 12 23:40:53.842: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1435 create -f -' Jan 12 23:40:54.236: INFO: stderr: "" Jan 12 23:40:54.236: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 12 23:40:54.236: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1435 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 12 23:40:54.366: INFO: stderr: "" Jan 12 23:40:54.366: INFO: stdout: "update-demo-nautilus-sklqs update-demo-nautilus-sqd7b " Jan 12 23:40:54.366: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1435 get pods update-demo-nautilus-sklqs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 12 23:40:54.467: INFO: stderr: "" Jan 12 23:40:54.467: INFO: stdout: "" Jan 12 23:40:54.467: INFO: update-demo-nautilus-sklqs is created but not running Jan 12 23:40:59.468: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1435 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 12 23:40:59.575: INFO: stderr: "" Jan 12 23:40:59.575: INFO: stdout: "update-demo-nautilus-sklqs update-demo-nautilus-sqd7b " Jan 12 23:40:59.575: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1435 get pods update-demo-nautilus-sklqs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 12 23:40:59.687: INFO: stderr: "" Jan 12 23:40:59.687: INFO: stdout: "true" Jan 12 23:40:59.687: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1435 get pods update-demo-nautilus-sklqs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 12 23:40:59.782: INFO: stderr: "" Jan 12 23:40:59.782: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 12 23:40:59.782: INFO: validating pod update-demo-nautilus-sklqs Jan 12 23:40:59.786: INFO: got data: { "image": "nautilus.jpg" } Jan 12 23:40:59.787: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 12 23:40:59.787: INFO: update-demo-nautilus-sklqs is verified up and running Jan 12 23:40:59.787: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1435 get pods update-demo-nautilus-sqd7b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 12 23:40:59.886: INFO: stderr: "" Jan 12 23:40:59.886: INFO: stdout: "true" Jan 12 23:40:59.886: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1435 get pods update-demo-nautilus-sqd7b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 12 23:40:59.980: INFO: stderr: "" Jan 12 23:40:59.980: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 12 23:40:59.980: INFO: validating pod update-demo-nautilus-sqd7b Jan 12 23:40:59.984: INFO: got data: { "image": "nautilus.jpg" } Jan 12 23:40:59.984: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 12 23:40:59.984: INFO: update-demo-nautilus-sqd7b is verified up and running STEP: using delete to clean up resources Jan 12 23:40:59.984: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1435 delete --grace-period=0 --force -f -' Jan 12 23:41:00.086: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 12 23:41:00.086: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 12 23:41:00.086: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1435 get rc,svc -l name=update-demo --no-headers' Jan 12 23:41:00.191: INFO: stderr: "No resources found in kubectl-1435 namespace.\n" Jan 12 23:41:00.191: INFO: stdout: "" Jan 12 23:41:00.191: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1435 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 12 23:41:00.406: INFO: stderr: "" Jan 12 23:41:00.406: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:41:00.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1435" for this suite. • [SLOW TEST:6.665 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:297 should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":309,"completed":164,"skipped":2669,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:41:00.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Jan 12 23:41:00.595: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 12 23:41:00.612: INFO: Waiting for terminating namespaces to be deleted... 
Jan 12 23:41:00.614: INFO: Logging pods the apiserver thinks is on node leguer-worker before test Jan 12 23:41:00.619: INFO: rally-a8f48c6d-3kmika18-pdtzv from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 12 23:41:00.619: INFO: Container rally-a8f48c6d-3kmika18 ready: true, restart count 0 Jan 12 23:41:00.619: INFO: rally-a8f48c6d-3kmika18-pllzg from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 12 23:41:00.619: INFO: Container rally-a8f48c6d-3kmika18 ready: true, restart count 0 Jan 12 23:41:00.619: INFO: rally-a8f48c6d-4cyi45kq-j5tzz from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 12 23:41:00.619: INFO: Container rally-a8f48c6d-4cyi45kq ready: true, restart count 0 Jan 12 23:41:00.620: INFO: rally-a8f48c6d-f3hls6a3-57dwc from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 12 23:41:00.620: INFO: Container rally-a8f48c6d-f3hls6a3 ready: true, restart count 0 Jan 12 23:41:00.620: INFO: rally-a8f48c6d-1y3amfc0-lp8st from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 12 23:41:00.620: INFO: Container rally-a8f48c6d-1y3amfc0 ready: true, restart count 0 Jan 12 23:41:00.620: INFO: rally-a8f48c6d-9pqmjehi-9zwjj from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 12 23:41:00.620: INFO: Container rally-a8f48c6d-9pqmjehi ready: true, restart count 0 Jan 12 23:41:00.620: INFO: chaos-controller-manager-69c479c674-s796v from default started at 2021-01-10 20:58:24 +0000 UTC (1 container statuses recorded) Jan 12 23:41:00.620: INFO: Container chaos-mesh ready: true, restart count 0 Jan 12 23:41:00.620: INFO: chaos-daemon-lv692 from default started at 2021-01-10 20:58:25 +0000 UTC (1 container statuses recorded) Jan 12 23:41:00.620: INFO: Container chaos-daemon ready: true, restart count 0 Jan 12 23:41:00.620: INFO: kindnet-psm25 from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Jan 12 23:41:00.620: INFO: Container kindnet-cni ready: true, restart count 0 Jan 12 23:41:00.620: INFO: kube-proxy-bmbcs from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Jan 12 23:41:00.620: INFO: Container kube-proxy ready: true, restart count 0 Jan 12 23:41:00.620: INFO: update-demo-nautilus-sklqs from kubectl-1435 started at 2021-01-12 23:40:54 +0000 UTC (1 container statuses recorded) Jan 12 23:41:00.620: INFO: Container update-demo ready: true, restart count 0 Jan 12 23:41:00.620: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test Jan 12 23:41:00.626: INFO: rally-a8f48c6d-4cyi45kq-knr4r from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 12 23:41:00.626: INFO: Container rally-a8f48c6d-4cyi45kq ready: true, restart count 0 Jan 12 23:41:00.626: INFO: rally-a8f48c6d-f3hls6a3-dwt8n from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 12 23:41:00.626: INFO: Container rally-a8f48c6d-f3hls6a3 ready: true, restart count 0 Jan 12 23:41:00.626: INFO: rally-a8f48c6d-1y3amfc0-hh9qk from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 12 23:41:00.626: INFO: Container rally-a8f48c6d-1y3amfc0 ready: true, restart count 0 Jan 12 
23:41:00.626: INFO: rally-a8f48c6d-9pqmjehi-85slb from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 12 23:41:00.626: INFO: Container rally-a8f48c6d-9pqmjehi ready: true, restart count 0 Jan 12 23:41:00.626: INFO: rally-a8f48c6d-vnukxqu0-llj24 from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 12 23:41:00.626: INFO: Container rally-a8f48c6d-vnukxqu0 ready: true, restart count 0 Jan 12 23:41:00.626: INFO: rally-a8f48c6d-vnukxqu0-v85kr from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 12 23:41:00.626: INFO: Container rally-a8f48c6d-vnukxqu0 ready: true, restart count 0 Jan 12 23:41:00.626: INFO: chaos-daemon-ffkg7 from default started at 2021-01-10 20:58:25 +0000 UTC (1 container statuses recorded) Jan 12 23:41:00.626: INFO: Container chaos-daemon ready: true, restart count 0 Jan 12 23:41:00.626: INFO: kindnet-8wggd from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Jan 12 23:41:00.626: INFO: Container kindnet-cni ready: true, restart count 0 Jan 12 23:41:00.626: INFO: kube-proxy-29gxg from kube-system started at 2021-01-10 17:38:09 +0000 UTC (1 container statuses recorded) Jan 12 23:41:00.626: INFO: Container kube-proxy ready: true, restart count 0 Jan 12 23:41:00.626: INFO: update-demo-nautilus-sqd7b from kubectl-1435 started at 2021-01-12 23:40:54 +0000 UTC (1 container statuses recorded) Jan 12 23:41:00.626: INFO: Container update-demo ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-1f5db7c3-f112-40ad-b0fe-12a089944e7c 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-1f5db7c3-f112-40ad-b0fe-12a089944e7c off the node leguer-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-1f5db7c3-f112-40ad-b0fe-12a089944e7c [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:41:08.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6158" for this suite. 
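[Note] The NodeSelector steps above (apply a random label to a node, then relaunch the pod with a matching nodeSelector) correspond roughly to the following client-go sketch. The label key/value, namespace, pod name, and image are illustrative assumptions; only the node name leguer-worker comes from this run:

package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err)
    }
    ctx := context.TODO()
    // Put the label on the chosen node with a strategic-merge patch.
    patch := []byte(`{"metadata":{"labels":{"example.com/e2e-label":"42"}}}`)
    if _, err := client.CoreV1().Nodes().Patch(ctx, "leguer-worker", types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
        panic(err)
    }
    // A pod that the scheduler may only place on nodes carrying that label.
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
        Spec: corev1.PodSpec{
            NodeSelector: map[string]string{"example.com/e2e-label": "42"},
            Containers: []corev1.Container{{
                Name:  "pause",
                Image: "k8s.gcr.io/pause:3.2",
            }},
        },
    }
    if _, err := client.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
}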
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 • [SLOW TEST:8.387 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":309,"completed":165,"skipped":2692,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:41:08.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 12 23:41:08.953: INFO: The status of Pod test-webserver-267f0cbc-0c81-494c-a003-d0bc638d4448 is Pending, waiting for it to be Running (with Ready = true) Jan 12 23:41:10.958: INFO: The status of Pod test-webserver-267f0cbc-0c81-494c-a003-d0bc638d4448 is Pending, waiting for it to be Running (with Ready = true) Jan 12 23:41:12.959: INFO: The status of Pod test-webserver-267f0cbc-0c81-494c-a003-d0bc638d4448 is Running (Ready = false) Jan 12 23:41:14.956: INFO: The status of Pod test-webserver-267f0cbc-0c81-494c-a003-d0bc638d4448 is Running (Ready = false) Jan 12 23:41:16.957: INFO: The status of Pod test-webserver-267f0cbc-0c81-494c-a003-d0bc638d4448 is Running (Ready = false) Jan 12 23:41:18.957: INFO: The status of Pod test-webserver-267f0cbc-0c81-494c-a003-d0bc638d4448 is Running (Ready = false) Jan 12 23:41:20.958: INFO: The status of Pod test-webserver-267f0cbc-0c81-494c-a003-d0bc638d4448 is Running (Ready = false) Jan 12 23:41:22.957: INFO: The status of Pod test-webserver-267f0cbc-0c81-494c-a003-d0bc638d4448 is Running (Ready = false) Jan 12 23:41:24.958: INFO: The status of Pod test-webserver-267f0cbc-0c81-494c-a003-d0bc638d4448 is Running (Ready = false) Jan 12 23:41:26.957: INFO: The status of Pod test-webserver-267f0cbc-0c81-494c-a003-d0bc638d4448 is Running (Ready = false) Jan 12 23:41:28.958: INFO: The status of Pod test-webserver-267f0cbc-0c81-494c-a003-d0bc638d4448 is Running (Ready = false) Jan 12 23:41:30.958: INFO: The status of Pod test-webserver-267f0cbc-0c81-494c-a003-d0bc638d4448 is Running (Ready = false) Jan 12 23:41:32.958: INFO: The status of Pod test-webserver-267f0cbc-0c81-494c-a003-d0bc638d4448 is Running (Ready = false) Jan 12 23:41:34.957: INFO: The status of Pod 
test-webserver-267f0cbc-0c81-494c-a003-d0bc638d4448 is Running (Ready = true) Jan 12 23:41:34.960: INFO: Container started at 2021-01-12 23:41:11 +0000 UTC, pod became ready at 2021-01-12 23:41:33 +0000 UTC [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:41:34.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7585" for this suite. • [SLOW TEST:26.167 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":309,"completed":166,"skipped":2703,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:41:34.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Jan 12 23:41:35.158: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9572 e3b22451-3a4a-448e-bc2d-685cec27284f 429703 0 2021-01-12 23:41:35 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-01-12 23:41:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jan 12 23:41:35.159: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9572 e3b22451-3a4a-448e-bc2d-685cec27284f 429704 0 2021-01-12 23:41:35 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-01-12 23:41:35 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 12 23:41:35.159: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9572 e3b22451-3a4a-448e-bc2d-685cec27284f 429705 0 2021-01-12 23:41:35 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-01-12 23:41:35 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Jan 12 23:41:45.195: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9572 e3b22451-3a4a-448e-bc2d-685cec27284f 429739 0 2021-01-12 23:41:35 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-01-12 23:41:35 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 12 23:41:45.195: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9572 e3b22451-3a4a-448e-bc2d-685cec27284f 429740 0 2021-01-12 23:41:35 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-01-12 23:41:35 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 12 23:41:45.195: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9572 e3b22451-3a4a-448e-bc2d-685cec27284f 429741 0 2021-01-12 23:41:35 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-01-12 23:41:35 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:41:45.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9572" for this suite. 
• [SLOW TEST:10.233 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":309,"completed":167,"skipped":2713,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:41:45.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0112 23:41:55.322468 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Jan 12 23:42:57.344: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:42:57.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1264" for this suite. 
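[Note] "Not orphaning" in the garbage-collector test above means the ReplicationController is deleted with a cascading propagation policy, so the pods it owns are collected too. A minimal sketch of that delete call (the RC name and namespace are placeholders):

package main

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err)
    }
    // Background propagation deletes the owner and lets the garbage
    // collector remove the dependent pods; Orphan would leave them behind.
    propagation := metav1.DeletePropagationBackground
    err = client.CoreV1().ReplicationControllers("default").Delete(context.TODO(), "example-rc",
        metav1.DeleteOptions{PropagationPolicy: &propagation})
    if err != nil {
        panic(err)
    }
}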
• [SLOW TEST:72.169 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":309,"completed":168,"skipped":2730,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:42:57.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check if v1 is in available api versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: validating api versions Jan 12 23:42:57.476: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9638 api-versions' Jan 12 23:42:57.681: INFO: stderr: "" Jan 12 23:42:57.682: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npingcap.com/v1alpha1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:42:57.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9638" for this suite. 
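[Note] The api-versions check above runs kubectl; the same information is available programmatically through the discovery client. The sketch below is an alternative illustration (not what the test runs): it lists the advertised API groups and confirms the legacy core group exposes "v1".

package main

import (
    "fmt"

    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err)
    }
    // ServerGroups merges /api and /apis; the legacy core group has an
    // empty name and advertises the "v1" group-version.
    groups, err := client.Discovery().ServerGroups()
    if err != nil {
        panic(err)
    }
    found := false
    for _, g := range groups.Groups {
        for _, v := range g.Versions {
            if v.GroupVersion == "v1" {
                found = true
            }
        }
    }
    fmt.Println("core v1 available:", found)
}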
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":309,"completed":169,"skipped":2732,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:42:57.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 [It] deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 12 23:42:57.776: INFO: Creating deployment "webserver-deployment" Jan 12 23:42:57.780: INFO: Waiting for observed generation 1 Jan 12 23:42:59.878: INFO: Waiting for all required pods to come up Jan 12 23:42:59.919: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Jan 12 23:43:11.985: INFO: Waiting for deployment "webserver-deployment" to complete Jan 12 23:43:11.991: INFO: Updating deployment "webserver-deployment" with a non-existent image Jan 12 23:43:11.999: INFO: Updating deployment webserver-deployment Jan 12 23:43:11.999: INFO: Waiting for observed generation 2 Jan 12 23:43:14.049: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Jan 12 23:43:14.052: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Jan 12 23:43:14.055: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Jan 12 23:43:14.061: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Jan 12 23:43:14.061: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Jan 12 23:43:14.063: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Jan 12 23:43:14.066: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Jan 12 23:43:14.066: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Jan 12 23:43:14.073: INFO: Updating deployment webserver-deployment Jan 12 23:43:14.073: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Jan 12 23:43:14.469: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Jan 12 23:43:14.809: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 Jan 12 23:43:15.277: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-6118 28187bef-b352-4cce-9d21-3e9e74475f66 430188 3 2021-01-12 23:42:57 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-01-12 23:42:57 
+0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-01-12 23:43:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc006ccd108 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2021-01-12 23:43:13 +0000 UTC,LastTransitionTime:2021-01-12 23:42:57 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-01-12 23:43:14 +0000 UTC,LastTransitionTime:2021-01-12 23:43:14 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Jan 12 23:43:15.339: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-6118 4fc3c4f2-01d1-4437-b022-2be6ba93b4ce 430228 3 2021-01-12 23:43:12 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 
28187bef-b352-4cce-9d21-3e9e74475f66 0xc006ccd4e7 0xc006ccd4e8}] [] [{kube-controller-manager Update apps/v1 2021-01-12 23:43:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28187bef-b352-4cce-9d21-3e9e74475f66\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc006ccd568 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 12 23:43:15.339: INFO: All old ReplicaSets of Deployment "webserver-deployment": Jan 12 23:43:15.339: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-dd94f59b7 deployment-6118 ec98c638-8667-4460-8dd8-47aeaf0fdc27 430224 3 2021-01-12 23:42:57 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 28187bef-b352-4cce-9d21-3e9e74475f66 0xc006ccd5c7 0xc006ccd5c8}] [] [{kube-controller-manager Update apps/v1 2021-01-12 23:43:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28187bef-b352-4cce-9d21-3e9e74475f66\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: dd94f59b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc006ccd638 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Jan 12 23:43:15.427: INFO: Pod "webserver-deployment-795d758f88-9swvv" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-9swvv webserver-deployment-795d758f88- deployment-6118 319c6148-d5c1-46a6-a5e8-cd7adef46991 430128 0 2021-01-12 23:43:12 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 4fc3c4f2-01d1-4437-b022-2be6ba93b4ce 0xc006ccda77 0xc006ccda78}] [] [{kube-controller-manager Update v1 2021-01-12 23:43:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4fc3c4f2-01d1-4437-b022-2be6ba93b4ce\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-12 23:43:12 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xlzk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xlzk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xlzk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-01-12 23:43:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2021-01-12 23:43:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 12 23:43:15.427: INFO: Pod "webserver-deployment-795d758f88-bc4cb" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-bc4cb webserver-deployment-795d758f88- deployment-6118 713923a8-f4d9-4536-9b1b-a1165745cfd4 430155 0 2021-01-12 23:43:12 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 4fc3c4f2-01d1-4437-b022-2be6ba93b4ce 0xc006ccdc20 0xc006ccdc21}] [] [{kube-controller-manager Update v1 2021-01-12 23:43:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4fc3c4f2-01d1-4437-b022-2be6ba93b4ce\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-12 23:43:12 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xlzk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xlzk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xlzk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-01-12 23:43:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2021-01-12 23:43:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 12 23:43:15.427: INFO: Pod "webserver-deployment-795d758f88-brhkr" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-brhkr webserver-deployment-795d758f88- deployment-6118 2eebfdfa-949a-42a2-96f3-0c174a9648fe 430190 0 2021-01-12 23:43:14 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 4fc3c4f2-01d1-4437-b022-2be6ba93b4ce 0xc006ccddc0 0xc006ccddc1}] [] [{kube-controller-manager Update v1 2021-01-12 23:43:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4fc3c4f2-01d1-4437-b022-2be6ba93b4ce\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xlzk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xlzk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xlzk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMou
nt:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 12 23:43:15.428: INFO: Pod "webserver-deployment-795d758f88-c6ktt" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-c6ktt webserver-deployment-795d758f88- deployment-6118 d9bd9e19-4245-42e3-a227-a891bc6e8e5b 430161 0 2021-01-12 23:43:12 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 4fc3c4f2-01d1-4437-b022-2be6ba93b4ce 0xc006ccdf00 0xc006ccdf01}] [] [{kube-controller-manager Update v1 2021-01-12 23:43:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4fc3c4f2-01d1-4437-b022-2be6ba93b4ce\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-12 23:43:12 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xlzk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xlzk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xlzk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-01-12 23:43:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2021-01-12 23:43:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 12 23:43:15.428: INFO: Pod "webserver-deployment-795d758f88-jll6l" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-jll6l webserver-deployment-795d758f88- deployment-6118 c7727c06-ecfb-43f3-8d87-59447d0f60b2 430208 0 2021-01-12 23:43:14 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 4fc3c4f2-01d1-4437-b022-2be6ba93b4ce 0xc0039f80a0 0xc0039f80a1}] [] [{kube-controller-manager Update v1 2021-01-12 23:43:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4fc3c4f2-01d1-4437-b022-2be6ba93b4ce\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xlzk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xlzk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xlzk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMou
nt:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 12 23:43:15.428: INFO: Pod "webserver-deployment-795d758f88-ml8mq" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-ml8mq webserver-deployment-795d758f88- deployment-6118 ca0db053-a7a9-4f04-9964-6fb9df0cf946 430207 0 2021-01-12 23:43:14 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 4fc3c4f2-01d1-4437-b022-2be6ba93b4ce 0xc0039f81e0 0xc0039f81e1}] [] [{kube-controller-manager Update v1 2021-01-12 23:43:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4fc3c4f2-01d1-4437-b022-2be6ba93b4ce\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xlzk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xlzk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xlzk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2021-01-12 23:43:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 12 23:43:15.428: INFO: Pod "webserver-deployment-795d758f88-mx8vm" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-mx8vm webserver-deployment-795d758f88- deployment-6118 6b8e83fb-fbbb-4ad3-a2ca-8a43360ee0e4 430152 0 2021-01-12 23:43:12 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 4fc3c4f2-01d1-4437-b022-2be6ba93b4ce 0xc0039f8320 0xc0039f8321}] [] [{kube-controller-manager Update v1 2021-01-12 23:43:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4fc3c4f2-01d1-4437-b022-2be6ba93b4ce\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-12 23:43:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xlzk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xlzk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xlzk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGr
acePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-01-12 23:43:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 12 23:43:15.428: INFO: Pod "webserver-deployment-795d758f88-pgp89" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-pgp89 webserver-deployment-795d758f88- deployment-6118 780b0173-50e0-4367-bd2d-44397aeb7270 430223 0 2021-01-12 23:43:15 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 4fc3c4f2-01d1-4437-b022-2be6ba93b4ce 0xc0039f84c0 0xc0039f84c1}] [] [{kube-controller-manager Update v1 2021-01-12 23:43:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4fc3c4f2-01d1-4437-b022-2be6ba93b4ce\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xlzk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xlzk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xlzk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2021-01-12 23:43:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 12 23:43:15.429: INFO: Pod "webserver-deployment-795d758f88-s5kpd" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-s5kpd webserver-deployment-795d758f88- deployment-6118 fd4f3312-d619-4874-bee8-5555d59c0035 430191 0 2021-01-12 23:43:14 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 4fc3c4f2-01d1-4437-b022-2be6ba93b4ce 0xc0039f8600 0xc0039f8601}] [] [{kube-controller-manager Update v1 2021-01-12 23:43:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4fc3c4f2-01d1-4437-b022-2be6ba93b4ce\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xlzk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xlzk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xlzk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:de
fault-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 12 23:43:15.429: INFO: Pod "webserver-deployment-795d758f88-sqjzs" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-sqjzs webserver-deployment-795d758f88- deployment-6118 23b985b0-7f12-420a-874e-03e274585064 430210 0 2021-01-12 23:43:14 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 4fc3c4f2-01d1-4437-b022-2be6ba93b4ce 0xc0039f8740 0xc0039f8741}] [] [{kube-controller-manager Update v1 2021-01-12 23:43:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4fc3c4f2-01d1-4437-b022-2be6ba93b4ce\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xlzk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xlzk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xlzk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:n
il,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 12 23:43:15.429: INFO: Pod "webserver-deployment-795d758f88-wdrcx" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-wdrcx webserver-deployment-795d758f88- deployment-6118 34570133-40ec-4ada-bead-34a44a80fd03 430225 0 2021-01-12 23:43:14 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 4fc3c4f2-01d1-4437-b022-2be6ba93b4ce 0xc0039f8880 0xc0039f8881}] [] [{kube-controller-manager Update v1 2021-01-12 23:43:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4fc3c4f2-01d1-4437-b022-2be6ba93b4ce\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-12 23:43:15 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xlzk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xlzk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xlzk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-01-12 23:43:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-01-12 23:43:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 12 23:43:15.429: INFO: Pod "webserver-deployment-795d758f88-x7kr7" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-x7kr7 webserver-deployment-795d758f88- deployment-6118 aeca60ed-68ed-4988-95a1-f34e6de68d8f 430134 0 2021-01-12 23:43:12 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 4fc3c4f2-01d1-4437-b022-2be6ba93b4ce 0xc0039f8a20 0xc0039f8a21}] [] [{kube-controller-manager Update v1 2021-01-12 23:43:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4fc3c4f2-01d1-4437-b022-2be6ba93b4ce\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-12 23:43:12 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xlzk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xlzk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xlzk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-01-12 23:43:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-01-12 23:43:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 12 23:43:15.430: INFO: Pod "webserver-deployment-795d758f88-xbnvb" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-xbnvb webserver-deployment-795d758f88- deployment-6118 68a6776a-7b17-476f-9d50-d8a42fe3f67e 430213 0 2021-01-12 23:43:14 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 4fc3c4f2-01d1-4437-b022-2be6ba93b4ce 0xc0039f8bc0 0xc0039f8bc1}] [] [{kube-controller-manager Update v1 2021-01-12 23:43:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4fc3c4f2-01d1-4437-b022-2be6ba93b4ce\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xlzk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xlzk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xlzk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMou
nt:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 12 23:43:15.430: INFO: Pod "webserver-deployment-dd94f59b7-2w7rq" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-2w7rq webserver-deployment-dd94f59b7- deployment-6118 94c8e213-4c8b-4e58-bcaf-a2f62adfec47 430205 0 2021-01-12 23:43:14 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ec98c638-8667-4460-8dd8-47aeaf0fdc27 0xc0039f8d00 0xc0039f8d01}] [] [{kube-controller-manager Update v1 2021-01-12 23:43:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec98c638-8667-4460-8dd8-47aeaf0fdc27\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xlzk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xlzk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xlzk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:
0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 12 23:43:15.430: INFO: Pod "webserver-deployment-dd94f59b7-4rkrb" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-4rkrb webserver-deployment-dd94f59b7- deployment-6118 df1aa042-5229-4f97-b975-692eaafa9936 430206 0 2021-01-12 23:43:14 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ec98c638-8667-4460-8dd8-47aeaf0fdc27 0xc0039f8e30 0xc0039f8e31}] [] [{kube-controller-manager Update v1 2021-01-12 23:43:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec98c638-8667-4460-8dd8-47aeaf0fdc27\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xlzk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xlzk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xlzk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,S
ubdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 12 23:43:15.430: INFO: Pod "webserver-deployment-dd94f59b7-5stdz" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-5stdz webserver-deployment-dd94f59b7- deployment-6118 07529093-f771-472a-b66d-b86053dc71b8 430218 0 2021-01-12 23:43:14 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ec98c638-8667-4460-8dd8-47aeaf0fdc27 0xc0039f8f60 0xc0039f8f61}] [] [{kube-controller-manager Update v1 2021-01-12 23:43:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec98c638-8667-4460-8dd8-47aeaf0fdc27\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xlzk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xlzk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xlzk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:n
il,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 12 23:43:15.430: INFO: Pod "webserver-deployment-dd94f59b7-647p7" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-647p7 webserver-deployment-dd94f59b7- deployment-6118 25a39793-547a-4d51-ac57-d38a645096cb 430216 0 2021-01-12 23:43:14 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ec98c638-8667-4460-8dd8-47aeaf0fdc27 0xc0039f9090 0xc0039f9091}] [] [{kube-controller-manager Update v1 2021-01-12 23:43:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec98c638-8667-4460-8dd8-47aeaf0fdc27\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xlzk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xlzk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xlzk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:
0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 12 23:43:15.430: INFO: Pod "webserver-deployment-dd94f59b7-7jqv2" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-7jqv2 webserver-deployment-dd94f59b7- deployment-6118 1cb0e486-fa40-4233-bef9-11a540c9f459 430100 0 2021-01-12 23:42:59 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ec98c638-8667-4460-8dd8-47aeaf0fdc27 0xc0039f91d0 0xc0039f91d1}] [] [{kube-controller-manager Update v1 2021-01-12 23:42:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec98c638-8667-4460-8dd8-47aeaf0fdc27\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-12 23:43:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.27\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xlzk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xlzk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xlzk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[
]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:42:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:42:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.27,StartTime:2021-01-12 23:42:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-12 23:43:09 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6ba11e1ff4f701bd2ad5fa50f63baf5d40fe9ce8ef5ec7c09ec2f430bb8b568c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.27,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 12 23:43:15.431: INFO: Pod "webserver-deployment-dd94f59b7-chqxf" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-chqxf webserver-deployment-dd94f59b7- deployment-6118 3afaa1b7-5ee7-478d-a5ed-29a740734b32 430182 0 2021-01-12 23:43:14 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ec98c638-8667-4460-8dd8-47aeaf0fdc27 0xc0039f9377 0xc0039f9378}] [] [{kube-controller-manager Update v1 2021-01-12 23:43:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec98c638-8667-4460-8dd8-47aeaf0fdc27\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xlzk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xlzk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xlzk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime
:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 12 23:43:15.431: INFO: Pod "webserver-deployment-dd94f59b7-dkrbj" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-dkrbj webserver-deployment-dd94f59b7- deployment-6118 a2f8d9d8-085a-415a-bba9-7bf6c4967629 430217 0 2021-01-12 23:43:14 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ec98c638-8667-4460-8dd8-47aeaf0fdc27 0xc0039f9720 0xc0039f9721}] [] [{kube-controller-manager Update v1 2021-01-12 23:43:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec98c638-8667-4460-8dd8-47aeaf0fdc27\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xlzk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xlzk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xlzk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,
Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 12 23:43:15.431: INFO: Pod "webserver-deployment-dd94f59b7-dlvnc" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-dlvnc webserver-deployment-dd94f59b7- deployment-6118 498dd2d4-8561-454d-8819-483c3fcd4a1c 430214 0 2021-01-12 23:43:14 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ec98c638-8667-4460-8dd8-47aeaf0fdc27 0xc0039f9850 0xc0039f9851}] [] [{kube-controller-manager Update v1 2021-01-12 23:43:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec98c638-8667-4460-8dd8-47aeaf0fdc27\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-12 23:43:15 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xlzk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xlzk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xlzk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:14 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2021-01-12 23:43:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 12 23:43:15.431: INFO: Pod "webserver-deployment-dd94f59b7-fwbcm" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-fwbcm webserver-deployment-dd94f59b7- deployment-6118 863a3abc-e390-4ad7-b4d0-b5d5729748b0 430087 0 2021-01-12 23:42:58 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ec98c638-8667-4460-8dd8-47aeaf0fdc27 0xc0039f9ba7 0xc0039f9ba8}] [] [{kube-controller-manager Update v1 2021-01-12 23:42:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec98c638-8667-4460-8dd8-47aeaf0fdc27\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-12 23:43:09 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.80\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xlzk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xlzk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xlzk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:42:59 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:42:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.1.80,StartTime:2021-01-12 23:42:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-12 23:43:09 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://816654efbdcf64c700230e13fc62e8dde94739a1cec4d3d031ce7470fdcd4bae,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.80,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 12 23:43:15.432: INFO: Pod "webserver-deployment-dd94f59b7-jc7sg" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-jc7sg webserver-deployment-dd94f59b7- deployment-6118 1c782b6c-f88c-4a04-a771-84f5af56f81c 430215 0 2021-01-12 23:43:14 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ec98c638-8667-4460-8dd8-47aeaf0fdc27 0xc0039f9d57 0xc0039f9d58}] [] [{kube-controller-manager Update v1 2021-01-12 23:43:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec98c638-8667-4460-8dd8-47aeaf0fdc27\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xlzk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xlzk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xlzk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime
:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 12 23:43:15.432: INFO: Pod "webserver-deployment-dd94f59b7-jg6qb" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-jg6qb webserver-deployment-dd94f59b7- deployment-6118 2e319be4-bb0e-4a76-931f-9d2bb5fa7024 430078 0 2021-01-12 23:42:58 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ec98c638-8667-4460-8dd8-47aeaf0fdc27 0xc0039f9e80 0xc0039f9e81}] [] [{kube-controller-manager Update v1 2021-01-12 23:42:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec98c638-8667-4460-8dd8-47aeaf0fdc27\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-12 23:43:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.26\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xlzk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xlzk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xlzk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:
[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:42:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:42:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.26,StartTime:2021-01-12 23:42:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-12 23:43:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://3dde9fa61948bdba1bd83561ac067feb232e7b9457e96e7963b2ecbb98a37840,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.26,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 12 23:43:15.432: INFO: Pod "webserver-deployment-dd94f59b7-nhfjp" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-nhfjp webserver-deployment-dd94f59b7- deployment-6118 637c1fa0-a7d1-47cb-9168-ca77d8e61de0 430202 0 2021-01-12 23:43:14 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ec98c638-8667-4460-8dd8-47aeaf0fdc27 0xc00031c057 0xc00031c058}] [] [{kube-controller-manager Update v1 2021-01-12 23:43:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec98c638-8667-4460-8dd8-47aeaf0fdc27\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xlzk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xlzk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xlzk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:
0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 12 23:43:15.432: INFO: Pod "webserver-deployment-dd94f59b7-q5d57" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-q5d57 webserver-deployment-dd94f59b7- deployment-6118 f7d9d5a6-3df1-490d-8158-3c69fa7bf78f 430097 0 2021-01-12 23:42:59 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ec98c638-8667-4460-8dd8-47aeaf0fdc27 0xc00031c1a0 0xc00031c1a1}] [] [{kube-controller-manager Update v1 2021-01-12 23:42:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec98c638-8667-4460-8dd8-47aeaf0fdc27\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-12 23:43:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.28\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xlzk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xlzk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xlzk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[
]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:42:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:42:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.28,StartTime:2021-01-12 23:42:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-12 23:43:09 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://319e7d9227044275e20f989560039e9687f80675d8e87cbe048ceec794722169,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.28,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 12 23:43:15.432: INFO: Pod "webserver-deployment-dd94f59b7-qfgzp" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-qfgzp webserver-deployment-dd94f59b7- deployment-6118 e88092e7-a905-4967-82db-7b630c91d5d3 430068 0 2021-01-12 23:42:58 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ec98c638-8667-4460-8dd8-47aeaf0fdc27 0xc00031c437 0xc00031c438}] [] [{kube-controller-manager Update v1 2021-01-12 23:42:58 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec98c638-8667-4460-8dd8-47aeaf0fdc27\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-12 23:43:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.78\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xlzk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xlzk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xlzk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{K
ey:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:42:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:42:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.1.78,StartTime:2021-01-12 23:42:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-12 23:43:07 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://85028810338b0a1291c139a6771674f2a702e0599c49508dc7d456d6d48cce44,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.78,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 12 23:43:15.433: INFO: Pod "webserver-deployment-dd94f59b7-r7hzz" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-r7hzz webserver-deployment-dd94f59b7- deployment-6118 e51d4abb-40a2-4193-929c-7fceb38d94b3 430032 0 2021-01-12 23:42:57 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ec98c638-8667-4460-8dd8-47aeaf0fdc27 0xc00031c6d7 0xc00031c6d8}] [] [{kube-controller-manager Update v1 2021-01-12 23:42:57 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec98c638-8667-4460-8dd8-47aeaf0fdc27\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-12 23:43:04 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.77\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xlzk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xlzk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xlzk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:42:58 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:42:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.1.77,StartTime:2021-01-12 23:42:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-12 23:43:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c90abd443179593bb03c22c98e6702f24e476b0233ca8239e12f491b1cd78d12,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.77,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 12 23:43:15.433: INFO: Pod "webserver-deployment-dd94f59b7-rsxrz" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-rsxrz webserver-deployment-dd94f59b7- deployment-6118 3afc3924-4401-4c75-85e7-1587c029edfb 430233 0 2021-01-12 23:43:14 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ec98c638-8667-4460-8dd8-47aeaf0fdc27 0xc00031c8c7 0xc00031c8c8}] [] [{kube-controller-manager Update v1 2021-01-12 23:43:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec98c638-8667-4460-8dd8-47aeaf0fdc27\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-12 23:43:15 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xlzk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xlzk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xlzk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:14 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2021-01-12 23:43:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 12 23:43:15.433: INFO: Pod "webserver-deployment-dd94f59b7-sb9xt" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-sb9xt webserver-deployment-dd94f59b7- deployment-6118 648d2485-2b34-4885-a9b5-7fd28bf98043 430055 0 2021-01-12 23:42:57 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ec98c638-8667-4460-8dd8-47aeaf0fdc27 0xc00031cb17 0xc00031cb18}] [] [{kube-controller-manager Update v1 2021-01-12 23:42:57 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec98c638-8667-4460-8dd8-47aeaf0fdc27\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-12 23:43:07 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.24\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xlzk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xlzk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xlzk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:42:59 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:42:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.24,StartTime:2021-01-12 23:42:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-12 23:43:06 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a1f26d9c39c72b9086f9e8d1f34e20217881f7bb0782d2a252e7e3563329eba0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.24,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 12 23:43:15.433: INFO: Pod "webserver-deployment-dd94f59b7-t8cbs" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-t8cbs webserver-deployment-dd94f59b7- deployment-6118 df1fec7a-a693-485a-80dd-bcc785f18b46 430051 0 2021-01-12 23:42:57 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ec98c638-8667-4460-8dd8-47aeaf0fdc27 0xc00031ce17 0xc00031ce18}] [] [{kube-controller-manager Update v1 2021-01-12 23:42:57 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec98c638-8667-4460-8dd8-47aeaf0fdc27\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-12 23:43:07 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.25\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xlzk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xlzk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xlzk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:42:59 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:42:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.25,StartTime:2021-01-12 23:42:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-12 23:43:06 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://8815270a14142c7f2d4310cab74ccbbb2c15fee6023e50d28dd97c21f4d5ef11,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.25,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 12 23:43:15.433: INFO: Pod "webserver-deployment-dd94f59b7-w4t9w" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-w4t9w webserver-deployment-dd94f59b7- deployment-6118 dcc091de-21b6-4cb9-96c7-1e1ee7f95652 430201 0 2021-01-12 23:43:14 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ec98c638-8667-4460-8dd8-47aeaf0fdc27 0xc00031cfc7 0xc00031cfc8}] [] [{kube-controller-manager Update v1 2021-01-12 23:43:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec98c638-8667-4460-8dd8-47aeaf0fdc27\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xlzk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xlzk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xlzk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime
:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 12 23:43:15.434: INFO: Pod "webserver-deployment-dd94f59b7-xf86v" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-xf86v webserver-deployment-dd94f59b7- deployment-6118 8170695a-a805-408d-830c-dfeebe8f592b 430235 0 2021-01-12 23:43:14 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ec98c638-8667-4460-8dd8-47aeaf0fdc27 0xc00031d1b0 0xc00031d1b1}] [] [{kube-controller-manager Update v1 2021-01-12 23:43:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec98c638-8667-4460-8dd8-47aeaf0fdc27\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-12 23:43:15 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xlzk9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xlzk9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xlzk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:ni
l,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:43:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-01-12 23:43:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:43:15.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6118" for this suite. 
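(Editor's note, for readers following the dump above: the "is available" / "is not available" lines roughly correspond to checking each pod's Phase and Ready condition. Below is a minimal client-go sketch of that check, not the framework's actual helper; the kubeconfig path, namespace "deployment-6118" and label selector name=httpd are taken from this run purely for illustration.)

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True, which is
// roughly what the dump above means by "available".
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Illustrative values; an e2e run generates its own namespace and labels.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	pods, err := client.CoreV1().Pods("deployment-6118").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "name=httpd"})
	if err != nil {
		panic(err)
	}
	for i := range pods.Items {
		p := &pods.Items[i]
		state := "not available"
		if p.Status.Phase == corev1.PodRunning && isPodReady(p) {
			state = "available"
		}
		fmt.Printf("Pod %q is %s\n", p.Name, state)
	}
}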
• [SLOW TEST:17.961 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":309,"completed":170,"skipped":2818,"failed":0} SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:43:15.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod pod-subpath-test-configmap-l96r STEP: Creating a pod to test atomic-volume-subpath Jan 12 23:43:15.959: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-l96r" in namespace "subpath-1984" to be "Succeeded or Failed" Jan 12 23:43:16.061: INFO: Pod "pod-subpath-test-configmap-l96r": Phase="Pending", Reason="", readiness=false. Elapsed: 101.333805ms Jan 12 23:43:18.224: INFO: Pod "pod-subpath-test-configmap-l96r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.264759119s Jan 12 23:43:20.414: INFO: Pod "pod-subpath-test-configmap-l96r": Phase="Pending", Reason="", readiness=false. Elapsed: 4.454790488s Jan 12 23:43:22.948: INFO: Pod "pod-subpath-test-configmap-l96r": Phase="Pending", Reason="", readiness=false. Elapsed: 6.98835412s Jan 12 23:43:25.134: INFO: Pod "pod-subpath-test-configmap-l96r": Phase="Pending", Reason="", readiness=false. Elapsed: 9.174542847s Jan 12 23:43:27.182: INFO: Pod "pod-subpath-test-configmap-l96r": Phase="Pending", Reason="", readiness=false. Elapsed: 11.222493645s Jan 12 23:43:29.359: INFO: Pod "pod-subpath-test-configmap-l96r": Phase="Pending", Reason="", readiness=false. Elapsed: 13.399798404s Jan 12 23:43:31.815: INFO: Pod "pod-subpath-test-configmap-l96r": Phase="Pending", Reason="", readiness=false. Elapsed: 15.856008196s Jan 12 23:43:33.820: INFO: Pod "pod-subpath-test-configmap-l96r": Phase="Running", Reason="", readiness=true. Elapsed: 17.860358614s Jan 12 23:43:35.924: INFO: Pod "pod-subpath-test-configmap-l96r": Phase="Running", Reason="", readiness=true. Elapsed: 19.964033699s Jan 12 23:43:37.965: INFO: Pod "pod-subpath-test-configmap-l96r": Phase="Running", Reason="", readiness=true. Elapsed: 22.005602778s Jan 12 23:43:40.058: INFO: Pod "pod-subpath-test-configmap-l96r": Phase="Running", Reason="", readiness=true. Elapsed: 24.098136645s Jan 12 23:43:42.162: INFO: Pod "pod-subpath-test-configmap-l96r": Phase="Running", Reason="", readiness=true. 
Elapsed: 26.202902224s Jan 12 23:43:44.373: INFO: Pod "pod-subpath-test-configmap-l96r": Phase="Running", Reason="", readiness=true. Elapsed: 28.413520989s Jan 12 23:43:46.834: INFO: Pod "pod-subpath-test-configmap-l96r": Phase="Running", Reason="", readiness=true. Elapsed: 30.874128746s Jan 12 23:43:48.837: INFO: Pod "pod-subpath-test-configmap-l96r": Phase="Running", Reason="", readiness=true. Elapsed: 32.877912541s Jan 12 23:43:50.841: INFO: Pod "pod-subpath-test-configmap-l96r": Phase="Running", Reason="", readiness=true. Elapsed: 34.881696352s Jan 12 23:43:52.845: INFO: Pod "pod-subpath-test-configmap-l96r": Phase="Running", Reason="", readiness=true. Elapsed: 36.88529884s Jan 12 23:43:54.849: INFO: Pod "pod-subpath-test-configmap-l96r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.889704043s STEP: Saw pod success Jan 12 23:43:54.849: INFO: Pod "pod-subpath-test-configmap-l96r" satisfied condition "Succeeded or Failed" Jan 12 23:43:54.851: INFO: Trying to get logs from node leguer-worker2 pod pod-subpath-test-configmap-l96r container test-container-subpath-configmap-l96r: STEP: delete the pod Jan 12 23:43:55.216: INFO: Waiting for pod pod-subpath-test-configmap-l96r to disappear Jan 12 23:43:55.225: INFO: Pod pod-subpath-test-configmap-l96r no longer exists STEP: Deleting pod pod-subpath-test-configmap-l96r Jan 12 23:43:55.225: INFO: Deleting pod "pod-subpath-test-configmap-l96r" in namespace "subpath-1984" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:43:55.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1984" for this suite. • [SLOW TEST:39.577 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":309,"completed":171,"skipped":2824,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:43:55.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace 
statefulset-6885 [It] Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-6885 STEP: Creating statefulset with conflicting port in namespace statefulset-6885 STEP: Waiting until pod test-pod starts running in namespace statefulset-6885 STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-6885 Jan 12 23:44:01.410: INFO: Observed stateful pod in namespace: statefulset-6885, name: ss-0, uid: b0f22478-7611-42ca-9a02-8413f11d4e1e, status phase: Pending. Waiting for statefulset controller to delete. Jan 12 23:44:01.435: INFO: Observed stateful pod in namespace: statefulset-6885, name: ss-0, uid: b0f22478-7611-42ca-9a02-8413f11d4e1e, status phase: Failed. Waiting for statefulset controller to delete. Jan 12 23:44:01.457: INFO: Observed stateful pod in namespace: statefulset-6885, name: ss-0, uid: b0f22478-7611-42ca-9a02-8413f11d4e1e, status phase: Failed. Waiting for statefulset controller to delete. Jan 12 23:44:01.492: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-6885 STEP: Removing pod with conflicting port in namespace statefulset-6885 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-6885 and is in the running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jan 12 23:44:07.563: INFO: Deleting all statefulsets in ns statefulset-6885 Jan 12 23:44:07.566: INFO: Scaling statefulset ss to 0 Jan 12 23:44:27.604: INFO: Waiting for statefulset status.replicas updated to 0 Jan 12 23:44:27.608: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:44:27.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6885" for this suite. 
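For context on what the StatefulSet test above sets up: a plain pod first claims a host port on a chosen node, so the stateful pod ss-0, which requests the same node and host port, keeps failing and being recreated by the controller until the conflicting pod is removed. A minimal sketch of the two conflicting specs, assuming a hypothetical port 8080 (the node name leguer-worker appears earlier in this run; the real test picks its own node and port):

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical values; the e2e test picks its own schedulable node and port.
	const node = "leguer-worker"
	const hostPort = 8080
	labels := map[string]string{"app": "ss-demo"}
	one := int32(1)

	// A plain pod that claims the host port on the node first.
	conflicting := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
		Spec: corev1.PodSpec{
			NodeName: node,
			Containers: []corev1.Container{{
				Name:  "web",
				Image: "docker.io/library/httpd:2.4.38-alpine",
				Ports: []corev1.ContainerPort{{ContainerPort: 80, HostPort: hostPort}},
			}},
		},
	}

	// A StatefulSet whose ss-0 pod wants the same node and host port, so it keeps
	// failing and being recreated until the conflicting pod above is deleted.
	ss := &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &one,
			ServiceName: "test",
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					NodeName: node,
					Containers: []corev1.Container{{
						Name:  "web",
						Image: "docker.io/library/httpd:2.4.38-alpine",
						Ports: []corev1.ContainerPort{{ContainerPort: 80, HostPort: hostPort}},
					}},
				},
			},
		},
	}
	fmt.Println(conflicting.Name, ss.Name)
}
```

Deleting test-pod frees the host port, after which ss-0 schedules normally, which is the running state the test waits for before tearing down.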
• [SLOW TEST:32.397 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":309,"completed":172,"skipped":2842,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:44:27.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0666 on tmpfs Jan 12 23:44:27.732: INFO: Waiting up to 5m0s for pod "pod-5aac9c70-c886-43c0-9426-434f44ed4a24" in namespace "emptydir-6091" to be "Succeeded or Failed" Jan 12 23:44:27.735: INFO: Pod "pod-5aac9c70-c886-43c0-9426-434f44ed4a24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.902748ms Jan 12 23:44:29.744: INFO: Pod "pod-5aac9c70-c886-43c0-9426-434f44ed4a24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012035799s Jan 12 23:44:31.752: INFO: Pod "pod-5aac9c70-c886-43c0-9426-434f44ed4a24": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02017397s Jan 12 23:44:33.758: INFO: Pod "pod-5aac9c70-c886-43c0-9426-434f44ed4a24": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.025940404s STEP: Saw pod success Jan 12 23:44:33.758: INFO: Pod "pod-5aac9c70-c886-43c0-9426-434f44ed4a24" satisfied condition "Succeeded or Failed" Jan 12 23:44:33.762: INFO: Trying to get logs from node leguer-worker2 pod pod-5aac9c70-c886-43c0-9426-434f44ed4a24 container test-container: STEP: delete the pod Jan 12 23:44:33.901: INFO: Waiting for pod pod-5aac9c70-c886-43c0-9426-434f44ed4a24 to disappear Jan 12 23:44:33.908: INFO: Pod pod-5aac9c70-c886-43c0-9426-434f44ed4a24 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:44:33.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6091" for this suite. 
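The EmptyDir cases in this run, including the (non-root,0666,tmpfs) one above, boil down to mounting an emptyDir backed by tmpfs (medium Memory), creating a file with the requested mode as a non-root user, and checking the resulting permissions. A rough equivalent of such a pod, with a hypothetical UID, image, and command (the e2e test uses its own mount-test image):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRoot := int64(1000) // hypothetical non-root UID
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
			Volumes: []corev1.Volume{{
				Name: "cache",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs instead of node disk.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "writer",
				Image: "docker.io/library/busybox:1.29",
				// Create a file with the mode under test and print the result.
				Command:      []string{"sh", "-c", "touch /cache/f && chmod 0666 /cache/f && ls -l /cache/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cache", MountPath: "/cache"}},
			}},
		},
	}
	fmt.Println(pod.Name)
}
```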
• [SLOW TEST:6.285 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":173,"skipped":2843,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:44:33.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 12 23:44:38.276: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:44:38.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3126" for this suite. 
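The termination-message test above relies on TerminationMessagePolicy FallbackToLogsOnError: the kubelet only falls back to container logs when the container fails, so on a clean exit the termination message stays empty, which is exactly what the test asserts ("Expected: &{} to match"). A hedged sketch of a container configured that way (image and command are placeholders):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "echo some output; exit 0"},
				// With FallbackToLogsOnError the kubelet copies container logs into the
				// terminated state's message only when the container fails; on a clean
				// exit, as in the test above, the termination message stays empty.
				TerminationMessagePath:   "/dev/termination-log",
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].TerminationMessagePolicy)
}
```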
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":309,"completed":174,"skipped":2892,"failed":0} SSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:44:38.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward api env vars Jan 12 23:44:38.807: INFO: Waiting up to 5m0s for pod "downward-api-151d27aa-c66a-42e5-b190-6cd317825027" in namespace "downward-api-9103" to be "Succeeded or Failed" Jan 12 23:44:38.826: INFO: Pod "downward-api-151d27aa-c66a-42e5-b190-6cd317825027": Phase="Pending", Reason="", readiness=false. Elapsed: 19.129601ms Jan 12 23:44:40.847: INFO: Pod "downward-api-151d27aa-c66a-42e5-b190-6cd317825027": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040221088s Jan 12 23:44:42.851: INFO: Pod "downward-api-151d27aa-c66a-42e5-b190-6cd317825027": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044522709s STEP: Saw pod success Jan 12 23:44:42.851: INFO: Pod "downward-api-151d27aa-c66a-42e5-b190-6cd317825027" satisfied condition "Succeeded or Failed" Jan 12 23:44:42.854: INFO: Trying to get logs from node leguer-worker2 pod downward-api-151d27aa-c66a-42e5-b190-6cd317825027 container dapi-container: STEP: delete the pod Jan 12 23:44:42.892: INFO: Waiting for pod downward-api-151d27aa-c66a-42e5-b190-6cd317825027 to disappear Jan 12 23:44:42.896: INFO: Pod downward-api-151d27aa-c66a-42e5-b190-6cd317825027 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:44:42.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9103" for this suite. 
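The Downward API test above injects the container's own CPU/memory requests and limits as environment variables through resourceFieldRef. A minimal sketch with hypothetical resource values and variable names:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-env-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "env | grep -E 'CPU|MEMORY'"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("250m"),
						corev1.ResourceMemory: resource.MustParse("32Mi"),
					},
					Limits: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("500m"),
						corev1.ResourceMemory: resource.MustParse("64Mi"),
					},
				},
				Env: []corev1.EnvVar{
					// resourceFieldRef resolves against this container's own spec.
					{Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
					}},
					{Name: "MEMORY_REQUEST", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "requests.memory"},
					}},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}
```

The companion test that follows (default limits.cpu/memory from node allocatable) uses the same resourceFieldRef mechanism but omits the container's Resources entirely, in which case the projected values fall back to the node's allocatable capacity.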
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":309,"completed":175,"skipped":2896,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:44:42.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward api env vars Jan 12 23:44:43.298: INFO: Waiting up to 5m0s for pod "downward-api-d4fceeea-c95e-478e-a6f8-fdbdc64a93dd" in namespace "downward-api-8351" to be "Succeeded or Failed" Jan 12 23:44:43.329: INFO: Pod "downward-api-d4fceeea-c95e-478e-a6f8-fdbdc64a93dd": Phase="Pending", Reason="", readiness=false. Elapsed: 30.838473ms Jan 12 23:44:45.335: INFO: Pod "downward-api-d4fceeea-c95e-478e-a6f8-fdbdc64a93dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036826473s Jan 12 23:44:47.341: INFO: Pod "downward-api-d4fceeea-c95e-478e-a6f8-fdbdc64a93dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042887721s STEP: Saw pod success Jan 12 23:44:47.341: INFO: Pod "downward-api-d4fceeea-c95e-478e-a6f8-fdbdc64a93dd" satisfied condition "Succeeded or Failed" Jan 12 23:44:47.344: INFO: Trying to get logs from node leguer-worker2 pod downward-api-d4fceeea-c95e-478e-a6f8-fdbdc64a93dd container dapi-container: STEP: delete the pod Jan 12 23:44:47.414: INFO: Waiting for pod downward-api-d4fceeea-c95e-478e-a6f8-fdbdc64a93dd to disappear Jan 12 23:44:47.462: INFO: Pod downward-api-d4fceeea-c95e-478e-a6f8-fdbdc64a93dd no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:44:47.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8351" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":309,"completed":176,"skipped":2957,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:44:47.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0644 on tmpfs Jan 12 23:44:47.543: INFO: Waiting up to 5m0s for pod "pod-aca6a195-1153-4eed-99ab-2c102e7c1460" in namespace "emptydir-5797" to be "Succeeded or Failed" Jan 12 23:44:47.612: INFO: Pod "pod-aca6a195-1153-4eed-99ab-2c102e7c1460": Phase="Pending", Reason="", readiness=false. Elapsed: 69.062532ms Jan 12 23:44:49.672: INFO: Pod "pod-aca6a195-1153-4eed-99ab-2c102e7c1460": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129009581s Jan 12 23:44:51.677: INFO: Pod "pod-aca6a195-1153-4eed-99ab-2c102e7c1460": Phase="Pending", Reason="", readiness=false. Elapsed: 4.133488242s Jan 12 23:44:53.684: INFO: Pod "pod-aca6a195-1153-4eed-99ab-2c102e7c1460": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.140694944s STEP: Saw pod success Jan 12 23:44:53.684: INFO: Pod "pod-aca6a195-1153-4eed-99ab-2c102e7c1460" satisfied condition "Succeeded or Failed" Jan 12 23:44:53.687: INFO: Trying to get logs from node leguer-worker2 pod pod-aca6a195-1153-4eed-99ab-2c102e7c1460 container test-container: STEP: delete the pod Jan 12 23:44:53.717: INFO: Waiting for pod pod-aca6a195-1153-4eed-99ab-2c102e7c1460 to disappear Jan 12 23:44:53.727: INFO: Pod pod-aca6a195-1153-4eed-99ab-2c102e7c1460 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:44:53.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5797" for this suite. 
• [SLOW TEST:6.263 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":177,"skipped":2967,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:44:53.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating the pod Jan 12 23:44:58.506: INFO: Successfully updated pod "annotationupdated040dff0-045c-488e-817a-70464c17006f" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:45:00.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9987" for this suite. 
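The Downward API volume test above mounts metadata.annotations as a file and then updates the pod's annotations, expecting the kubelet to refresh the projected file. A rough sketch of such a pod (names, paths, and the annotation value are placeholders):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotation-volume-demo",
			Annotations: map[string]string{"build": "one"}, // later patched, e.g. to "two"
		},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "annotations",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "reader",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	fmt.Println(pod.Name)
}
```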
• [SLOW TEST:6.827 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":309,"completed":178,"skipped":2978,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:45:00.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod liveness-d731d665-11fd-4297-b46d-7bf969c026fd in namespace container-probe-5917 Jan 12 23:45:04.668: INFO: Started pod liveness-d731d665-11fd-4297-b46d-7bf969c026fd in namespace container-probe-5917 STEP: checking the pod's current state and verifying that restartCount is present Jan 12 23:45:04.671: INFO: Initial restart count of pod liveness-d731d665-11fd-4297-b46d-7bf969c026fd is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:49:05.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5917" for this suite. 
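The probe test above attaches a TCP liveness probe on port 8080 and verifies that the restart count stays at 0 while the port remains open (hence the long four-minute observation window). A sketch of an equivalent probe definition, assuming a server that actually listens on 8080; the agnhost arguments here are illustrative, not taken from the test:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "tcp-liveness-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "server",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.21",
				Args:  []string{"netexec", "--http-port=8080"}, // illustrative: a server listening on 8080
				LivenessProbe: &corev1.Probe{
					// In the v1.20-era API used by this run the embedded field is Handler;
					// newer client-go releases renamed it to ProbeHandler.
					Handler: corev1.Handler{
						TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(8080)},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       10,
					FailureThreshold:    3,
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}
```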
• [SLOW TEST:245.357 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":309,"completed":179,"skipped":2993,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:49:05.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 12 23:49:06.390: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:49:08.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7627" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":309,"completed":180,"skipped":2997,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:49:08.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:49:12.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7517" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":309,"completed":181,"skipped":3000,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:49:12.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating projection with secret that has name projected-secret-test-map-2a6d3c58-4baa-4f27-883f-37b5f70c6b12 STEP: Creating a pod to test consume secrets Jan 12 23:49:12.513: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c19397c0-2a1d-4cdf-af6b-641e11fce6de" in namespace "projected-2946" to be "Succeeded or Failed" Jan 12 23:49:12.517: INFO: Pod "pod-projected-secrets-c19397c0-2a1d-4cdf-af6b-641e11fce6de": Phase="Pending", Reason="", readiness=false. Elapsed: 3.441392ms Jan 12 23:49:14.521: INFO: Pod "pod-projected-secrets-c19397c0-2a1d-4cdf-af6b-641e11fce6de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007351416s Jan 12 23:49:16.542: INFO: Pod "pod-projected-secrets-c19397c0-2a1d-4cdf-af6b-641e11fce6de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028637736s STEP: Saw pod success Jan 12 23:49:16.542: INFO: Pod "pod-projected-secrets-c19397c0-2a1d-4cdf-af6b-641e11fce6de" satisfied condition "Succeeded or Failed" Jan 12 23:49:16.545: INFO: Trying to get logs from node leguer-worker pod pod-projected-secrets-c19397c0-2a1d-4cdf-af6b-641e11fce6de container projected-secret-volume-test: STEP: delete the pod Jan 12 23:49:16.583: INFO: Waiting for pod pod-projected-secrets-c19397c0-2a1d-4cdf-af6b-641e11fce6de to disappear Jan 12 23:49:16.595: INFO: Pod pod-projected-secrets-c19397c0-2a1d-4cdf-af6b-641e11fce6de no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:49:16.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2946" for this suite. 
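The projected-secret test above maps a secret key to a custom path and sets a per-item file mode inside a projected volume, then has the test container read the file back. A sketch with a hypothetical key, path, and 0400 mode (the real test chooses and asserts its own values):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // hypothetical per-item file mode
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-secret",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"},
								// Map the key "data-1" to a new path and give that file its own mode.
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1", Mode: &mode}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/projected/new-path-data-1 && ls -l /etc/projected"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-secret", MountPath: "/etc/projected", ReadOnly: true}},
			}},
		},
	}
	fmt.Println(pod.Name)
}
```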
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":182,"skipped":3016,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:49:16.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1392 STEP: creating a pod Jan 12 23:49:16.815: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1664 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.21 --restart=Never -- logs-generator --log-lines-total 100 --run-duration 20s' Jan 12 23:49:16.949: INFO: stderr: "" Jan 12 23:49:16.949: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Waiting for log generator to start. Jan 12 23:49:16.949: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Jan 12 23:49:16.949: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-1664" to be "running and ready, or succeeded" Jan 12 23:49:16.954: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.695576ms Jan 12 23:49:18.958: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008964985s Jan 12 23:49:20.963: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.013861887s Jan 12 23:49:20.963: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Jan 12 23:49:20.963: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for a matching strings Jan 12 23:49:20.963: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1664 logs logs-generator logs-generator' Jan 12 23:49:21.087: INFO: stderr: "" Jan 12 23:49:21.087: INFO: stdout: "I0112 23:49:19.411129 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/bnm 518\nI0112 23:49:19.611300 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/2bt8 570\nI0112 23:49:19.811295 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/gzx 208\nI0112 23:49:20.011295 1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/wlp 260\nI0112 23:49:20.211351 1 logs_generator.go:76] 4 POST /api/v1/namespaces/ns/pods/v9q 267\nI0112 23:49:20.411335 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/7ccv 543\nI0112 23:49:20.611332 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/bwd 234\nI0112 23:49:20.811288 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/2ql 222\nI0112 23:49:21.011326 1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/vxqj 478\n" STEP: limiting log lines Jan 12 23:49:21.088: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1664 logs logs-generator logs-generator --tail=1' Jan 12 23:49:21.199: INFO: stderr: "" Jan 12 23:49:21.199: INFO: stdout: "I0112 23:49:21.011326 1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/vxqj 478\n" Jan 12 23:49:21.199: INFO: got output "I0112 23:49:21.011326 1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/vxqj 478\n" STEP: limiting log bytes Jan 12 23:49:21.199: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1664 logs logs-generator logs-generator --limit-bytes=1' Jan 12 23:49:21.307: INFO: stderr: "" Jan 12 23:49:21.307: INFO: stdout: "I" Jan 12 23:49:21.307: INFO: got output "I" STEP: exposing timestamps Jan 12 23:49:21.307: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1664 logs logs-generator logs-generator --tail=1 --timestamps' Jan 12 23:49:21.413: INFO: stderr: "" Jan 12 23:49:21.413: INFO: stdout: "2021-01-12T23:49:21.211520306Z I0112 23:49:21.211277 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/sfk2 538\n" Jan 12 23:49:21.413: INFO: got output "2021-01-12T23:49:21.211520306Z I0112 23:49:21.211277 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/sfk2 538\n" STEP: restricting to a time range Jan 12 23:49:23.913: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1664 logs logs-generator logs-generator --since=1s' Jan 12 23:49:24.028: INFO: stderr: "" Jan 12 23:49:24.028: INFO: stdout: "I0112 23:49:23.211294 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/default/pods/nb8 273\nI0112 23:49:23.411284 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/7dg9 461\nI0112 23:49:23.611281 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/default/pods/5l4d 259\nI0112 23:49:23.811310 1 logs_generator.go:76] 22 GET /api/v1/namespaces/kube-system/pods/g45 404\nI0112 23:49:24.011265 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/ns/pods/nkz 205\n" Jan 12 23:49:24.029: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config 
--namespace=kubectl-1664 logs logs-generator logs-generator --since=24h' Jan 12 23:49:24.127: INFO: stderr: "" Jan 12 23:49:24.127: INFO: stdout: "I0112 23:49:19.411129 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/bnm 518\nI0112 23:49:19.611300 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/2bt8 570\nI0112 23:49:19.811295 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/gzx 208\nI0112 23:49:20.011295 1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/wlp 260\nI0112 23:49:20.211351 1 logs_generator.go:76] 4 POST /api/v1/namespaces/ns/pods/v9q 267\nI0112 23:49:20.411335 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/7ccv 543\nI0112 23:49:20.611332 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/bwd 234\nI0112 23:49:20.811288 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/2ql 222\nI0112 23:49:21.011326 1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/vxqj 478\nI0112 23:49:21.211277 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/sfk2 538\nI0112 23:49:21.411238 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/krtq 418\nI0112 23:49:21.611307 1 logs_generator.go:76] 11 GET /api/v1/namespaces/kube-system/pods/gfm 316\nI0112 23:49:21.811325 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/kube-system/pods/pwb 589\nI0112 23:49:22.011333 1 logs_generator.go:76] 13 POST /api/v1/namespaces/kube-system/pods/j6jg 278\nI0112 23:49:22.211340 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/default/pods/hrcg 219\nI0112 23:49:22.411294 1 logs_generator.go:76] 15 POST /api/v1/namespaces/kube-system/pods/nxh 255\nI0112 23:49:22.611271 1 logs_generator.go:76] 16 GET /api/v1/namespaces/ns/pods/cm4q 502\nI0112 23:49:22.811362 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/ns/pods/rprc 446\nI0112 23:49:23.011304 1 logs_generator.go:76] 18 GET /api/v1/namespaces/kube-system/pods/5jq5 307\nI0112 23:49:23.211294 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/default/pods/nb8 273\nI0112 23:49:23.411284 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/7dg9 461\nI0112 23:49:23.611281 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/default/pods/5l4d 259\nI0112 23:49:23.811310 1 logs_generator.go:76] 22 GET /api/v1/namespaces/kube-system/pods/g45 404\nI0112 23:49:24.011265 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/ns/pods/nkz 205\n" [AfterEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1397 Jan 12 23:49:24.127: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1664 delete pod logs-generator' Jan 12 23:49:29.846: INFO: stderr: "" Jan 12 23:49:29.846: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:49:29.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1664" for this suite. 
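The kubectl flags exercised above (--tail, --limit-bytes, --timestamps, --since) all correspond to fields of PodLogOptions in the API, so the same log filtering can be done programmatically. A hedged client-go sketch against the pod used in this test, with the kubeconfig path taken from this run and error handling kept minimal:

```go
package main

import (
	"context"
	"fmt"
	"io"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Namespace/pod/container names mirror the test above.
	const ns, pod, container = "kubectl-1664", "logs-generator", "logs-generator"

	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	tail := int64(1)     // like `kubectl logs --tail=1`
	since := int64(1)    // like `--since=1s`
	limit := int64(1024) // like `--limit-bytes=1024`

	opts := &corev1.PodLogOptions{
		Container:    container,
		TailLines:    &tail,
		SinceSeconds: &since,
		LimitBytes:   &limit,
		Timestamps:   true, // like `--timestamps`
	}
	rc, err := cs.CoreV1().Pods(ns).GetLogs(pod, opts).Stream(context.Background())
	if err != nil {
		panic(err)
	}
	defer rc.Close()
	out, _ := io.ReadAll(rc)
	fmt.Print(string(out))
}
```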
• [SLOW TEST:13.281 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1389 should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":309,"completed":183,"skipped":3039,"failed":0} SSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:49:29.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 12 23:49:29.965: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jan 12 23:49:29.997: INFO: Pod name sample-pod: Found 0 pods out of 1 Jan 12 23:49:35.001: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 12 23:49:35.001: INFO: Creating deployment "test-rolling-update-deployment" Jan 12 23:49:35.005: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jan 12 23:49:35.011: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jan 12 23:49:37.022: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jan 12 23:49:37.025: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746092175, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746092175, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746092175, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746092175, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-6b6bf9df46\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 12 23:49:39.029: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 Jan 12 23:49:39.038: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-6659 e84f75f5-7804-48b5-8415-1a78a9de3c95 431824 1 2021-01-12 23:49:35 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2021-01-12 23:49:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-01-12 23:49:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005520758 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-01-12 23:49:35 +0000 UTC,LastTransitionTime:2021-01-12 23:49:35 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-6b6bf9df46" has successfully progressed.,LastUpdateTime:2021-01-12 23:49:38 +0000 UTC,LastTransitionTime:2021-01-12 23:49:35 +0000 
UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jan 12 23:49:39.042: INFO: New ReplicaSet "test-rolling-update-deployment-6b6bf9df46" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-6b6bf9df46 deployment-6659 0b10f47d-bba8-467e-964d-8b422811d38a 431813 1 2021-01-12 23:49:35 +0000 UTC map[name:sample-pod pod-template-hash:6b6bf9df46] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment e84f75f5-7804-48b5-8415-1a78a9de3c95 0xc005520be7 0xc005520be8}] [] [{kube-controller-manager Update apps/v1 2021-01-12 23:49:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e84f75f5-7804-48b5-8415-1a78a9de3c95\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 6b6bf9df46,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:6b6bf9df46] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005520c78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 12 23:49:39.042: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jan 12 23:49:39.042: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-6659 f0f08083-29f2-48e1-97b2-78ec85bab6e8 431823 2 2021-01-12 23:49:29 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment e84f75f5-7804-48b5-8415-1a78a9de3c95 0xc005520ad7 0xc005520ad8}] [] [{e2e.test Update apps/v1 2021-01-12 23:49:29 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-01-12 23:49:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e84f75f5-7804-48b5-8415-1a78a9de3c95\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc005520b78 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 12 23:49:39.045: INFO: Pod "test-rolling-update-deployment-6b6bf9df46-tj5pv" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-6b6bf9df46-tj5pv test-rolling-update-deployment-6b6bf9df46- deployment-6659 9fb0980f-fa5d-4c70-9c23-7fdcd89d5add 431812 0 2021-01-12 23:49:35 +0000 UTC map[name:sample-pod pod-template-hash:6b6bf9df46] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-6b6bf9df46 0b10f47d-bba8-467e-964d-8b422811d38a 0xc005521087 0xc005521088}] [] [{kube-controller-manager Update v1 2021-01-12 23:49:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0b10f47d-bba8-467e-964d-8b422811d38a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-12 23:49:38 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.106\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r9t9w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r9t9w,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r9t9w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:49:35 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:49:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:49:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:49:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.1.106,StartTime:2021-01-12 23:49:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-12 23:49:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a,ContainerID:containerd://ef5f40533a6a307cd51e46b68f2dece16e6e79ed37950886a05fd58255f3fcda,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.106,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:49:39.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6659" for this suite. • [SLOW TEST:9.169 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":309,"completed":184,"skipped":3045,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:49:39.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0777 on node default medium Jan 12 23:49:39.210: INFO: Waiting up to 5m0s for pod "pod-dee13a38-07cd-4899-874c-6aa272f643ee" in namespace "emptydir-5561" to be "Succeeded or Failed" Jan 12 23:49:39.230: INFO: Pod "pod-dee13a38-07cd-4899-874c-6aa272f643ee": Phase="Pending", Reason="", readiness=false. Elapsed: 19.931847ms Jan 12 23:49:41.235: INFO: Pod "pod-dee13a38-07cd-4899-874c-6aa272f643ee": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.025251681s Jan 12 23:49:43.267: INFO: Pod "pod-dee13a38-07cd-4899-874c-6aa272f643ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057789395s STEP: Saw pod success Jan 12 23:49:43.268: INFO: Pod "pod-dee13a38-07cd-4899-874c-6aa272f643ee" satisfied condition "Succeeded or Failed" Jan 12 23:49:43.270: INFO: Trying to get logs from node leguer-worker pod pod-dee13a38-07cd-4899-874c-6aa272f643ee container test-container: STEP: delete the pod Jan 12 23:49:43.316: INFO: Waiting for pod pod-dee13a38-07cd-4899-874c-6aa272f643ee to disappear Jan 12 23:49:43.321: INFO: Pod pod-dee13a38-07cd-4899-874c-6aa272f643ee no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:49:43.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5561" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":185,"skipped":3050,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should run through the lifecycle of Pods and PodStatus [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:49:43.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should run through the lifecycle of Pods and PodStatus [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a Pod with a static label STEP: watching for Pod to be ready Jan 12 23:49:43.447: INFO: observed Pod pod-test in namespace pods-3935 in phase Pending conditions [] Jan 12 23:49:43.450: INFO: observed Pod pod-test in namespace pods-3935 in phase Pending conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-12 23:49:43 +0000 UTC }] Jan 12 23:49:43.513: INFO: observed Pod pod-test in namespace pods-3935 in phase Pending conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-12 23:49:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-12 23:49:43 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-12 23:49:43 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-12 23:49:43 +0000 UTC }] STEP: patching the Pod with a new Label and updated data Jan 12 23:49:47.578: INFO: observed event type ADDED STEP: getting the Pod and ensuring that it's patched STEP: getting the PodStatus STEP: replacing the Pod's status Ready condition to False STEP: check the Pod again to ensure its Ready conditions are False STEP: deleting the Pod via a Collection with a LabelSelector STEP: watching for the Pod to be deleted Jan 12 23:49:47.629: INFO: observed event type ADDED Jan 12 23:49:47.629: INFO: observed event type MODIFIED Jan 12 23:49:47.630: INFO: observed event type 
MODIFIED Jan 12 23:49:47.630: INFO: observed event type MODIFIED Jan 12 23:49:47.630: INFO: observed event type MODIFIED Jan 12 23:49:47.630: INFO: observed event type MODIFIED Jan 12 23:49:47.630: INFO: observed event type MODIFIED [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:49:47.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3935" for this suite. •{"msg":"PASSED [k8s.io] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":309,"completed":186,"skipped":3080,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:49:47.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service nodeport-test with type=NodePort in namespace services-8437 STEP: creating replication controller nodeport-test in namespace services-8437 I0112 23:49:48.150525 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-8437, replica count: 2 I0112 23:49:51.201051 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0112 23:49:54.201289 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 12 23:49:54.201: INFO: Creating new exec pod Jan 12 23:49:59.249: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-8437 exec execpod2t5gt -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Jan 12 23:49:59.481: INFO: stderr: "I0112 23:49:59.386373 2692 log.go:181] (0xc00018c420) (0xc00083e1e0) Create stream\nI0112 23:49:59.386444 2692 log.go:181] (0xc00018c420) (0xc00083e1e0) Stream added, broadcasting: 1\nI0112 23:49:59.389211 2692 log.go:181] (0xc00018c420) Reply frame received for 1\nI0112 23:49:59.389270 2692 log.go:181] (0xc00018c420) (0xc00063a3c0) Create stream\nI0112 23:49:59.389290 2692 log.go:181] (0xc00018c420) (0xc00063a3c0) Stream added, broadcasting: 3\nI0112 23:49:59.390407 2692 log.go:181] (0xc00018c420) Reply frame received for 3\nI0112 23:49:59.390457 2692 log.go:181] (0xc00018c420) (0xc00083e320) Create stream\nI0112 23:49:59.390474 2692 log.go:181] (0xc00018c420) (0xc00083e320) Stream added, broadcasting: 5\nI0112 23:49:59.391473 2692 log.go:181] (0xc00018c420) Reply frame received for 5\nI0112 23:49:59.473328 2692 log.go:181] (0xc00018c420) Data frame received for 5\nI0112 23:49:59.473370 2692 log.go:181] (0xc00083e320) (5) Data frame handling\nI0112 
23:49:59.473401 2692 log.go:181] (0xc00083e320) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0112 23:49:59.473786 2692 log.go:181] (0xc00018c420) Data frame received for 5\nI0112 23:49:59.473800 2692 log.go:181] (0xc00083e320) (5) Data frame handling\nI0112 23:49:59.473821 2692 log.go:181] (0xc00083e320) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0112 23:49:59.474104 2692 log.go:181] (0xc00018c420) Data frame received for 3\nI0112 23:49:59.474125 2692 log.go:181] (0xc00063a3c0) (3) Data frame handling\nI0112 23:49:59.474148 2692 log.go:181] (0xc00018c420) Data frame received for 5\nI0112 23:49:59.474169 2692 log.go:181] (0xc00083e320) (5) Data frame handling\nI0112 23:49:59.476053 2692 log.go:181] (0xc00018c420) Data frame received for 1\nI0112 23:49:59.476071 2692 log.go:181] (0xc00083e1e0) (1) Data frame handling\nI0112 23:49:59.476081 2692 log.go:181] (0xc00083e1e0) (1) Data frame sent\nI0112 23:49:59.476180 2692 log.go:181] (0xc00018c420) (0xc00083e1e0) Stream removed, broadcasting: 1\nI0112 23:49:59.476397 2692 log.go:181] (0xc00018c420) Go away received\nI0112 23:49:59.476516 2692 log.go:181] (0xc00018c420) (0xc00083e1e0) Stream removed, broadcasting: 1\nI0112 23:49:59.476529 2692 log.go:181] (0xc00018c420) (0xc00063a3c0) Stream removed, broadcasting: 3\nI0112 23:49:59.476535 2692 log.go:181] (0xc00018c420) (0xc00083e320) Stream removed, broadcasting: 5\n" Jan 12 23:49:59.481: INFO: stdout: "" Jan 12 23:49:59.482: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-8437 exec execpod2t5gt -- /bin/sh -x -c nc -zv -t -w 2 10.96.60.11 80' Jan 12 23:49:59.689: INFO: stderr: "I0112 23:49:59.618804 2710 log.go:181] (0xc00062f290) (0xc000e0cbe0) Create stream\nI0112 23:49:59.618879 2710 log.go:181] (0xc00062f290) (0xc000e0cbe0) Stream added, broadcasting: 1\nI0112 23:49:59.622338 2710 log.go:181] (0xc00062f290) Reply frame received for 1\nI0112 23:49:59.622422 2710 log.go:181] (0xc00062f290) (0xc000c86000) Create stream\nI0112 23:49:59.622462 2710 log.go:181] (0xc00062f290) (0xc000c86000) Stream added, broadcasting: 3\nI0112 23:49:59.623524 2710 log.go:181] (0xc00062f290) Reply frame received for 3\nI0112 23:49:59.623558 2710 log.go:181] (0xc00062f290) (0xc000e0c000) Create stream\nI0112 23:49:59.623568 2710 log.go:181] (0xc00062f290) (0xc000e0c000) Stream added, broadcasting: 5\nI0112 23:49:59.624457 2710 log.go:181] (0xc00062f290) Reply frame received for 5\nI0112 23:49:59.681811 2710 log.go:181] (0xc00062f290) Data frame received for 3\nI0112 23:49:59.681857 2710 log.go:181] (0xc000c86000) (3) Data frame handling\nI0112 23:49:59.681897 2710 log.go:181] (0xc00062f290) Data frame received for 5\nI0112 23:49:59.681908 2710 log.go:181] (0xc000e0c000) (5) Data frame handling\nI0112 23:49:59.681921 2710 log.go:181] (0xc000e0c000) (5) Data frame sent\nI0112 23:49:59.681937 2710 log.go:181] (0xc00062f290) Data frame received for 5\nI0112 23:49:59.681950 2710 log.go:181] (0xc000e0c000) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.60.11 80\nConnection to 10.96.60.11 80 port [tcp/http] succeeded!\nI0112 23:49:59.683368 2710 log.go:181] (0xc00062f290) Data frame received for 1\nI0112 23:49:59.683395 2710 log.go:181] (0xc000e0cbe0) (1) Data frame handling\nI0112 23:49:59.683412 2710 log.go:181] (0xc000e0cbe0) (1) Data frame sent\nI0112 23:49:59.683428 2710 log.go:181] (0xc00062f290) (0xc000e0cbe0) Stream removed, broadcasting: 1\nI0112 23:49:59.683443 2710 log.go:181] 
(0xc00062f290) Go away received\nI0112 23:49:59.683915 2710 log.go:181] (0xc00062f290) (0xc000e0cbe0) Stream removed, broadcasting: 1\nI0112 23:49:59.683937 2710 log.go:181] (0xc00062f290) (0xc000c86000) Stream removed, broadcasting: 3\nI0112 23:49:59.683959 2710 log.go:181] (0xc00062f290) (0xc000e0c000) Stream removed, broadcasting: 5\n" Jan 12 23:49:59.689: INFO: stdout: "" Jan 12 23:49:59.689: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-8437 exec execpod2t5gt -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 31142' Jan 12 23:49:59.896: INFO: stderr: "I0112 23:49:59.826533 2728 log.go:181] (0xc0005f2000) (0xc000c28000) Create stream\nI0112 23:49:59.826608 2728 log.go:181] (0xc0005f2000) (0xc000c28000) Stream added, broadcasting: 1\nI0112 23:49:59.829648 2728 log.go:181] (0xc0005f2000) Reply frame received for 1\nI0112 23:49:59.829729 2728 log.go:181] (0xc0005f2000) (0xc0005ea3c0) Create stream\nI0112 23:49:59.829747 2728 log.go:181] (0xc0005f2000) (0xc0005ea3c0) Stream added, broadcasting: 3\nI0112 23:49:59.831471 2728 log.go:181] (0xc0005f2000) Reply frame received for 3\nI0112 23:49:59.831508 2728 log.go:181] (0xc0005f2000) (0xc000a90000) Create stream\nI0112 23:49:59.831518 2728 log.go:181] (0xc0005f2000) (0xc000a90000) Stream added, broadcasting: 5\nI0112 23:49:59.832571 2728 log.go:181] (0xc0005f2000) Reply frame received for 5\nI0112 23:49:59.886889 2728 log.go:181] (0xc0005f2000) Data frame received for 3\nI0112 23:49:59.886933 2728 log.go:181] (0xc0005ea3c0) (3) Data frame handling\nI0112 23:49:59.887187 2728 log.go:181] (0xc0005f2000) Data frame received for 5\nI0112 23:49:59.887219 2728 log.go:181] (0xc000a90000) (5) Data frame handling\nI0112 23:49:59.887233 2728 log.go:181] (0xc000a90000) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.13 31142\nConnection to 172.18.0.13 31142 port [tcp/31142] succeeded!\nI0112 23:49:59.887243 2728 log.go:181] (0xc0005f2000) Data frame received for 5\nI0112 23:49:59.887305 2728 log.go:181] (0xc000a90000) (5) Data frame handling\nI0112 23:49:59.889495 2728 log.go:181] (0xc0005f2000) Data frame received for 1\nI0112 23:49:59.889516 2728 log.go:181] (0xc000c28000) (1) Data frame handling\nI0112 23:49:59.889527 2728 log.go:181] (0xc000c28000) (1) Data frame sent\nI0112 23:49:59.889538 2728 log.go:181] (0xc0005f2000) (0xc000c28000) Stream removed, broadcasting: 1\nI0112 23:49:59.889945 2728 log.go:181] (0xc0005f2000) Go away received\nI0112 23:49:59.890015 2728 log.go:181] (0xc0005f2000) (0xc000c28000) Stream removed, broadcasting: 1\nI0112 23:49:59.890074 2728 log.go:181] (0xc0005f2000) (0xc0005ea3c0) Stream removed, broadcasting: 3\nI0112 23:49:59.890090 2728 log.go:181] (0xc0005f2000) (0xc000a90000) Stream removed, broadcasting: 5\n" Jan 12 23:49:59.896: INFO: stdout: "" Jan 12 23:49:59.896: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-8437 exec execpod2t5gt -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 31142' Jan 12 23:50:00.149: INFO: stderr: "I0112 23:50:00.048186 2746 log.go:181] (0xc000143ad0) (0xc000bb4a00) Create stream\nI0112 23:50:00.048249 2746 log.go:181] (0xc000143ad0) (0xc000bb4a00) Stream added, broadcasting: 1\nI0112 23:50:00.050619 2746 log.go:181] (0xc000143ad0) Reply frame received for 1\nI0112 23:50:00.050676 2746 log.go:181] (0xc000143ad0) (0xc000632000) Create stream\nI0112 23:50:00.050702 2746 log.go:181] (0xc000143ad0) (0xc000632000) Stream added, broadcasting: 
3\nI0112 23:50:00.051781 2746 log.go:181] (0xc000143ad0) Reply frame received for 3\nI0112 23:50:00.051843 2746 log.go:181] (0xc000143ad0) (0xc0006320a0) Create stream\nI0112 23:50:00.051868 2746 log.go:181] (0xc000143ad0) (0xc0006320a0) Stream added, broadcasting: 5\nI0112 23:50:00.053078 2746 log.go:181] (0xc000143ad0) Reply frame received for 5\nI0112 23:50:00.141251 2746 log.go:181] (0xc000143ad0) Data frame received for 5\nI0112 23:50:00.141294 2746 log.go:181] (0xc0006320a0) (5) Data frame handling\nI0112 23:50:00.141312 2746 log.go:181] (0xc0006320a0) (5) Data frame sent\nI0112 23:50:00.141328 2746 log.go:181] (0xc000143ad0) Data frame received for 5\nI0112 23:50:00.141339 2746 log.go:181] (0xc0006320a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.12 31142\nConnection to 172.18.0.12 31142 port [tcp/31142] succeeded!\nI0112 23:50:00.141374 2746 log.go:181] (0xc000143ad0) Data frame received for 3\nI0112 23:50:00.141403 2746 log.go:181] (0xc000632000) (3) Data frame handling\nI0112 23:50:00.142751 2746 log.go:181] (0xc000143ad0) Data frame received for 1\nI0112 23:50:00.142771 2746 log.go:181] (0xc000bb4a00) (1) Data frame handling\nI0112 23:50:00.142785 2746 log.go:181] (0xc000bb4a00) (1) Data frame sent\nI0112 23:50:00.142824 2746 log.go:181] (0xc000143ad0) (0xc000bb4a00) Stream removed, broadcasting: 1\nI0112 23:50:00.142843 2746 log.go:181] (0xc000143ad0) Go away received\nI0112 23:50:00.143308 2746 log.go:181] (0xc000143ad0) (0xc000bb4a00) Stream removed, broadcasting: 1\nI0112 23:50:00.143335 2746 log.go:181] (0xc000143ad0) (0xc000632000) Stream removed, broadcasting: 3\nI0112 23:50:00.143350 2746 log.go:181] (0xc000143ad0) (0xc0006320a0) Stream removed, broadcasting: 5\n" Jan 12 23:50:00.149: INFO: stdout: "" [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:50:00.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8437" for this suite. 
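The three nc probes above reach the same backends by service name, by cluster IP (10.96.60.11:80), and by node IP on the allocated node port (172.18.0.13:31142 and 172.18.0.12:31142). For reproducing that setup outside the suite, the following is a minimal client-go sketch, not the test's own code: the kubeconfig path, namespace, selector label, and ports are assumptions, and the node port is whatever the API server allocates, not necessarily 31142.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; the suite loads the same file.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	c := kubernetes.NewForConfigOrDie(cfg)
	ns := "default" // the suite uses a generated services-* namespace

	// A NodePort Service selecting pods labelled name=nodeport-test (label assumed).
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "nodeport-test"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeNodePort,
			Selector: map[string]string{"name": "nodeport-test"},
			Ports: []corev1.ServicePort{{
				Port:       80,                 // cluster-IP port, probed with `nc -zv <clusterIP> 80`
				TargetPort: intstr.FromInt(80), // container port on the backing pods
			}},
		},
	}
	created, err := c.CoreV1().Services(ns).Create(context.TODO(), svc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	// The API server fills in spec.ports[0].nodePort, which is then reachable
	// on every node IP, as the last two nc checks in the log demonstrate.
	fmt.Println("allocated node port:", created.Spec.Ports[0].NodePort)
}

With the Service in place, the same nc -zv checks the test runs (service name, cluster IP, then each node IP on the reported node port) can be repeated from any pod in the namespace.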
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:12.532 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":309,"completed":187,"skipped":3096,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:50:00.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod busybox-e1a0dc04-6440-4b3f-a6ef-537ca5eee37b in namespace container-probe-1998 Jan 12 23:50:04.436: INFO: Started pod busybox-e1a0dc04-6440-4b3f-a6ef-537ca5eee37b in namespace container-probe-1998 STEP: checking the pod's current state and verifying that restartCount is present Jan 12 23:50:04.439: INFO: Initial restart count of pod busybox-e1a0dc04-6440-4b3f-a6ef-537ca5eee37b is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:54:05.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1998" for this suite. 
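The probe test above creates a busybox pod whose exec liveness probe runs cat /tmp/health, then watches restartCount for roughly four minutes (23:50:04 to 23:54:05) and passes as long as the count never rises above its initial value of 0. A rough client-go sketch of an equivalent pod follows; it is not the test fixture itself, and the image, command, probe timings, namespace, and kubeconfig path are assumptions. It uses the v1.20-era core/v1 types, where Probe still embeds Handler (later releases rename it to ProbeHandler).

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	c := kubernetes.NewForConfigOrDie(cfg)
	ns := "default" // the suite uses a generated container-probe-* namespace

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-liveness"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox",
				// Keep /tmp/health in place so the probe keeps succeeding.
				Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       5,
				},
			}},
		},
	}
	if _, err := c.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Poll restartCount for a few minutes; for a healthy probe it stays at 0.
	for i := 0; i < 8; i++ {
		time.Sleep(30 * time.Second)
		p, err := c.CoreV1().Pods(ns).Get(context.TODO(), pod.Name, metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if len(p.Status.ContainerStatuses) > 0 {
			fmt.Println("restartCount:", p.Status.ContainerStatuses[0].RestartCount)
		}
	}
}

Removing /tmp/health inside the container would make the probe fail and drive restartCount up instead, which is what the companion "should be restarted" probe tests exercise.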
• [SLOW TEST:245.164 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":309,"completed":188,"skipped":3113,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:54:05.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 12 23:54:05.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Jan 12 23:54:06.045: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-01-12T23:54:06Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-01-12T23:54:06Z]] name:name1 resourceVersion:432529 uid:12f6a1b0-f9c8-4e8f-b792-8bac3084500a] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Jan 12 23:54:16.053: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-01-12T23:54:16Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-01-12T23:54:16Z]] name:name2 resourceVersion:432558 uid:a44e274f-da39-4840-b5d8-a21989b80f87] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Jan 12 23:54:26.061: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-01-12T23:54:06Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-01-12T23:54:26Z]] name:name1 resourceVersion:432578 uid:12f6a1b0-f9c8-4e8f-b792-8bac3084500a] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Jan 12 23:54:36.069: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu 
metadata:map[creationTimestamp:2021-01-12T23:54:16Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-01-12T23:54:36Z]] name:name2 resourceVersion:432598 uid:a44e274f-da39-4840-b5d8-a21989b80f87] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Jan 12 23:54:46.079: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-01-12T23:54:06Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-01-12T23:54:26Z]] name:name1 resourceVersion:432618 uid:12f6a1b0-f9c8-4e8f-b792-8bac3084500a] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Jan 12 23:54:56.087: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-01-12T23:54:16Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-01-12T23:54:36Z]] name:name2 resourceVersion:432638 uid:a44e274f-da39-4840-b5d8-a21989b80f87] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:55:06.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-3109" for this suite. 
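The watch test above creates two instances of a WishIHadChosenNoxu custom resource, modifies each, then deletes each, and verifies that the watcher sees the corresponding ADDED, MODIFIED, and DELETED events in order. A dynamic-client sketch of such a watcher is below; it is not the suite's implementation, and the resource plural ("noxus"), the cluster-wide watch scope, and the kubeconfig path are assumptions, since only the group, version, and kind appear in the log.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Group and version match the log; the plural "noxus" is an assumption.
	gvr := schema.GroupVersionResource{
		Group:    "mygroup.example.com",
		Version:  "v1beta1",
		Resource: "noxus",
	}
	w, err := dyn.Resource(gvr).Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Each create/update/delete of a CR arrives as ADDED / MODIFIED / DELETED,
	// mirroring the "Got : ADDED/MODIFIED/DELETED" lines in the log above.
	for ev := range w.ResultChan() {
		obj := ev.Object.(*unstructured.Unstructured)
		fmt.Println(ev.Type, obj.GetName(), obj.GetResourceVersion())
	}
}

Each event carries the full object, so fields such as resourceVersion and the managedFields entries shown in the log can be read straight from the unstructured payload.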
• [SLOW TEST:61.251 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":309,"completed":189,"skipped":3125,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:55:06.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 [It] should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a Deployment STEP: waiting for Deployment to be created STEP: waiting for all Replicas to be Ready Jan 12 23:55:06.735: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 12 23:55:06.735: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 12 23:55:06.804: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 12 23:55:06.804: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 12 23:55:06.864: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 12 23:55:06.864: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 12 23:55:06.948: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 12 23:55:06.949: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 12 23:55:11.100: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 1 and labels map[test-deployment-static:true] Jan 12 23:55:11.100: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 1 and labels map[test-deployment-static:true] Jan 12 23:55:11.288: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 2 and labels 
map[test-deployment-static:true] STEP: patching the Deployment Jan 12 23:55:11.296: INFO: observed event type ADDED STEP: waiting for Replicas to scale Jan 12 23:55:11.297: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 0 Jan 12 23:55:11.297: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 0 Jan 12 23:55:11.297: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 0 Jan 12 23:55:11.297: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 0 Jan 12 23:55:11.298: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 0 Jan 12 23:55:11.298: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 0 Jan 12 23:55:11.298: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 0 Jan 12 23:55:11.298: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 0 Jan 12 23:55:11.298: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 1 Jan 12 23:55:11.298: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 1 Jan 12 23:55:11.298: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 2 Jan 12 23:55:11.298: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 2 Jan 12 23:55:11.298: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 2 Jan 12 23:55:11.298: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 2 Jan 12 23:55:11.370: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 2 Jan 12 23:55:11.370: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 2 Jan 12 23:55:11.444: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 2 Jan 12 23:55:11.444: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 2 Jan 12 23:55:11.576: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 2 Jan 12 23:55:11.576: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 2 Jan 12 23:55:11.750: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 1 STEP: listing Deployments Jan 12 23:55:11.754: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true] STEP: updating the Deployment Jan 12 23:55:11.766: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 1 STEP: fetching the DeploymentStatus Jan 12 23:55:11.947: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 12 23:55:12.024: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 12 23:55:12.114: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 12 23:55:12.663: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 1 and labels map[test-deployment:updated 
test-deployment-static:true] Jan 12 23:55:12.979: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 12 23:55:13.058: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 12 23:55:13.451: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 12 23:55:13.633: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] STEP: patching the DeploymentStatus STEP: fetching the DeploymentStatus Jan 12 23:55:18.060: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 1 Jan 12 23:55:18.060: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 1 Jan 12 23:55:18.060: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 1 Jan 12 23:55:18.060: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 1 Jan 12 23:55:18.060: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 1 Jan 12 23:55:18.061: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 1 Jan 12 23:55:18.061: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 1 Jan 12 23:55:18.061: INFO: observed Deployment test-deployment in namespace deployment-3455 with ReadyReplicas 1 STEP: deleting the Deployment Jan 12 23:55:18.291: INFO: observed event type MODIFIED Jan 12 23:55:18.291: INFO: observed event type MODIFIED Jan 12 23:55:18.291: INFO: observed event type MODIFIED Jan 12 23:55:18.291: INFO: observed event type MODIFIED Jan 12 23:55:18.291: INFO: observed event type MODIFIED Jan 12 23:55:18.291: INFO: observed event type MODIFIED Jan 12 23:55:18.291: INFO: observed event type MODIFIED Jan 12 23:55:18.291: INFO: observed event type MODIFIED Jan 12 23:55:18.291: INFO: observed event type MODIFIED Jan 12 23:55:18.292: INFO: observed event type MODIFIED Jan 12 23:55:18.292: INFO: observed event type MODIFIED [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 Jan 12 23:55:18.461: INFO: Log out all the ReplicaSets if there is no deployment created Jan 12 23:55:18.485: INFO: ReplicaSet "test-deployment-768947d6f5": &ReplicaSet{ObjectMeta:{test-deployment-768947d6f5 deployment-3455 264ebd2c-946d-41c7-bb2b-e722bfb64bf4 432817 3 2021-01-12 23:55:12 +0000 UTC map[pod-template-hash:768947d6f5 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:3] [{apps/v1 Deployment test-deployment 43bdbee5-383a-43bf-819b-0d28f631b1ae 0xc0042a9417 0xc0042a9418}] [] [{kube-controller-manager Update apps/v1 2021-01-12 23:55:17 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"43bdbee5-383a-43bf-819b-0d28f631b1ae\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 768947d6f5,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:768947d6f5 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0042a9480 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:3,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 12 23:55:18.489: INFO: pod: "test-deployment-768947d6f5-7b4bk": &Pod{ObjectMeta:{test-deployment-768947d6f5-7b4bk test-deployment-768947d6f5- deployment-3455 977b6cf2-d792-4924-ab7c-af463061363c 432794 0 2021-01-12 23:55:12 +0000 UTC map[pod-template-hash:768947d6f5 test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-768947d6f5 264ebd2c-946d-41c7-bb2b-e722bfb64bf4 0xc0042a9857 0xc0042a9858}] [] [{kube-controller-manager Update v1 2021-01-12 23:55:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"264ebd2c-946d-41c7-bb2b-e722bfb64bf4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-12 23:55:17 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.51\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rjfq2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rjfq2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rjfq2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:55:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:55:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-01-12 23:55:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:55:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.51,StartTime:2021-01-12 23:55:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-12 23:55:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://4c62c8767d9d146a68a362d804c5511fed76e78afb359c0755c1a84116166da1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.51,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 12 23:55:18.490: INFO: pod: "test-deployment-768947d6f5-pn2w2": &Pod{ObjectMeta:{test-deployment-768947d6f5-pn2w2 test-deployment-768947d6f5- deployment-3455 0e3b5cb2-b25c-4d09-8461-290b35db2049 432815 0 2021-01-12 23:55:17 +0000 UTC map[pod-template-hash:768947d6f5 test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-768947d6f5 264ebd2c-946d-41c7-bb2b-e722bfb64bf4 0xc0042a9a07 0xc0042a9a08}] [] [{kube-controller-manager Update v1 2021-01-12 23:55:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"264ebd2c-946d-41c7-bb2b-e722bfb64bf4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-12 23:55:18 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rjfq2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rjfq2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rjfq2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:55:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:55:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[test-deployment],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:55:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [test-deployment],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:55:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-01-12 23:55:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 12 23:55:18.490: INFO: ReplicaSet "test-deployment-7c65d4bcf9": &ReplicaSet{ObjectMeta:{test-deployment-7c65d4bcf9 deployment-3455 33f7db64-9c16-4679-8fe8-b80b1c2bb921 432814 4 2021-01-12 23:55:11 +0000 UTC map[pod-template-hash:7c65d4bcf9 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-deployment 43bdbee5-383a-43bf-819b-0d28f631b1ae 0xc0042a94e7 0xc0042a94e8}] [] [{kube-controller-manager Update apps/v1 2021-01-12 23:55:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"43bdbee5-383a-43bf-819b-0d28f631b1ae\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 7c65d4bcf9,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:7c65d4bcf9 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/pause:3.2 [/bin/sleep 100000] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0042a9568 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:4,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 12 23:55:18.493: INFO: ReplicaSet "test-deployment-8b6954bfb": &ReplicaSet{ObjectMeta:{test-deployment-8b6954bfb deployment-3455 8b592438-105f-4c92-bd15-b83523a45828 432732 2 2021-01-12 23:55:06 +0000 UTC map[pod-template-hash:8b6954bfb test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment 43bdbee5-383a-43bf-819b-0d28f631b1ae 0xc0042a95c7 0xc0042a95c8}] [] [{kube-controller-manager Update apps/v1 2021-01-12 23:55:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"43bdbee5-383a-43bf-819b-0d28f631b1ae\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 8b6954bfb,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:8b6954bfb test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0042a9630 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 12 23:55:18.496: INFO: pod: "test-deployment-8b6954bfb-t42wf": &Pod{ObjectMeta:{test-deployment-8b6954bfb-t42wf test-deployment-8b6954bfb- deployment-3455 b48b8a18-de81-4f62-a8ee-4a04b2137c92 432701 0 2021-01-12 23:55:06 +0000 UTC map[pod-template-hash:8b6954bfb test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-8b6954bfb 8b592438-105f-4c92-bd15-b83523a45828 0xc00333b5c7 0xc00333b5c8}] [] [{kube-controller-manager Update v1 2021-01-12 23:55:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8b592438-105f-4c92-bd15-b83523a45828\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-12 23:55:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.109\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rjfq2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rjfq2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rjfq2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,Read
inessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:55:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:55:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:55:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-12 23:55:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.1.109,StartTime:2021-01-12 23:55:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-12 23:55:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a,ContainerID:containerd://eaa513d0563eebb72d9a8de999a934212e6fbbdff561444de8b33b2c43fd4ed4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.109,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:55:18.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3455" for this suite. • [SLOW TEST:11.896 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":309,"completed":190,"skipped":3134,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:55:18.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:55:35.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9541" for this suite. • [SLOW TEST:16.508 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":309,"completed":191,"skipped":3151,"failed":0} S ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:55:35.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Jan 12 23:55:40.156: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:55:40.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6029" for this suite. 
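The ResourceQuota steps above exercise quota scopes: a quota created with the BestEffort scope only tracks pods that set no resource requests or limits, while one with the NotBestEffort scope ignores such pods, which is exactly the pairing the test verifies in both directions. A minimal Go sketch of the two quota objects, under the assumption of made-up names and a hard pod limit of 5 (neither is taken from this log):

// Sketch only: approximates the two quota objects the ResourceQuota test above
// creates; the names and the hard limit are assumptions, not values from the log.
package main

import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
        // Quota that only counts BestEffort pods (pods with no requests or limits).
        bestEffort := corev1.ResourceQuota{
                ObjectMeta: metav1.ObjectMeta{Name: "quota-besteffort"},
                Spec: corev1.ResourceQuotaSpec{
                        Hard:   corev1.ResourceList{corev1.ResourcePods: resource.MustParse("5")},
                        Scopes: []corev1.ResourceQuotaScope{corev1.ResourceQuotaScopeBestEffort},
                },
        }
        // Quota that ignores BestEffort pods and counts everything else.
        notBestEffort := corev1.ResourceQuota{
                ObjectMeta: metav1.ObjectMeta{Name: "quota-not-besteffort"},
                Spec: corev1.ResourceQuotaSpec{
                        Hard:   corev1.ResourceList{corev1.ResourcePods: resource.MustParse("5")},
                        Scopes: []corev1.ResourceQuotaScope{corev1.ResourceQuotaScopeNotBestEffort},
                },
        }
        for _, q := range []corev1.ResourceQuota{bestEffort, notBestEffort} {
                out, _ := json.MarshalIndent(q, "", "  ")
                fmt.Println(string(out))
        }
}

The JSON this prints could be piped to kubectl create -f -, the same stdin-manifest pattern the kubectl invocations later in this log use.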
• [SLOW TEST:5.316 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":309,"completed":192,"skipped":3152,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:55:40.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 12 23:55:40.465: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3f76a590-6e7c-4aed-8eae-cfdf102ebd1a" in namespace "downward-api-9613" to be "Succeeded or Failed" Jan 12 23:55:40.479: INFO: Pod "downwardapi-volume-3f76a590-6e7c-4aed-8eae-cfdf102ebd1a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.896854ms Jan 12 23:55:42.505: INFO: Pod "downwardapi-volume-3f76a590-6e7c-4aed-8eae-cfdf102ebd1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039943826s Jan 12 23:55:44.509: INFO: Pod "downwardapi-volume-3f76a590-6e7c-4aed-8eae-cfdf102ebd1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044518013s STEP: Saw pod success Jan 12 23:55:44.509: INFO: Pod "downwardapi-volume-3f76a590-6e7c-4aed-8eae-cfdf102ebd1a" satisfied condition "Succeeded or Failed" Jan 12 23:55:44.513: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-3f76a590-6e7c-4aed-8eae-cfdf102ebd1a container client-container: STEP: delete the pod Jan 12 23:55:44.642: INFO: Waiting for pod downwardapi-volume-3f76a590-6e7c-4aed-8eae-cfdf102ebd1a to disappear Jan 12 23:55:44.659: INFO: Pod downwardapi-volume-3f76a590-6e7c-4aed-8eae-cfdf102ebd1a no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:55:44.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9613" for this suite. 
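The Downward API volume test above ("should set mode on item file") builds a pod whose downwardAPI volume projects a pod field into a file with an explicit per-item mode, then reads the mode back from inside the container before the pod succeeds. A minimal sketch of such a pod, assuming a 0400 mode, a busybox image, and a stat command; the framework's actual generated spec is not reproduced here:

// Sketch only: a pod roughly in the shape the "set mode on item file" test uses;
// image, paths, command, and the 0400 mode are assumptions.
package main

import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
        mode := int32(0400) // per-item file mode the test asserts on

        pod := corev1.Pod{
                ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
                Spec: corev1.PodSpec{
                        RestartPolicy: corev1.RestartPolicyNever,
                        Containers: []corev1.Container{{
                                Name:    "client-container",
                                Image:   "docker.io/library/busybox:1.28",
                                Command: []string{"sh", "-c", "stat -c %a /etc/podinfo/podname"},
                                VolumeMounts: []corev1.VolumeMount{{
                                        Name:      "podinfo",
                                        MountPath: "/etc/podinfo",
                                }},
                        }},
                        Volumes: []corev1.Volume{{
                                Name: "podinfo",
                                VolumeSource: corev1.VolumeSource{
                                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                                                Items: []corev1.DownwardAPIVolumeFile{{
                                                        Path: "podname",
                                                        FieldRef: &corev1.ObjectFieldSelector{
                                                                APIVersion: "v1",
                                                                FieldPath:  "metadata.name",
                                                        },
                                                        Mode: &mode, // the file should report mode 400
                                                }},
                                        },
                                },
                        }},
                },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
}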
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":193,"skipped":3162,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:55:44.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Jan 12 23:55:44.838: INFO: Waiting up to 1m0s for all nodes to be ready Jan 12 23:56:44.866: INFO: Waiting for terminating namespaces to be deleted... [It] validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Create pods that use 2/3 of node resources. Jan 12 23:56:44.930: INFO: Created pod: pod0-sched-preemption-low-priority Jan 12 23:56:44.985: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a critical pod that use same resources as that of a lower priority pod Jan 12 23:57:59.050: FAIL: Unexpected error: <*errors.errorString | 0xc000214200>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred Full Stack Trace k8s.io/kubernetes/test/e2e/scheduling.runPausePod(0xc001379ce0, 0x4dbb667, 0xc, 0x4db845f, 0xb, 0x0, 0x0, 0x0, 0x0, 0xc006886200, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:932 +0x10b k8s.io/kubernetes/test/e2e/scheduling.glob..func5.4() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:274 +0xba5 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00374e780) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc00374e780) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc00374e780, 0x4fa8cc8) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 STEP: Collecting events from namespace "sched-preemption-6587". STEP: Found 12 events. Jan 12 23:57:59.105: INFO: At 2021-01-12 23:56:44 +0000 UTC - event for pod0-sched-preemption-low-priority: {default-scheduler } FailedScheduling: 0/3 nodes are available: 1 Insufficient scheduling.k8s.io/foo, 1 node(s) didn't match Pod's node affinity, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate. 
Jan 12 23:57:59.105: INFO: At 2021-01-12 23:56:44 +0000 UTC - event for pod1-sched-preemption-medium-priority: {default-scheduler } FailedScheduling: 0/3 nodes are available: 1 Insufficient scheduling.k8s.io/foo, 1 node(s) didn't match Pod's node affinity, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate. Jan 12 23:57:59.105: INFO: At 2021-01-12 23:56:54 +0000 UTC - event for pod0-sched-preemption-low-priority: {default-scheduler } Scheduled: Successfully assigned sched-preemption-6587/pod0-sched-preemption-low-priority to leguer-worker Jan 12 23:57:59.105: INFO: At 2021-01-12 23:56:54 +0000 UTC - event for pod1-sched-preemption-medium-priority: {default-scheduler } Scheduled: Successfully assigned sched-preemption-6587/pod1-sched-preemption-medium-priority to leguer-worker2 Jan 12 23:57:59.105: INFO: At 2021-01-12 23:56:56 +0000 UTC - event for pod0-sched-preemption-low-priority: {kubelet leguer-worker} Pulled: Container image "k8s.gcr.io/pause:3.2" already present on machine Jan 12 23:57:59.105: INFO: At 2021-01-12 23:56:56 +0000 UTC - event for pod1-sched-preemption-medium-priority: {kubelet leguer-worker2} Pulled: Container image "k8s.gcr.io/pause:3.2" already present on machine Jan 12 23:57:59.105: INFO: At 2021-01-12 23:56:57 +0000 UTC - event for pod1-sched-preemption-medium-priority: {kubelet leguer-worker2} Created: Created container pod1-sched-preemption-medium-priority Jan 12 23:57:59.105: INFO: At 2021-01-12 23:56:57 +0000 UTC - event for pod1-sched-preemption-medium-priority: {kubelet leguer-worker2} Started: Started container pod1-sched-preemption-medium-priority Jan 12 23:57:59.105: INFO: At 2021-01-12 23:56:58 +0000 UTC - event for pod0-sched-preemption-low-priority: {kubelet leguer-worker} Created: Created container pod0-sched-preemption-low-priority Jan 12 23:57:59.105: INFO: At 2021-01-12 23:56:58 +0000 UTC - event for pod0-sched-preemption-low-priority: {kubelet leguer-worker} Started: Started container pod0-sched-preemption-low-priority Jan 12 23:57:59.105: INFO: At 2021-01-12 23:56:59 +0000 UTC - event for pod0-sched-preemption-low-priority: {default-scheduler } Preempted: Preempted by kube-system/critical-pod on node leguer-worker Jan 12 23:57:59.105: INFO: At 2021-01-12 23:57:00 +0000 UTC - event for pod0-sched-preemption-low-priority: {kubelet leguer-worker} Killing: Stopping container pod0-sched-preemption-low-priority Jan 12 23:57:59.115: INFO: POD NODE PHASE GRACE CONDITIONS Jan 12 23:57:59.115: INFO: pod0-sched-preemption-low-priority leguer-worker Running 1s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-12 23:56:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-12 23:57:02 +0000 UTC ContainersNotReady containers with unready status: [pod0-sched-preemption-low-priority]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-12 23:57:02 +0000 UTC ContainersNotReady containers with unready status: [pod0-sched-preemption-low-priority]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-12 23:56:54 +0000 UTC }] Jan 12 23:57:59.115: INFO: pod1-sched-preemption-medium-priority leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-12 23:56:54 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-01-12 23:56:58 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-01-12 23:56:58 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-12 23:56:54 +0000 UTC }] Jan 12 23:57:59.115: INFO: Jan 12 23:57:59.115: INFO: 
pod0-sched-preemption-low-priority[sched-preemption-6587].container[pod0-sched-preemption-low-priority]=The container could not be located when the pod was deleted. The container used to be Running Jan 12 23:57:59.121: INFO: Logging node info for node leguer-control-plane Jan 12 23:57:59.124: INFO: Node Info: &Node{ObjectMeta:{leguer-control-plane d4252648-b75f-4d20-9e17-617b71463d1d 433247 0 2021-01-10 17:37:43 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:leguer-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-01-10 17:37:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-01-10 17:37:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-01-10 17:38:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/leguer/leguer-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-01-12 23:57:14 +0000 UTC,LastTransitionTime:2021-01-10 17:37:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-01-12 23:57:14 +0000 UTC,LastTransitionTime:2021-01-10 17:37:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-01-12 23:57:14 +0000 UTC,LastTransitionTime:2021-01-10 17:37:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-01-12 23:57:14 +0000 UTC,LastTransitionTime:2021-01-10 17:38:11 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.17,},NodeAddress{Type:Hostname,Address:leguer-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5f1cb3b1931a44e6bb33804f4b6ca7e5,SystemUUID:c2287e83-2c9f-458f-8294-12965d8d5e30,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu Groovy Gorilla (development branch),ContainerRuntimeVersion:containerd://1.4.0,KubeletVersion:v1.20.0,KubeProxyVersion:v1.20.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.20.0],SizeBytes:136866161,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20200725-4d6bea59],SizeBytes:118720874,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.20.0],SizeBytes:95511851,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.20.0],SizeBytes:88147263,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.20.0],SizeBytes:66088749,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-base:v2.1.0],SizeBytes:53876619,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.14],SizeBytes:41982521,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.3],SizeBytes:685708,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 12 23:57:59.125: INFO: Logging kubelet events for node leguer-control-plane Jan 12 23:57:59.128: INFO: Logging pods the kubelet thinks is on node leguer-control-plane Jan 12 23:57:59.166: INFO: kube-controller-manager-leguer-control-plane started at 2021-01-10 17:37:52 +0000 UTC (0+1 container statuses recorded) Jan 12 23:57:59.166: INFO: Container kube-controller-manager ready: true, restart count 0 Jan 12 23:57:59.166: INFO: coredns-74ff55c5b-flmf7 started at 2021-01-10 17:38:14 +0000 UTC (0+1 container statuses recorded) Jan 12 23:57:59.166: INFO: Container coredns ready: true, restart count 0 Jan 12 23:57:59.166: INFO: local-path-provisioner-78776bfc44-45fhs started at 2021-01-10 17:38:21 +0000 UTC 
(0+1 container statuses recorded) Jan 12 23:57:59.166: INFO: Container local-path-provisioner ready: true, restart count 0 Jan 12 23:57:59.166: INFO: kube-apiserver-leguer-control-plane started at 2021-01-10 17:37:52 +0000 UTC (0+1 container statuses recorded) Jan 12 23:57:59.166: INFO: Container kube-apiserver ready: true, restart count 0 Jan 12 23:57:59.166: INFO: kube-scheduler-leguer-control-plane started at 2021-01-10 17:37:52 +0000 UTC (0+1 container statuses recorded) Jan 12 23:57:59.166: INFO: Container kube-scheduler ready: true, restart count 0 Jan 12 23:57:59.166: INFO: kindnet-rjz52 started at 2021-01-10 17:37:59 +0000 UTC (0+1 container statuses recorded) Jan 12 23:57:59.166: INFO: Container kindnet-cni ready: true, restart count 0 Jan 12 23:57:59.166: INFO: kube-proxy-chqjl started at 2021-01-10 17:37:59 +0000 UTC (0+1 container statuses recorded) Jan 12 23:57:59.166: INFO: Container kube-proxy ready: true, restart count 0 Jan 12 23:57:59.166: INFO: coredns-74ff55c5b-whxn7 started at 2021-01-10 17:38:14 +0000 UTC (0+1 container statuses recorded) Jan 12 23:57:59.166: INFO: Container coredns ready: true, restart count 0 Jan 12 23:57:59.166: INFO: etcd-leguer-control-plane started at 2021-01-10 17:37:52 +0000 UTC (0+1 container statuses recorded) Jan 12 23:57:59.166: INFO: Container etcd ready: true, restart count 0 W0112 23:57:59.172214 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Jan 12 23:57:59.252: INFO: Latency metrics for node leguer-control-plane Jan 12 23:57:59.252: INFO: Logging node info for node leguer-worker Jan 12 23:57:59.255: INFO: Node Info: &Node{ObjectMeta:{leguer-worker be2127ed-84bb-4f09-b0c3-a4d52ee42a88 433186 0 2021-01-10 17:38:10 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:leguer-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-01-10 17:38:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubeadm Update v1 2021-01-10 17:38:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {e2e.test Update v1 2021-01-12 23:56:44 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{},"f:scheduling.k8s.io/foo":{}}}}} {kubelet Update v1 2021-01-12 23:56:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/leguer/leguer-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-01-12 23:56:54 +0000 UTC,LastTransitionTime:2021-01-10 17:38:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-01-12 23:56:54 +0000 UTC,LastTransitionTime:2021-01-10 17:38:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-01-12 23:56:54 +0000 UTC,LastTransitionTime:2021-01-10 17:38:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-01-12 23:56:54 +0000 UTC,LastTransitionTime:2021-01-10 17:38:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.13,},NodeAddress{Type:Hostname,Address:leguer-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f78fc8f9ff5e4436aba7096d346ace73,SystemUUID:41d601d8-c86d-45a1-bcd0-9f7bc235c1a9,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu Groovy Gorilla (development branch),ContainerRuntimeVersion:containerd://1.4.0,KubeletVersion:v1.20.0,KubeProxyVersion:v1.20.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.20.0],SizeBytes:136866161,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20200725-4d6bea59],SizeBytes:118720874,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.20.0],SizeBytes:95511851,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.20.0],SizeBytes:88147263,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.20.0],SizeBytes:66088749,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-base:v2.1.0],SizeBytes:53876619,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:5850c0d121efff662edf659483e8f63d1e63ffffb6f22a4dd3d07d77bde1bff7 docker.io/bitnami/kubectl:latest],SizeBytes:48439357,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a 
k8s.gcr.io/e2e-test-images/agnhost:2.21],SizeBytes:46251468,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.14],SizeBytes:41982521,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:latest docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.3],SizeBytes:685708,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 12 23:57:59.256: INFO: Logging kubelet events for node leguer-worker Jan 12 23:57:59.259: INFO: Logging pods the kubelet thinks is on node leguer-worker Jan 12 23:57:59.282: INFO: rally-a8f48c6d-3kmika18-pllzg started at 2021-01-10 20:04:23 +0000 UTC 
(0+1 container statuses recorded) Jan 12 23:57:59.282: INFO: Container rally-a8f48c6d-3kmika18 ready: true, restart count 0 Jan 12 23:57:59.282: INFO: rally-a8f48c6d-4cyi45kq-j5tzz started at 2021-01-10 20:04:23 +0000 UTC (0+1 container statuses recorded) Jan 12 23:57:59.282: INFO: Container rally-a8f48c6d-4cyi45kq ready: true, restart count 0 Jan 12 23:57:59.282: INFO: kube-proxy-bmbcs started at 2021-01-10 17:38:10 +0000 UTC (0+1 container statuses recorded) Jan 12 23:57:59.282: INFO: Container kube-proxy ready: true, restart count 0 Jan 12 23:57:59.282: INFO: rally-a8f48c6d-1y3amfc0-lp8st started at 2021-01-10 20:04:32 +0000 UTC (0+1 container statuses recorded) Jan 12 23:57:59.282: INFO: Container rally-a8f48c6d-1y3amfc0 ready: true, restart count 0 Jan 12 23:57:59.282: INFO: rally-a8f48c6d-f3hls6a3-57dwc started at 2021-01-10 20:04:32 +0000 UTC (0+1 container statuses recorded) Jan 12 23:57:59.282: INFO: Container rally-a8f48c6d-f3hls6a3 ready: true, restart count 0 Jan 12 23:57:59.282: INFO: rally-a8f48c6d-3kmika18-pdtzv started at 2021-01-10 20:04:23 +0000 UTC (0+1 container statuses recorded) Jan 12 23:57:59.282: INFO: Container rally-a8f48c6d-3kmika18 ready: true, restart count 0 Jan 12 23:57:59.282: INFO: chaos-controller-manager-69c479c674-s796v started at 2021-01-10 20:58:24 +0000 UTC (0+1 container statuses recorded) Jan 12 23:57:59.282: INFO: Container chaos-mesh ready: true, restart count 0 Jan 12 23:57:59.282: INFO: chaos-daemon-lv692 started at 2021-01-10 20:58:25 +0000 UTC (0+1 container statuses recorded) Jan 12 23:57:59.282: INFO: Container chaos-daemon ready: true, restart count 0 Jan 12 23:57:59.282: INFO: rally-a8f48c6d-9pqmjehi-9zwjj started at 2021-01-10 20:04:23 +0000 UTC (0+1 container statuses recorded) Jan 12 23:57:59.282: INFO: Container rally-a8f48c6d-9pqmjehi ready: true, restart count 0 Jan 12 23:57:59.282: INFO: kindnet-psm25 started at 2021-01-10 17:38:10 +0000 UTC (0+1 container statuses recorded) Jan 12 23:57:59.282: INFO: Container kindnet-cni ready: true, restart count 0 Jan 12 23:57:59.282: INFO: pod0-sched-preemption-low-priority started at 2021-01-12 23:56:54 +0000 UTC (0+1 container statuses recorded) Jan 12 23:57:59.282: INFO: Container pod0-sched-preemption-low-priority ready: false, restart count 0 W0112 23:57:59.288786 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Jan 12 23:57:59.379: INFO: Latency metrics for node leguer-worker Jan 12 23:57:59.379: INFO: Logging node info for node leguer-worker2 Jan 12 23:57:59.383: INFO: Node Info: &Node{ObjectMeta:{leguer-worker2 84515ca3-a509-4991-bd99-5ee03e95ab68 433185 0 2021-01-10 17:38:09 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:leguer-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-01-10 17:38:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubeadm Update v1 2021-01-10 17:38:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {e2e.test Update v1 2021-01-12 23:56:44 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{},"f:scheduling.k8s.io/foo":{}}}}} {kubelet Update v1 2021-01-12 23:56:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/leguer/leguer-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: 
{{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-01-12 23:56:54 +0000 UTC,LastTransitionTime:2021-01-10 17:38:09 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-01-12 23:56:54 +0000 UTC,LastTransitionTime:2021-01-10 17:38:09 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-01-12 23:56:54 +0000 UTC,LastTransitionTime:2021-01-10 17:38:09 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-01-12 23:56:54 +0000 UTC,LastTransitionTime:2021-01-10 17:38:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.12,},NodeAddress{Type:Hostname,Address:leguer-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5410119f3b524d4ea2fb70f2afd71d27,SystemUUID:af4b61da-c8b7-48ba-a1bc-2644ee3b3a57,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu Groovy Gorilla (development branch),ContainerRuntimeVersion:containerd://1.4.0,KubeletVersion:v1.20.0,KubeProxyVersion:v1.20.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.20.0],SizeBytes:136866161,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 
docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20200725-4d6bea59],SizeBytes:118720874,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.20.0],SizeBytes:95511851,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.20.0],SizeBytes:88147263,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.20.0],SizeBytes:66088749,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-base:v2.1.0],SizeBytes:53876619,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:5850c0d121efff662edf659483e8f63d1e63ffffb6f22a4dd3d07d77bde1bff7 docker.io/bitnami/kubectl:latest],SizeBytes:48439357,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a k8s.gcr.io/e2e-test-images/agnhost:2.21],SizeBytes:46251468,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.14],SizeBytes:41982521,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 
docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.3],SizeBytes:685708,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 12 23:57:59.383: INFO: Logging kubelet events for node leguer-worker2 Jan 12 23:57:59.386: INFO: Logging pods the kubelet thinks is on node leguer-worker2 Jan 12 23:57:59.405: INFO: rally-a8f48c6d-vnukxqu0-llj24 started at 2021-01-10 20:04:23 +0000 UTC (0+1 container statuses recorded) Jan 12 23:57:59.405: INFO: Container rally-a8f48c6d-vnukxqu0 ready: true, restart count 0 Jan 12 23:57:59.405: INFO: rally-a8f48c6d-1y3amfc0-hh9qk started at 2021-01-10 20:04:32 +0000 UTC (0+1 container statuses recorded) Jan 12 23:57:59.405: INFO: Container rally-a8f48c6d-1y3amfc0 ready: true, restart count 0 Jan 12 23:57:59.405: INFO: kube-proxy-29gxg started at 2021-01-10 17:38:09 +0000 UTC (0+1 container statuses recorded) Jan 12 23:57:59.405: INFO: Container kube-proxy ready: true, restart count 0 Jan 12 23:57:59.405: INFO: rally-a8f48c6d-9pqmjehi-85slb started at 2021-01-10 20:04:23 +0000 UTC (0+1 container statuses recorded) Jan 12 23:57:59.405: INFO: Container rally-a8f48c6d-9pqmjehi ready: true, restart count 0 Jan 12 23:57:59.405: INFO: kindnet-8wggd started at 2021-01-10 17:38:10 +0000 UTC (0+1 container statuses recorded) Jan 12 23:57:59.405: INFO: Container kindnet-cni ready: true, restart count 0 Jan 12 23:57:59.405: INFO: rally-a8f48c6d-vnukxqu0-v85kr started at 2021-01-10 20:04:23 +0000 UTC (0+1 container statuses recorded) Jan 12 23:57:59.405: INFO: Container rally-a8f48c6d-vnukxqu0 ready: true, restart count 0 Jan 12 23:57:59.405: INFO: chaos-daemon-ffkg7 started at 2021-01-10 20:58:25 +0000 UTC (0+1 container statuses recorded) Jan 12 23:57:59.406: INFO: Container chaos-daemon ready: true, restart count 0 Jan 12 23:57:59.406: INFO: rally-a8f48c6d-f3hls6a3-dwt8n started at 2021-01-10 20:04:32 +0000 UTC (0+1 container statuses recorded) Jan 12 23:57:59.406: INFO: Container rally-a8f48c6d-f3hls6a3 ready: true, restart count 0 Jan 12 23:57:59.406: INFO: rally-a8f48c6d-4cyi45kq-knr4r started at 2021-01-10 20:04:23 +0000 UTC (0+1 container statuses recorded) Jan 12 23:57:59.406: INFO: Container rally-a8f48c6d-4cyi45kq ready: true, restart count 0 Jan 12 23:57:59.406: INFO: pod1-sched-preemption-medium-priority started at 2021-01-12 23:56:54 +0000 UTC (0+1 container statuses recorded) Jan 12 23:57:59.406: INFO: Container pod1-sched-preemption-medium-priority ready: true, restart count 0 W0112 23:57:59.414910 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Jan 12 23:57:59.494: INFO: Latency metrics for node leguer-worker2 Jan 12 23:57:59.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-6587" for this suite. 
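The preemption steps above create low- and medium-priority pause pods that consume most of each worker's extended scheduling.k8s.io/foo capacity, then run a higher-priority "critical" pod that can only fit by evicting a lower-priority one; the Preempted/Killing events for pod0 show that eviction did occur even though the test as a whole timed out. A rough sketch of the kind of objects involved, with the PriorityClass value, names, and the foo request size being assumptions:

// Sketch only: a low-priority class and a pause pod requesting the extended
// resource; values and names are assumptions, not taken from the e2e framework.
package main

import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        schedulingv1 "k8s.io/api/scheduling/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
        // A low priority class; the test's critical pod outranks it and preempts its pod.
        low := schedulingv1.PriorityClass{
                ObjectMeta: metav1.ObjectMeta{Name: "sched-preemption-low-priority"},
                Value:      1,
        }
        // A pause pod occupying most of the node's scheduling.k8s.io/foo capacity,
        // so a later, higher-priority pod can only be scheduled by preempting it.
        pod := corev1.Pod{
                ObjectMeta: metav1.ObjectMeta{Name: "pod0-sched-preemption-low-priority"},
                Spec: corev1.PodSpec{
                        PriorityClassName: low.Name,
                        Containers: []corev1.Container{{
                                Name:  "pause",
                                Image: "k8s.gcr.io/pause:3.2",
                                Resources: corev1.ResourceRequirements{
                                        Requests: corev1.ResourceList{"scheduling.k8s.io/foo": resource.MustParse("2")},
                                        Limits:   corev1.ResourceList{"scheduling.k8s.io/foo": resource.MustParse("2")},
                                },
                        }},
                },
        }
        for _, obj := range []interface{}{low, pod} {
                out, _ := json.MarshalIndent(obj, "", "  ")
                fmt.Println(string(out))
        }
}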
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • Failure [134.920 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates lower priority pod preemption by critical pod [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 12 23:57:59.050: Unexpected error: <*errors.errorString | 0xc000214200>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:932 ------------------------------ {"msg":"FAILED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":309,"completed":193,"skipped":3176,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:57:59.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 12 23:57:59.671: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Jan 12 23:58:03.236: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3045 --namespace=crd-publish-openapi-3045 create -f -' Jan 12 23:58:09.789: INFO: stderr: "" Jan 12 23:58:09.789: INFO: stdout: "e2e-test-crd-publish-openapi-3899-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jan 12 23:58:09.789: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3045 --namespace=crd-publish-openapi-3045 delete e2e-test-crd-publish-openapi-3899-crds test-foo' Jan 12 23:58:09.900: INFO: stderr: "" Jan 12 23:58:09.900: INFO: stdout: "e2e-test-crd-publish-openapi-3899-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Jan 12 23:58:09.900: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3045 --namespace=crd-publish-openapi-3045 apply -f -' Jan 12 23:58:10.194: INFO: stderr: "" Jan 12 23:58:10.194: INFO: stdout: "e2e-test-crd-publish-openapi-3899-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jan 12 23:58:10.194: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 
--kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3045 --namespace=crd-publish-openapi-3045 delete e2e-test-crd-publish-openapi-3899-crds test-foo' Jan 12 23:58:10.307: INFO: stderr: "" Jan 12 23:58:10.307: INFO: stdout: "e2e-test-crd-publish-openapi-3899-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Jan 12 23:58:10.307: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3045 --namespace=crd-publish-openapi-3045 create -f -' Jan 12 23:58:10.682: INFO: rc: 1 Jan 12 23:58:10.683: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3045 --namespace=crd-publish-openapi-3045 apply -f -' Jan 12 23:58:10.978: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Jan 12 23:58:10.978: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3045 --namespace=crd-publish-openapi-3045 create -f -' Jan 12 23:58:11.233: INFO: rc: 1 Jan 12 23:58:11.233: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3045 --namespace=crd-publish-openapi-3045 apply -f -' Jan 12 23:58:11.500: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Jan 12 23:58:11.500: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3045 explain e2e-test-crd-publish-openapi-3899-crds' Jan 12 23:58:11.801: INFO: stderr: "" Jan 12 23:58:11.801: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3899-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Jan 12 23:58:11.801: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3045 explain e2e-test-crd-publish-openapi-3899-crds.metadata' Jan 12 23:58:12.112: INFO: stderr: "" Jan 12 23:58:12.112: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3899-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. 
If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. 
There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Jan 12 23:58:12.112: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3045 explain e2e-test-crd-publish-openapi-3899-crds.spec' Jan 12 23:58:12.380: INFO: stderr: "" Jan 12 23:58:12.380: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3899-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Jan 12 23:58:12.380: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3045 explain e2e-test-crd-publish-openapi-3899-crds.spec.bars' Jan 12 23:58:12.694: INFO: stderr: "" Jan 12 23:58:12.694: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3899-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Jan 12 23:58:12.694: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3045 explain e2e-test-crd-publish-openapi-3899-crds.spec.bars2' Jan 12 23:58:12.976: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 12 23:58:16.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3045" for this suite. 
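The CRD-publish case above registers a CRD with an OpenAPI v3 validation schema and then drives kubectl create/apply/explain against it. A structurally similar CRD can be written by hand; the group, kind, and field names below are illustrative stand-ins for the generated e2e-test-crd-publish-openapi-3899-crd, and the published schema can take a few seconds to show up in kubectl explain.

# Sketch of a comparable CRD with a validation schema (all names are assumptions).
cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.crd-demo.example.com
spec:
  group: crd-demo.example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              bars:
                type: array
                items:
                  type: object
                  required: ["name"]
                  properties:
                    name:
                      type: string
                    age:
                      type: string
EOF

# Client-side validation and explain then behave as in the log:
kubectl explain foos.spec        # describes the 'bars' field
kubectl explain foos.spec.bars   # lists name/age; unknown paths return an error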
• [SLOW TEST:16.979 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":309,"completed":194,"skipped":3190,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 12 23:58:16.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Jan 12 23:58:17.367: INFO: Pod name wrapped-volume-race-e0538222-be8e-4caa-913b-4dc9ea9ccfa1: Found 0 pods out of 5 Jan 12 23:58:22.376: INFO: Pod name wrapped-volume-race-e0538222-be8e-4caa-913b-4dc9ea9ccfa1: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-e0538222-be8e-4caa-913b-4dc9ea9ccfa1 in namespace emptydir-wrapper-1133, will wait for the garbage collector to delete the pods Jan 12 23:58:40.463: INFO: Deleting ReplicationController wrapped-volume-race-e0538222-be8e-4caa-913b-4dc9ea9ccfa1 took: 7.427706ms Jan 12 23:58:41.063: INFO: Terminating ReplicationController wrapped-volume-race-e0538222-be8e-4caa-913b-4dc9ea9ccfa1 pods took: 600.295904ms STEP: Creating RC which spawns configmap-volume pods Jan 12 23:59:50.220: INFO: Pod name wrapped-volume-race-b3c7670b-7fb6-499d-9a2f-23848b6ade7d: Found 0 pods out of 5 Jan 12 23:59:55.229: INFO: Pod name wrapped-volume-race-b3c7670b-7fb6-499d-9a2f-23848b6ade7d: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-b3c7670b-7fb6-499d-9a2f-23848b6ade7d in namespace emptydir-wrapper-1133, will wait for the garbage collector to delete the pods Jan 13 00:00:11.312: INFO: Deleting ReplicationController wrapped-volume-race-b3c7670b-7fb6-499d-9a2f-23848b6ade7d took: 8.008568ms Jan 13 00:00:11.913: INFO: Terminating ReplicationController wrapped-volume-race-b3c7670b-7fb6-499d-9a2f-23848b6ade7d pods took: 600.36842ms STEP: Creating RC which spawns configmap-volume pods Jan 13 00:01:00.443: INFO: Pod name wrapped-volume-race-d5067067-1d7a-4beb-9158-9d9a6ca1028e: Found 0 pods out of 5 Jan 13 00:01:05.452: INFO: Pod name wrapped-volume-race-d5067067-1d7a-4beb-9158-9d9a6ca1028e: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting 
ReplicationController wrapped-volume-race-d5067067-1d7a-4beb-9158-9d9a6ca1028e in namespace emptydir-wrapper-1133, will wait for the garbage collector to delete the pods Jan 13 00:01:21.537: INFO: Deleting ReplicationController wrapped-volume-race-d5067067-1d7a-4beb-9158-9d9a6ca1028e took: 8.079774ms Jan 13 00:01:22.137: INFO: Terminating ReplicationController wrapped-volume-race-d5067067-1d7a-4beb-9158-9d9a6ca1028e pods took: 600.179379ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:01:50.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-1133" for this suite. • [SLOW TEST:214.200 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":309,"completed":195,"skipped":3212,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:01:50.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should find a service from listing all namespaces [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: fetching services [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:01:50.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1282" for this suite. 
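The Services case above only creates a service and confirms it appears when listing services across every namespace. The equivalent by hand is short; the namespace and service names here are illustrative.

# Create a throwaway service, then list across all namespaces as the test does.
kubectl create namespace svc-list-demo
kubectl create service clusterip demo-svc --tcp=80:80 -n svc-list-demo
kubectl get services --all-namespaces | grep demo-svc
kubectl delete namespace svc-list-demo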
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":309,"completed":196,"skipped":3223,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:01:50.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Jan 13 00:01:50.927: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Jan 13 00:02:04.381: INFO: >>> kubeConfig: /root/.kube/config Jan 13 00:02:07.427: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:02:19.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-568" for this suite. 
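For the multi-version publish case, a single CRD serving two versions of the same group is enough to reproduce the shape being tested. In apiextensions.k8s.io/v1 each version carries its own schema and exactly one version is the storage version; the names below are illustrative, not the generated ones.

# Sketch of one CRD serving v1 and v2 of the same group (names are assumptions).
cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: multivers.crd-demo.example.com
spec:
  group: crd-demo.example.com
  scope: Namespaced
  names:
    plural: multivers
    singular: multiver
    kind: MultiVer
  versions:
  - name: v1
    served: true
    storage: true      # exactly one version may be the storage version
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec: {type: object, x-kubernetes-preserve-unknown-fields: true}
  - name: v2
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec: {type: object, x-kubernetes-preserve-unknown-fields: true}
EOF

# Both versions are then published in the aggregated OpenAPI document:
kubectl get --raw /openapi/v2 | grep -o 'crd-demo.example.com/v[12]' | sort -u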
• [SLOW TEST:28.837 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":309,"completed":197,"skipped":3260,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:02:19.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 00:02:19.777: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:02:20.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4136" for this suite. 
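The defaulting case relies on default markers in a v1 CRD schema being applied both on admission and when objects are read back from storage. A minimal sketch, with illustrative group, kind, and field names:

# CRD whose schema defaults spec.replicas to 3 (all names are assumptions).
cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.crd-demo.example.com
spec:
  group: crd-demo.example.com
  scope: Namespaced
  names: {plural: widgets, singular: widget, kind: Widget}
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              replicas:
                type: integer
                default: 3    # applied when the field is omitted
EOF
kubectl wait --for=condition=Established crd/widgets.crd-demo.example.com --timeout=60s

# Create an object without spec.replicas and read the defaulted value back.
kubectl apply -f - <<'EOF'
apiVersion: crd-demo.example.com/v1
kind: Widget
metadata:
  name: defaulted-widget
spec: {}
EOF
kubectl get widget defaulted-widget -o jsonpath='{.spec.replicas}'   # prints 3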
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":309,"completed":198,"skipped":3283,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:02:20.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating server pod server in namespace prestop-1629 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-1629 STEP: Deleting pre-stop pod Jan 13 00:02:36.122: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:02:36.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-1629" for this suite. 
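The PreStop case deletes a pod and checks that the server saw the hook fire before the container was killed; the e2e version records the hook through a second "tester" pod over HTTP. A stand-alone pod with an exec preStop hook shows the same mechanism; the names and the hook command below are illustrative.

# Pod with a preStop hook (names and hook command are assumptions).
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: app
    image: busybox:1.28
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          # Runs before SIGTERM is delivered; must finish within the grace period.
          command: ["sh", "-c", "echo prestop ran > /tmp/prestop; sleep 5"]
EOF
kubectl delete pod prestop-demo   # triggers the preStop hook, then graceful shutdown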
• [SLOW TEST:15.229 seconds] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":309,"completed":199,"skipped":3299,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:02:36.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 13 00:02:36.687: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9861a4e5-e76a-41c6-abb0-341e7d6d2e10" in namespace "downward-api-7974" to be "Succeeded or Failed" Jan 13 00:02:36.733: INFO: Pod "downwardapi-volume-9861a4e5-e76a-41c6-abb0-341e7d6d2e10": Phase="Pending", Reason="", readiness=false. Elapsed: 46.176948ms Jan 13 00:02:38.743: INFO: Pod "downwardapi-volume-9861a4e5-e76a-41c6-abb0-341e7d6d2e10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056662284s Jan 13 00:02:40.747: INFO: Pod "downwardapi-volume-9861a4e5-e76a-41c6-abb0-341e7d6d2e10": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060181683s STEP: Saw pod success Jan 13 00:02:40.747: INFO: Pod "downwardapi-volume-9861a4e5-e76a-41c6-abb0-341e7d6d2e10" satisfied condition "Succeeded or Failed" Jan 13 00:02:40.750: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-9861a4e5-e76a-41c6-abb0-341e7d6d2e10 container client-container: STEP: delete the pod Jan 13 00:02:40.954: INFO: Waiting for pod downwardapi-volume-9861a4e5-e76a-41c6-abb0-341e7d6d2e10 to disappear Jan 13 00:02:40.959: INFO: Pod downwardapi-volume-9861a4e5-e76a-41c6-abb0-341e7d6d2e10 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:02:40.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7974" for this suite. 
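The Downward API case above deliberately omits the container memory limit and expects the downwardAPI volume to report the node's allocatable memory instead. A minimal sketch of that pod shape, with illustrative names:

# Downward API volume projecting limits.memory with no limit set (names are assumptions).
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-memlimit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.28
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # No resources.limits.memory on purpose: the projected value falls back to
    # the node's allocatable memory, which is what the test asserts.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF
kubectl logs downward-memlimit-demo   # prints node-allocatable bytes when no limit is set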
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":309,"completed":200,"skipped":3304,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:02:40.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod pod-subpath-test-projected-x4ht STEP: Creating a pod to test atomic-volume-subpath Jan 13 00:02:41.151: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-x4ht" in namespace "subpath-5185" to be "Succeeded or Failed" Jan 13 00:02:41.167: INFO: Pod "pod-subpath-test-projected-x4ht": Phase="Pending", Reason="", readiness=false. Elapsed: 15.548056ms Jan 13 00:02:43.170: INFO: Pod "pod-subpath-test-projected-x4ht": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018730814s Jan 13 00:02:45.174: INFO: Pod "pod-subpath-test-projected-x4ht": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022588563s Jan 13 00:02:47.192: INFO: Pod "pod-subpath-test-projected-x4ht": Phase="Running", Reason="", readiness=true. Elapsed: 6.040995863s Jan 13 00:02:49.223: INFO: Pod "pod-subpath-test-projected-x4ht": Phase="Running", Reason="", readiness=true. Elapsed: 8.071971858s Jan 13 00:02:51.228: INFO: Pod "pod-subpath-test-projected-x4ht": Phase="Running", Reason="", readiness=true. Elapsed: 10.076457569s Jan 13 00:02:53.235: INFO: Pod "pod-subpath-test-projected-x4ht": Phase="Running", Reason="", readiness=true. Elapsed: 12.08333254s Jan 13 00:02:55.271: INFO: Pod "pod-subpath-test-projected-x4ht": Phase="Running", Reason="", readiness=true. Elapsed: 14.119858604s Jan 13 00:02:57.275: INFO: Pod "pod-subpath-test-projected-x4ht": Phase="Running", Reason="", readiness=true. Elapsed: 16.12403103s Jan 13 00:02:59.280: INFO: Pod "pod-subpath-test-projected-x4ht": Phase="Running", Reason="", readiness=true. Elapsed: 18.128883386s Jan 13 00:03:01.285: INFO: Pod "pod-subpath-test-projected-x4ht": Phase="Running", Reason="", readiness=true. Elapsed: 20.134097606s Jan 13 00:03:03.307: INFO: Pod "pod-subpath-test-projected-x4ht": Phase="Running", Reason="", readiness=true. Elapsed: 22.155608197s Jan 13 00:03:05.312: INFO: Pod "pod-subpath-test-projected-x4ht": Phase="Running", Reason="", readiness=true. Elapsed: 24.160437086s Jan 13 00:03:07.317: INFO: Pod "pod-subpath-test-projected-x4ht": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.165305976s STEP: Saw pod success Jan 13 00:03:07.317: INFO: Pod "pod-subpath-test-projected-x4ht" satisfied condition "Succeeded or Failed" Jan 13 00:03:07.320: INFO: Trying to get logs from node leguer-worker pod pod-subpath-test-projected-x4ht container test-container-subpath-projected-x4ht: STEP: delete the pod Jan 13 00:03:07.368: INFO: Waiting for pod pod-subpath-test-projected-x4ht to disappear Jan 13 00:03:07.391: INFO: Pod pod-subpath-test-projected-x4ht no longer exists STEP: Deleting pod pod-subpath-test-projected-x4ht Jan 13 00:03:07.391: INFO: Deleting pod "pod-subpath-test-projected-x4ht" in namespace "subpath-5185" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:03:07.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5185" for this suite. • [SLOW TEST:26.453 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":309,"completed":201,"skipped":3314,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:03:07.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: getting the auto-created API token STEP: reading a file in the container Jan 13 00:03:12.036: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7870 pod-service-account-2368a2d8-55aa-4f4e-b60f-cb7501809edd -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Jan 13 00:03:12.253: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7870 pod-service-account-2368a2d8-55aa-4f4e-b60f-cb7501809edd -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Jan 13 00:03:12.483: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7870 pod-service-account-2368a2d8-55aa-4f4e-b60f-cb7501809edd -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:03:12.713: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7870" for this suite. • [SLOW TEST:5.321 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":309,"completed":202,"skipped":3319,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:03:12.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:03:30.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3342" for this suite. 
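"Locally restarted" in the Job case means restartPolicy: OnFailure, so the kubelet restarts the failing container in place rather than the Job controller replacing the pod. A deterministic sketch of a task that fails on its first attempt and succeeds after a local restart, using an emptyDir marker file; the Job name, counts, and command are illustrative.

# Job whose pods fail once, then succeed after an in-place restart (names are assumptions).
cat <<'EOF' | kubectl apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: flaky-local-restart
spec:
  completions: 2
  parallelism: 2
  template:
    spec:
      restartPolicy: OnFailure   # failures restart the container in the same pod
      containers:
      - name: worker
        image: busybox:1.28
        # The emptyDir survives container restarts, so the second attempt succeeds.
        command: ["sh", "-c", "if [ -f /data/ran ]; then exit 0; else touch /data/ran; exit 1; fi"]
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        emptyDir: {}
EOF
kubectl wait --for=condition=complete job/flaky-local-restart --timeout=120s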
• [SLOW TEST:18.202 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":309,"completed":203,"skipped":3333,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:03:30.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jan 13 00:03:31.610: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Jan 13 00:03:33.625: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746093011, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746093011, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746093011, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746093011, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-7d6697c5b7\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 13 00:03:35.630: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746093011, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746093011, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have 
minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746093011, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746093011, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-7d6697c5b7\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 13 00:03:38.665: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 00:03:38.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:03:39.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-1934" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:9.088 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":309,"completed":204,"skipped":3356,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:03:40.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name secret-test-7c3d0616-1a5f-4753-bf7e-a17ab39420ed STEP: Creating a pod to test consume secrets Jan 13 00:03:40.582: INFO: Waiting up to 5m0s for pod "pod-secrets-2b847dd9-0996-4aa9-baaa-e969c8a7b6fe" in namespace "secrets-3643" to be "Succeeded or Failed" Jan 13 00:03:40.762: INFO: Pod 
"pod-secrets-2b847dd9-0996-4aa9-baaa-e969c8a7b6fe": Phase="Pending", Reason="", readiness=false. Elapsed: 179.675147ms Jan 13 00:03:42.767: INFO: Pod "pod-secrets-2b847dd9-0996-4aa9-baaa-e969c8a7b6fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.184935346s Jan 13 00:03:44.771: INFO: Pod "pod-secrets-2b847dd9-0996-4aa9-baaa-e969c8a7b6fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.189024312s STEP: Saw pod success Jan 13 00:03:44.771: INFO: Pod "pod-secrets-2b847dd9-0996-4aa9-baaa-e969c8a7b6fe" satisfied condition "Succeeded or Failed" Jan 13 00:03:44.774: INFO: Trying to get logs from node leguer-worker pod pod-secrets-2b847dd9-0996-4aa9-baaa-e969c8a7b6fe container secret-volume-test: STEP: delete the pod Jan 13 00:03:44.882: INFO: Waiting for pod pod-secrets-2b847dd9-0996-4aa9-baaa-e969c8a7b6fe to disappear Jan 13 00:03:44.885: INFO: Pod pod-secrets-2b847dd9-0996-4aa9-baaa-e969c8a7b6fe no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:03:44.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3643" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":205,"skipped":3359,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:03:44.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod Jan 13 00:03:45.033: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:03:50.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-664" for this suite. 
• [SLOW TEST:6.123 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":309,"completed":206,"skipped":3371,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:03:51.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 13 00:03:55.183: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:03:55.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7298" for this suite. 
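The termination-message case has the container write "OK" to the termination message path and exit successfully; FallbackToLogsOnError only takes effect when the container fails and the file is empty, so on success the file contents are reported verbatim, matching the "Expected: &{OK}" line above. A sketch with illustrative names:

# Container writing its termination message to the (default) path (names are assumptions).
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: termmsg-demo
spec:
  restartPolicy: Never
  containers:
  - name: writer
    image: busybox:1.28
    command: ["sh", "-c", "echo -n OK > /dev/termination-log; exit 0"]
    terminationMessagePath: /dev/termination-log      # the default path, shown explicitly
    terminationMessagePolicy: FallbackToLogsOnError   # logs are used only on error with an empty file
EOF

# The kubelet copies the file into the terminated container state:
kubectl get pod termmsg-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'   # OK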
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":309,"completed":207,"skipped":3373,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:03:55.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check is all data is printed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 00:03:55.391: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7757 version' Jan 13 00:03:55.578: INFO: stderr: "" Jan 13 00:03:55.578: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"20\", GitVersion:\"v1.20.1\", GitCommit:\"c4d752765b3bbac2237bf87cf0b1c2e307844666\", GitTreeState:\"clean\", BuildDate:\"2020-12-18T12:09:25Z\", GoVersion:\"go1.15.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"20\", GitVersion:\"v1.20.0\", GitCommit:\"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38\", GitTreeState:\"clean\", BuildDate:\"2020-12-08T22:31:47Z\", GoVersion:\"go1.15.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:03:55.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7757" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":309,"completed":208,"skipped":3378,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:03:55.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jan 13 00:04:01.290: INFO: Successfully updated pod "pod-update-activedeadlineseconds-3e6ee53e-0048-481b-9aea-c14097ac493c" Jan 13 00:04:01.290: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-3e6ee53e-0048-481b-9aea-c14097ac493c" in namespace "pods-225" to be "terminated due to deadline exceeded" Jan 13 00:04:01.292: INFO: Pod "pod-update-activedeadlineseconds-3e6ee53e-0048-481b-9aea-c14097ac493c": Phase="Running", Reason="", readiness=true. Elapsed: 2.574553ms Jan 13 00:04:03.297: INFO: Pod "pod-update-activedeadlineseconds-3e6ee53e-0048-481b-9aea-c14097ac493c": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.007773132s Jan 13 00:04:03.298: INFO: Pod "pod-update-activedeadlineseconds-3e6ee53e-0048-481b-9aea-c14097ac493c" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:04:03.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-225" for this suite. 
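activeDeadlineSeconds is one of the few pod-spec fields that may be changed on a running pod, and only by setting or shortening it; the test above patches it down and waits for the pod to fail, which is what the Phase="Failed", Reason="DeadlineExceeded" entries show. A by-hand sketch with illustrative names and durations:

# Shorten activeDeadlineSeconds on a live pod (name and values are assumptions).
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: deadline-demo
spec:
  activeDeadlineSeconds: 600
  containers:
  - name: app
    image: busybox:1.28
    command: ["sh", "-c", "sleep 3600"]
EOF

# Decreasing the deadline is allowed; the kubelet then kills the pod and it
# ends up Failed with reason DeadlineExceeded.
kubectl patch pod deadline-demo -p '{"spec":{"activeDeadlineSeconds":5}}'
kubectl get pod deadline-demo -o jsonpath='{.status.phase} {.status.reason}'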
• [SLOW TEST:7.719 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":309,"completed":209,"skipped":3391,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:04:03.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name projected-configmap-test-volume-a613eea2-fa58-4de6-ad61-a275b4971975 STEP: Creating a pod to test consume configMaps Jan 13 00:04:03.487: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c711437e-1f91-46f4-950f-f582584922d4" in namespace "projected-7631" to be "Succeeded or Failed" Jan 13 00:04:03.494: INFO: Pod "pod-projected-configmaps-c711437e-1f91-46f4-950f-f582584922d4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.777522ms Jan 13 00:04:05.498: INFO: Pod "pod-projected-configmaps-c711437e-1f91-46f4-950f-f582584922d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010836166s Jan 13 00:04:07.502: INFO: Pod "pod-projected-configmaps-c711437e-1f91-46f4-950f-f582584922d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014495492s STEP: Saw pod success Jan 13 00:04:07.502: INFO: Pod "pod-projected-configmaps-c711437e-1f91-46f4-950f-f582584922d4" satisfied condition "Succeeded or Failed" Jan 13 00:04:07.505: INFO: Trying to get logs from node leguer-worker2 pod pod-projected-configmaps-c711437e-1f91-46f4-950f-f582584922d4 container agnhost-container: STEP: delete the pod Jan 13 00:04:07.549: INFO: Waiting for pod pod-projected-configmaps-c711437e-1f91-46f4-950f-f582584922d4 to disappear Jan 13 00:04:07.567: INFO: Pod pod-projected-configmaps-c711437e-1f91-46f4-950f-f582584922d4 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:04:07.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7631" for this suite. 
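The projected-configMap case mounts a ConfigMap through a projected volume and runs the container as a non-root UID. A minimal sketch of that pod shape; the ConfigMap name, key, UID, and mount path are illustrative, not the generated values above.

# Projected ConfigMap consumed by a non-root container (names are assumptions).
kubectl create configmap projected-demo-cm --from-literal=data-1=value-1

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-nonroot
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000      # non-root, as in the [NodeConformance] case
    runAsNonRoot: true
  containers:
  - name: agnhost-container
    image: busybox:1.28
    command: ["sh", "-c", "cat /etc/projected/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/projected
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: projected-demo-cm
EOF
kubectl logs projected-cm-nonroot   # value-1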
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":309,"completed":210,"skipped":3391,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:04:07.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating projection with secret that has name projected-secret-test-5b0ff0f5-3920-48e0-ae27-1869ed7f42b8 STEP: Creating a pod to test consume secrets Jan 13 00:04:07.722: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6ef72e72-7e49-4cf4-93d1-e83fd843c688" in namespace "projected-2659" to be "Succeeded or Failed" Jan 13 00:04:07.909: INFO: Pod "pod-projected-secrets-6ef72e72-7e49-4cf4-93d1-e83fd843c688": Phase="Pending", Reason="", readiness=false. Elapsed: 186.812879ms Jan 13 00:04:09.913: INFO: Pod "pod-projected-secrets-6ef72e72-7e49-4cf4-93d1-e83fd843c688": Phase="Pending", Reason="", readiness=false. Elapsed: 2.191519637s Jan 13 00:04:11.918: INFO: Pod "pod-projected-secrets-6ef72e72-7e49-4cf4-93d1-e83fd843c688": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.195852152s STEP: Saw pod success Jan 13 00:04:11.918: INFO: Pod "pod-projected-secrets-6ef72e72-7e49-4cf4-93d1-e83fd843c688" satisfied condition "Succeeded or Failed" Jan 13 00:04:11.921: INFO: Trying to get logs from node leguer-worker2 pod pod-projected-secrets-6ef72e72-7e49-4cf4-93d1-e83fd843c688 container projected-secret-volume-test: STEP: delete the pod Jan 13 00:04:11.985: INFO: Waiting for pod pod-projected-secrets-6ef72e72-7e49-4cf4-93d1-e83fd843c688 to disappear Jan 13 00:04:11.993: INFO: Pod pod-projected-secrets-6ef72e72-7e49-4cf4-93d1-e83fd843c688 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:04:11.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2659" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":309,"completed":211,"skipped":3417,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:04:12.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Jan 13 00:04:12.050: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 13 00:04:12.057: INFO: Waiting for terminating namespaces to be deleted... Jan 13 00:04:12.059: INFO: Logging pods the apiserver thinks is on node leguer-worker before test Jan 13 00:04:12.063: INFO: rally-a8f48c6d-3kmika18-pdtzv from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 00:04:12.063: INFO: Container rally-a8f48c6d-3kmika18 ready: true, restart count 0 Jan 13 00:04:12.063: INFO: rally-a8f48c6d-3kmika18-pllzg from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 00:04:12.063: INFO: Container rally-a8f48c6d-3kmika18 ready: true, restart count 0 Jan 13 00:04:12.063: INFO: rally-a8f48c6d-4cyi45kq-j5tzz from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 00:04:12.063: INFO: Container rally-a8f48c6d-4cyi45kq ready: true, restart count 0 Jan 13 00:04:12.063: INFO: rally-a8f48c6d-f3hls6a3-57dwc from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 13 00:04:12.063: INFO: Container rally-a8f48c6d-f3hls6a3 ready: true, restart count 0 Jan 13 00:04:12.063: INFO: rally-a8f48c6d-1y3amfc0-lp8st from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 13 00:04:12.064: INFO: Container rally-a8f48c6d-1y3amfc0 ready: true, restart count 0 Jan 13 00:04:12.064: INFO: rally-a8f48c6d-9pqmjehi-9zwjj from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 00:04:12.064: INFO: Container rally-a8f48c6d-9pqmjehi ready: true, restart count 0 Jan 13 00:04:12.064: INFO: chaos-controller-manager-69c479c674-s796v from default started at 2021-01-10 20:58:24 +0000 UTC (1 container statuses recorded) Jan 13 00:04:12.064: INFO: Container chaos-mesh ready: true, restart count 0 Jan 13 00:04:12.064: INFO: chaos-daemon-lv692 from default started at 2021-01-10 20:58:25 +0000 UTC (1 container statuses recorded) Jan 13 00:04:12.064: INFO: Container chaos-daemon ready: true, restart count 0 Jan 13 00:04:12.064: INFO: kindnet-psm25 from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Jan 13 00:04:12.064: INFO: Container kindnet-cni ready: true, 
restart count 0 Jan 13 00:04:12.064: INFO: kube-proxy-bmbcs from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Jan 13 00:04:12.064: INFO: Container kube-proxy ready: true, restart count 0 Jan 13 00:04:12.064: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test Jan 13 00:04:12.070: INFO: rally-a8f48c6d-4cyi45kq-knr4r from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 00:04:12.070: INFO: Container rally-a8f48c6d-4cyi45kq ready: true, restart count 0 Jan 13 00:04:12.070: INFO: rally-a8f48c6d-f3hls6a3-dwt8n from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 13 00:04:12.070: INFO: Container rally-a8f48c6d-f3hls6a3 ready: true, restart count 0 Jan 13 00:04:12.070: INFO: rally-a8f48c6d-1y3amfc0-hh9qk from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 13 00:04:12.070: INFO: Container rally-a8f48c6d-1y3amfc0 ready: true, restart count 0 Jan 13 00:04:12.070: INFO: rally-a8f48c6d-9pqmjehi-85slb from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 00:04:12.070: INFO: Container rally-a8f48c6d-9pqmjehi ready: true, restart count 0 Jan 13 00:04:12.070: INFO: rally-a8f48c6d-vnukxqu0-llj24 from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 00:04:12.070: INFO: Container rally-a8f48c6d-vnukxqu0 ready: true, restart count 0 Jan 13 00:04:12.070: INFO: rally-a8f48c6d-vnukxqu0-v85kr from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 00:04:12.070: INFO: Container rally-a8f48c6d-vnukxqu0 ready: true, restart count 0 Jan 13 00:04:12.070: INFO: chaos-daemon-ffkg7 from default started at 2021-01-10 20:58:25 +0000 UTC (1 container statuses recorded) Jan 13 00:04:12.070: INFO: Container chaos-daemon ready: true, restart count 0 Jan 13 00:04:12.070: INFO: kindnet-8wggd from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Jan 13 00:04:12.070: INFO: Container kindnet-cni ready: true, restart count 0 Jan 13 00:04:12.070: INFO: kube-proxy-29gxg from kube-system started at 2021-01-10 17:38:09 +0000 UTC (1 container statuses recorded) Jan 13 00:04:12.070: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: verifying the node has the label node leguer-worker STEP: verifying the node has the label node leguer-worker2 Jan 13 00:04:12.157: INFO: Pod rally-a8f48c6d-3kmika18-pdtzv requesting resource cpu=0m on Node leguer-worker Jan 13 00:04:12.157: INFO: Pod rally-a8f48c6d-3kmika18-pllzg requesting resource cpu=0m on Node leguer-worker Jan 13 00:04:12.157: INFO: Pod rally-a8f48c6d-4cyi45kq-j5tzz requesting resource cpu=0m on Node leguer-worker Jan 13 00:04:12.157: INFO: Pod rally-a8f48c6d-4cyi45kq-knr4r requesting resource cpu=0m on Node leguer-worker2 Jan 13 00:04:12.157: INFO: Pod rally-a8f48c6d-f3hls6a3-57dwc requesting resource cpu=0m on Node leguer-worker Jan 13 00:04:12.157: INFO: Pod rally-a8f48c6d-f3hls6a3-dwt8n requesting resource cpu=0m on Node leguer-worker2 Jan 13 00:04:12.157: INFO: Pod rally-a8f48c6d-1y3amfc0-hh9qk requesting resource cpu=0m on Node leguer-worker2 Jan 13 
00:04:12.157: INFO: Pod rally-a8f48c6d-1y3amfc0-lp8st requesting resource cpu=0m on Node leguer-worker Jan 13 00:04:12.157: INFO: Pod rally-a8f48c6d-9pqmjehi-85slb requesting resource cpu=0m on Node leguer-worker2 Jan 13 00:04:12.157: INFO: Pod rally-a8f48c6d-9pqmjehi-9zwjj requesting resource cpu=0m on Node leguer-worker Jan 13 00:04:12.157: INFO: Pod rally-a8f48c6d-vnukxqu0-llj24 requesting resource cpu=0m on Node leguer-worker2 Jan 13 00:04:12.157: INFO: Pod rally-a8f48c6d-vnukxqu0-v85kr requesting resource cpu=0m on Node leguer-worker2 Jan 13 00:04:12.157: INFO: Pod chaos-controller-manager-69c479c674-s796v requesting resource cpu=25m on Node leguer-worker Jan 13 00:04:12.157: INFO: Pod chaos-daemon-ffkg7 requesting resource cpu=0m on Node leguer-worker2 Jan 13 00:04:12.157: INFO: Pod chaos-daemon-lv692 requesting resource cpu=0m on Node leguer-worker Jan 13 00:04:12.157: INFO: Pod kindnet-8wggd requesting resource cpu=100m on Node leguer-worker2 Jan 13 00:04:12.157: INFO: Pod kindnet-psm25 requesting resource cpu=100m on Node leguer-worker Jan 13 00:04:12.157: INFO: Pod kube-proxy-29gxg requesting resource cpu=0m on Node leguer-worker2 Jan 13 00:04:12.157: INFO: Pod kube-proxy-bmbcs requesting resource cpu=0m on Node leguer-worker STEP: Starting Pods to consume most of the cluster CPU. Jan 13 00:04:12.157: INFO: Creating a pod which consumes cpu=11112m on Node leguer-worker Jan 13 00:04:12.165: INFO: Creating a pod which consumes cpu=11130m on Node leguer-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-4e7daeec-e3c9-459e-b602-ecdbe4550462.1659a1cefeed3455], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1624/filler-pod-4e7daeec-e3c9-459e-b602-ecdbe4550462 to leguer-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-4e7daeec-e3c9-459e-b602-ecdbe4550462.1659a1cf4d649016], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-4e7daeec-e3c9-459e-b602-ecdbe4550462.1659a1cfabf91d2c], Reason = [Created], Message = [Created container filler-pod-4e7daeec-e3c9-459e-b602-ecdbe4550462] STEP: Considering event: Type = [Normal], Name = [filler-pod-4e7daeec-e3c9-459e-b602-ecdbe4550462.1659a1cfc5bb3bc4], Reason = [Started], Message = [Started container filler-pod-4e7daeec-e3c9-459e-b602-ecdbe4550462] STEP: Considering event: Type = [Normal], Name = [filler-pod-5a956a67-067a-4f94-850a-0f7a59b8141b.1659a1cf02942a29], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1624/filler-pod-5a956a67-067a-4f94-850a-0f7a59b8141b to leguer-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-5a956a67-067a-4f94-850a-0f7a59b8141b.1659a1cf8b2363e3], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-5a956a67-067a-4f94-850a-0f7a59b8141b.1659a1cfcfc7bbbe], Reason = [Created], Message = [Created container filler-pod-5a956a67-067a-4f94-850a-0f7a59b8141b] STEP: Considering event: Type = [Normal], Name = [filler-pod-5a956a67-067a-4f94-850a-0f7a59b8141b.1659a1cfde728bb2], Reason = [Started], Message = [Started container filler-pod-5a956a67-067a-4f94-850a-0f7a59b8141b] STEP: Considering event: Type = [Warning], Name = [additional-pod.1659a1d06ae16558], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint 
{node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node leguer-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node leguer-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:04:19.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1624" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 • [SLOW TEST:7.372 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":309,"completed":212,"skipped":3426,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:04:19.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 00:04:19.604: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-daeca7ce-1dea-4156-9d3b-2ecb23b6ea5c" in namespace "security-context-test-9870" to be "Succeeded or Failed" Jan 13 00:04:19.652: INFO: Pod "alpine-nnp-false-daeca7ce-1dea-4156-9d3b-2ecb23b6ea5c": Phase="Pending", Reason="", readiness=false. Elapsed: 48.58007ms Jan 13 00:04:21.657: INFO: Pod "alpine-nnp-false-daeca7ce-1dea-4156-9d3b-2ecb23b6ea5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053412203s Jan 13 00:04:23.720: INFO: Pod "alpine-nnp-false-daeca7ce-1dea-4156-9d3b-2ecb23b6ea5c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116408037s Jan 13 00:04:26.212: INFO: Pod "alpine-nnp-false-daeca7ce-1dea-4156-9d3b-2ecb23b6ea5c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.608046237s Jan 13 00:04:28.960: INFO: Pod "alpine-nnp-false-daeca7ce-1dea-4156-9d3b-2ecb23b6ea5c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.356424243s Jan 13 00:04:30.964: INFO: Pod "alpine-nnp-false-daeca7ce-1dea-4156-9d3b-2ecb23b6ea5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.360495092s Jan 13 00:04:30.964: INFO: Pod "alpine-nnp-false-daeca7ce-1dea-4156-9d3b-2ecb23b6ea5c" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:04:30.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9870" for this suite. • [SLOW TEST:11.611 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":213,"skipped":3428,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:04:30.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0644 on node default medium Jan 13 00:04:31.183: INFO: Waiting up to 5m0s for pod "pod-9c781eb6-43d2-4885-b6b9-b999afca1e34" in namespace "emptydir-1442" to be "Succeeded or Failed" Jan 13 00:04:31.190: INFO: Pod "pod-9c781eb6-43d2-4885-b6b9-b999afca1e34": Phase="Pending", Reason="", readiness=false. Elapsed: 7.210445ms Jan 13 00:04:33.196: INFO: Pod "pod-9c781eb6-43d2-4885-b6b9-b999afca1e34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012447138s Jan 13 00:04:35.199: INFO: Pod "pod-9c781eb6-43d2-4885-b6b9-b999afca1e34": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.016216448s STEP: Saw pod success Jan 13 00:04:35.199: INFO: Pod "pod-9c781eb6-43d2-4885-b6b9-b999afca1e34" satisfied condition "Succeeded or Failed" Jan 13 00:04:35.207: INFO: Trying to get logs from node leguer-worker2 pod pod-9c781eb6-43d2-4885-b6b9-b999afca1e34 container test-container: STEP: delete the pod Jan 13 00:04:35.278: INFO: Waiting for pod pod-9c781eb6-43d2-4885-b6b9-b999afca1e34 to disappear Jan 13 00:04:35.285: INFO: Pod pod-9c781eb6-43d2-4885-b6b9-b999afca1e34 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:04:35.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1442" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":214,"skipped":3439,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:04:35.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 13 00:04:40.490: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:04:40.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5323" for this suite. 
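The termination-message test above can be approximated with the sketch below: the container's terminationMessagePolicy is FallbackToLogsOnError and it writes nothing to /dev/termination-log, so the tail of its log ("DONE") becomes the termination message once it exits non-zero. Names, image and namespace are illustrative, and a real check would wait for the container to terminate (for example with a polling helper like the one sketched earlier) before reading the status.

// Sketch only: illustrative names, busybox image assumed.
package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "termination-msg-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "fail-with-logs",
                Image: "busybox",
                // Exit non-zero without writing /dev/termination-log, so the
                // kubelet falls back to the last lines of the container log.
                Command:                  []string{"sh", "-c", "echo DONE; exit 1"},
                TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }

    // After the container has terminated (wait for that in real code), the
    // message is surfaced in the container status:
    got, err := cs.CoreV1().Pods("default").Get(context.TODO(), "termination-msg-demo", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    for _, st := range got.Status.ContainerStatuses {
        if st.State.Terminated != nil {
            fmt.Println("termination message:", st.State.Terminated.Message)
        }
    }
}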
• [SLOW TEST:5.269 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":309,"completed":215,"skipped":3461,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:04:40.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 13 00:04:41.462: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 13 00:04:43.754: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746093081, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746093081, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746093081, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746093081, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 13 00:04:46.821: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 00:04:46.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4477-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:04:47.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5592" for this suite. STEP: Destroying namespace "webhook-5592-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:7.519 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":309,"completed":216,"skipped":3471,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:04:48.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:05:58.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-7623" for this suite. STEP: Destroying namespace "nsdeletetest-1184" for this suite. Jan 13 00:05:58.418: INFO: Namespace nsdeletetest-1184 was already deleted STEP: Destroying namespace "nsdeletetest-1973" for this suite. 
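The namespace test above relies on the namespace controller terminating every pod in a namespace before the namespace object itself is removed. A minimal delete-and-wait sketch of that step, assuming an existing clientset and an illustrative namespace name:

// Sketch: delete a namespace and poll until it is gone, which implies all
// pods in it have been removed as well.
package nscleanup

import (
    "context"
    "time"

    apierrors "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
)

func DeleteNamespaceAndWait(cs kubernetes.Interface, name string) error {
    if err := cs.CoreV1().Namespaces().Delete(context.TODO(), name, metav1.DeleteOptions{}); err != nil {
        return err
    }
    // The namespace stays in Terminating until its pods are gone, so waiting
    // for NotFound is sufficient for this check.
    return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
        _, err := cs.CoreV1().Namespaces().Get(context.TODO(), name, metav1.GetOptions{})
        if apierrors.IsNotFound(err) {
            return true, nil
        }
        return false, err
    })
}

Once Get returns NotFound, a namespace recreated under the same name starts out empty, which is what the "Recreating the namespace" and "Verifying there are no pods in the namespace" steps above confirm.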
• [SLOW TEST:70.339 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":309,"completed":217,"skipped":3477,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:05:58.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5576.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-5576.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5576.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-5576.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5576.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5576.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-5576.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5576.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-5576.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5576.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 13 00:06:06.564: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:06.567: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:06.571: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:06.574: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:06.584: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:06.587: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:06.590: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5576.svc.cluster.local from pod 
dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:06.593: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:06.599: INFO: Lookups using dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5576.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5576.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local jessie_udp@dns-test-service-2.dns-5576.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5576.svc.cluster.local] Jan 13 00:06:11.604: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:11.608: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:11.612: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:11.616: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:11.625: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:11.628: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:11.631: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:11.634: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:11.639: INFO: Lookups using dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local 
wheezy_udp@dns-test-service-2.dns-5576.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5576.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local jessie_udp@dns-test-service-2.dns-5576.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5576.svc.cluster.local] Jan 13 00:06:16.604: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:16.608: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:16.611: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:16.615: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:16.630: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:16.634: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:16.637: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:16.639: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:16.644: INFO: Lookups using dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5576.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5576.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local jessie_udp@dns-test-service-2.dns-5576.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5576.svc.cluster.local] Jan 13 00:06:21.609: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:21.612: INFO: Unable to read 
wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:21.614: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:21.616: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:21.627: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:21.629: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:21.631: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:21.634: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:21.638: INFO: Lookups using dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5576.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5576.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local jessie_udp@dns-test-service-2.dns-5576.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5576.svc.cluster.local] Jan 13 00:06:26.605: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:26.609: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:26.612: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:26.615: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested 
resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:26.626: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:26.630: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:26.632: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:26.635: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:26.640: INFO: Lookups using dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5576.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5576.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local jessie_udp@dns-test-service-2.dns-5576.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5576.svc.cluster.local] Jan 13 00:06:31.604: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:31.608: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:31.611: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:31.615: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:31.625: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:31.628: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:31.630: INFO: Unable to read 
jessie_udp@dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:31.632: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5576.svc.cluster.local from pod dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca: the server could not find the requested resource (get pods dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca) Jan 13 00:06:31.652: INFO: Lookups using dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5576.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5576.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local jessie_udp@dns-test-service-2.dns-5576.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5576.svc.cluster.local] Jan 13 00:06:36.635: INFO: DNS probes using dns-5576/dns-test-b6f2e8d9-977f-4003-8395-6a299ad433ca succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:06:37.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5576" for this suite. • [SLOW TEST:38.834 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":309,"completed":218,"skipped":3492,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:06:37.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 
'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:07:12.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3711" for this suite. • [SLOW TEST:34.861 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":309,"completed":219,"skipped":3531,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:07:12.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a service nodeport-service with the type=NodePort in namespace services-4906 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-4906 STEP: creating replication controller externalsvc in namespace services-4906 I0113 00:07:13.636481 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-4906, replica count: 2 I0113 00:07:16.686869 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 
waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0113 00:07:19.687088 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Jan 13 00:07:19.793: INFO: Creating new exec pod Jan 13 00:07:23.861: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-4906 exec execpodsn7qt -- /bin/sh -x -c nslookup nodeport-service.services-4906.svc.cluster.local' Jan 13 00:07:24.109: INFO: stderr: "I0113 00:07:24.006828 3067 log.go:181] (0xc000da2fd0) (0xc000e84820) Create stream\nI0113 00:07:24.006884 3067 log.go:181] (0xc000da2fd0) (0xc000e84820) Stream added, broadcasting: 1\nI0113 00:07:24.009483 3067 log.go:181] (0xc000da2fd0) Reply frame received for 1\nI0113 00:07:24.009529 3067 log.go:181] (0xc000da2fd0) (0xc000176460) Create stream\nI0113 00:07:24.009544 3067 log.go:181] (0xc000da2fd0) (0xc000176460) Stream added, broadcasting: 3\nI0113 00:07:24.010317 3067 log.go:181] (0xc000da2fd0) Reply frame received for 3\nI0113 00:07:24.010354 3067 log.go:181] (0xc000da2fd0) (0xc000d9a1e0) Create stream\nI0113 00:07:24.010403 3067 log.go:181] (0xc000da2fd0) (0xc000d9a1e0) Stream added, broadcasting: 5\nI0113 00:07:24.011128 3067 log.go:181] (0xc000da2fd0) Reply frame received for 5\nI0113 00:07:24.088558 3067 log.go:181] (0xc000da2fd0) Data frame received for 5\nI0113 00:07:24.088604 3067 log.go:181] (0xc000d9a1e0) (5) Data frame handling\nI0113 00:07:24.088637 3067 log.go:181] (0xc000d9a1e0) (5) Data frame sent\n+ nslookup nodeport-service.services-4906.svc.cluster.local\nI0113 00:07:24.100453 3067 log.go:181] (0xc000da2fd0) Data frame received for 3\nI0113 00:07:24.100478 3067 log.go:181] (0xc000176460) (3) Data frame handling\nI0113 00:07:24.100493 3067 log.go:181] (0xc000176460) (3) Data frame sent\nI0113 00:07:24.101303 3067 log.go:181] (0xc000da2fd0) Data frame received for 3\nI0113 00:07:24.101321 3067 log.go:181] (0xc000176460) (3) Data frame handling\nI0113 00:07:24.101336 3067 log.go:181] (0xc000176460) (3) Data frame sent\nI0113 00:07:24.101801 3067 log.go:181] (0xc000da2fd0) Data frame received for 5\nI0113 00:07:24.101817 3067 log.go:181] (0xc000d9a1e0) (5) Data frame handling\nI0113 00:07:24.101958 3067 log.go:181] (0xc000da2fd0) Data frame received for 3\nI0113 00:07:24.101975 3067 log.go:181] (0xc000176460) (3) Data frame handling\nI0113 00:07:24.103743 3067 log.go:181] (0xc000da2fd0) Data frame received for 1\nI0113 00:07:24.103771 3067 log.go:181] (0xc000e84820) (1) Data frame handling\nI0113 00:07:24.103785 3067 log.go:181] (0xc000e84820) (1) Data frame sent\nI0113 00:07:24.103801 3067 log.go:181] (0xc000da2fd0) (0xc000e84820) Stream removed, broadcasting: 1\nI0113 00:07:24.103821 3067 log.go:181] (0xc000da2fd0) Go away received\nI0113 00:07:24.104321 3067 log.go:181] (0xc000da2fd0) (0xc000e84820) Stream removed, broadcasting: 1\nI0113 00:07:24.104348 3067 log.go:181] (0xc000da2fd0) (0xc000176460) Stream removed, broadcasting: 3\nI0113 00:07:24.104357 3067 log.go:181] (0xc000da2fd0) (0xc000d9a1e0) Stream removed, broadcasting: 5\n" Jan 13 00:07:24.110: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-4906.svc.cluster.local\tcanonical name = externalsvc.services-4906.svc.cluster.local.\nName:\texternalsvc.services-4906.svc.cluster.local\nAddress: 10.96.187.103\n\n" STEP: deleting ReplicationController 
externalsvc in namespace services-4906, will wait for the garbage collector to delete the pods Jan 13 00:07:24.170: INFO: Deleting ReplicationController externalsvc took: 6.680236ms Jan 13 00:07:24.770: INFO: Terminating ReplicationController externalsvc pods took: 600.209951ms Jan 13 00:08:00.244: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:08:00.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4906" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:48.207 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":309,"completed":220,"skipped":3541,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:08:00.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2601 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2601;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2601 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2601;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2601.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2601.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2601.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2601.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2601.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2601.svc;check="$$(dig +tcp +noall +answer +search 
_http._tcp.dns-test-service.dns-2601.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2601.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2601.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2601.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2601.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2601.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2601.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 211.49.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.49.211_udp@PTR;check="$$(dig +tcp +noall +answer +search 211.49.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.49.211_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2601 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2601;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2601 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2601;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2601.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2601.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2601.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2601.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2601.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2601.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2601.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2601.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2601.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2601.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2601.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2601.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2601.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 211.49.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.49.211_udp@PTR;check="$$(dig +tcp +noall +answer +search 211.49.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.49.211_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 13 00:08:10.606: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:10.609: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:10.613: INFO: Unable to read wheezy_udp@dns-test-service.dns-2601 from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:10.616: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2601 from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:10.620: INFO: Unable to read wheezy_udp@dns-test-service.dns-2601.svc from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:10.623: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2601.svc from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:10.627: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2601.svc from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:10.630: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2601.svc from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:10.653: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:10.657: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:10.660: INFO: Unable to read jessie_udp@dns-test-service.dns-2601 from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:10.664: INFO: Unable to read jessie_tcp@dns-test-service.dns-2601 from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:10.667: INFO: Unable to read jessie_udp@dns-test-service.dns-2601.svc from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:10.669: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-2601.svc from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:10.672: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2601.svc from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:10.675: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2601.svc from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:10.694: INFO: Lookups using dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2601 wheezy_tcp@dns-test-service.dns-2601 wheezy_udp@dns-test-service.dns-2601.svc wheezy_tcp@dns-test-service.dns-2601.svc wheezy_udp@_http._tcp.dns-test-service.dns-2601.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2601.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2601 jessie_tcp@dns-test-service.dns-2601 jessie_udp@dns-test-service.dns-2601.svc jessie_tcp@dns-test-service.dns-2601.svc jessie_udp@_http._tcp.dns-test-service.dns-2601.svc jessie_tcp@_http._tcp.dns-test-service.dns-2601.svc] Jan 13 00:08:15.699: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:15.703: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:15.709: INFO: Unable to read wheezy_udp@dns-test-service.dns-2601 from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:15.713: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2601 from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:15.716: INFO: Unable to read wheezy_udp@dns-test-service.dns-2601.svc from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:15.718: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2601.svc from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:15.720: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2601.svc from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:15.722: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2601.svc from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:15.741: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:15.743: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:15.746: INFO: Unable to read jessie_udp@dns-test-service.dns-2601 from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:15.748: INFO: Unable to read jessie_tcp@dns-test-service.dns-2601 from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:15.751: INFO: Unable to read jessie_udp@dns-test-service.dns-2601.svc from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:15.753: INFO: Unable to read jessie_tcp@dns-test-service.dns-2601.svc from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:15.789: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2601.svc from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:15.792: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2601.svc from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:15.809: INFO: Lookups using dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2601 wheezy_tcp@dns-test-service.dns-2601 wheezy_udp@dns-test-service.dns-2601.svc wheezy_tcp@dns-test-service.dns-2601.svc wheezy_udp@_http._tcp.dns-test-service.dns-2601.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2601.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2601 jessie_tcp@dns-test-service.dns-2601 jessie_udp@dns-test-service.dns-2601.svc jessie_tcp@dns-test-service.dns-2601.svc jessie_udp@_http._tcp.dns-test-service.dns-2601.svc jessie_tcp@_http._tcp.dns-test-service.dns-2601.svc] Jan 13 00:08:20.699: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:20.702: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:20.706: INFO: Unable to read wheezy_udp@dns-test-service.dns-2601 from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:20.708: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2601 from pod 
dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:20.711: INFO: Unable to read wheezy_udp@dns-test-service.dns-2601.svc from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:20.714: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2601.svc from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:20.716: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2601.svc from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:20.719: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2601.svc from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:20.736: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:20.738: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:20.740: INFO: Unable to read jessie_udp@dns-test-service.dns-2601 from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:20.742: INFO: Unable to read jessie_tcp@dns-test-service.dns-2601 from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:20.745: INFO: Unable to read jessie_udp@dns-test-service.dns-2601.svc from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:20.747: INFO: Unable to read jessie_tcp@dns-test-service.dns-2601.svc from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:20.749: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2601.svc from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:20.751: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2601.svc from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:20.767: INFO: Lookups using dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2601 wheezy_tcp@dns-test-service.dns-2601 wheezy_udp@dns-test-service.dns-2601.svc wheezy_tcp@dns-test-service.dns-2601.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-2601.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2601.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2601 jessie_tcp@dns-test-service.dns-2601 jessie_udp@dns-test-service.dns-2601.svc jessie_tcp@dns-test-service.dns-2601.svc jessie_udp@_http._tcp.dns-test-service.dns-2601.svc jessie_tcp@_http._tcp.dns-test-service.dns-2601.svc] Jan 13 00:08:25.699: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:25.703: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:25.707: INFO: Unable to read wheezy_udp@dns-test-service.dns-2601 from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:25.710: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2601 from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:25.714: INFO: Unable to read wheezy_udp@dns-test-service.dns-2601.svc from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:25.717: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2601.svc from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:25.721: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2601.svc from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:25.725: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2601.svc from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:25.745: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:25.747: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:25.750: INFO: Unable to read jessie_udp@dns-test-service.dns-2601 from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:25.752: INFO: Unable to read jessie_tcp@dns-test-service.dns-2601 from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:25.754: INFO: Unable to read jessie_udp@dns-test-service.dns-2601.svc from pod 
dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:25.756: INFO: Unable to read jessie_tcp@dns-test-service.dns-2601.svc from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:25.759: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2601.svc from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:25.761: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2601.svc from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:25.777: INFO: Lookups using dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2601 wheezy_tcp@dns-test-service.dns-2601 wheezy_udp@dns-test-service.dns-2601.svc wheezy_tcp@dns-test-service.dns-2601.svc wheezy_udp@_http._tcp.dns-test-service.dns-2601.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2601.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2601 jessie_tcp@dns-test-service.dns-2601 jessie_udp@dns-test-service.dns-2601.svc jessie_tcp@dns-test-service.dns-2601.svc jessie_udp@_http._tcp.dns-test-service.dns-2601.svc jessie_tcp@_http._tcp.dns-test-service.dns-2601.svc] Jan 13 00:08:30.699: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:30.703: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:30.707: INFO: Unable to read wheezy_udp@dns-test-service.dns-2601 from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:30.710: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2601 from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:30.714: INFO: Unable to read wheezy_udp@dns-test-service.dns-2601.svc from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:30.717: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2601.svc from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:30.720: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2601.svc from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:30.724: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2601.svc from pod 
dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:30.751: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:30.753: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:30.756: INFO: Unable to read jessie_udp@dns-test-service.dns-2601 from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:30.758: INFO: Unable to read jessie_tcp@dns-test-service.dns-2601 from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:30.761: INFO: Unable to read jessie_udp@dns-test-service.dns-2601.svc from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:30.764: INFO: Unable to read jessie_tcp@dns-test-service.dns-2601.svc from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:30.766: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2601.svc from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:30.769: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2601.svc from pod dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc: the server could not find the requested resource (get pods dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc) Jan 13 00:08:30.786: INFO: Lookups using dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2601 wheezy_tcp@dns-test-service.dns-2601 wheezy_udp@dns-test-service.dns-2601.svc wheezy_tcp@dns-test-service.dns-2601.svc wheezy_udp@_http._tcp.dns-test-service.dns-2601.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2601.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2601 jessie_tcp@dns-test-service.dns-2601 jessie_udp@dns-test-service.dns-2601.svc jessie_tcp@dns-test-service.dns-2601.svc jessie_udp@_http._tcp.dns-test-service.dns-2601.svc jessie_tcp@_http._tcp.dns-test-service.dns-2601.svc] Jan 13 00:08:35.778: INFO: DNS probes using dns-2601/dns-test-b7f36dfb-c072-400c-8dac-9fb69633dfcc succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:08:38.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2601" for this suite. 
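For reference, the lookups above only start succeeding once the cluster DNS answers the partially qualified forms (dns-test-service, dns-test-service.dns-2601, dns-test-service.dns-2601.svc), which resolve through the search domains in the probe pod's /etc/resolv.conf. A rough manual equivalent of one probe iteration, run from a pod in the dns-2601 namespace that ships dig (the pod name below is a placeholder, not a value from this run), would be:
kubectl exec -n dns-2601 <probe-pod> -- /bin/sh -c '
  for name in dns-test-service dns-test-service.dns-2601 dns-test-service.dns-2601.svc; do
    # +search appends the search domains from /etc/resolv.conf, as the wheezy/jessie loops above do
    check="$(dig +noall +answer +search "$name" A)" && test -n "$check" && echo "$name OK" || echo "$name failed"
  done'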
• [SLOW TEST:37.763 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":309,"completed":221,"skipped":3556,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:08:38.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:08:45.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-955" for this suite. 
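What "adopt matching pods" means above: a ReplicationController created after an orphan pod whose labels match its selector takes ownership of that pod instead of spawning a new replica, and the adoption shows up as an ownerReference on the pod. A sketch of the same check with plain kubectl (the names and the RC manifest are illustrative, not taken from this run):
# Create an orphan pod carrying the label the controller will select on
kubectl run pod-adoption --image=k8s.gcr.io/pause:3.2 --labels=name=pod-adoption --restart=Never
# Create a ReplicationController whose .spec.selector is name=pod-adoption (manifest not shown here)
kubectl create -f rc-with-matching-selector.yaml
# After the controller syncs, the formerly orphan pod is owned by it
kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].kind}'   # expected: ReplicationController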
• [SLOW TEST:7.715 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":309,"completed":222,"skipped":3565,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:08:45.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 13 00:08:46.325: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 13 00:08:48.337: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746093326, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746093326, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746093326, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746093326, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 13 00:08:50.342: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746093326, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746093326, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746093326, loc:(*time.Location)(0x7962e20)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746093326, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 13 00:08:54.527: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 00:08:54.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:08:55.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1466" for this suite. STEP: Destroying namespace "webhook-1466-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:10.022 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":309,"completed":223,"skipped":3572,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:08:55.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Jan 13 00:08:59.915: INFO: 
&Pod{ObjectMeta:{send-events-e3fa5668-9270-4847-a81c-6d7248e89911 events-4570 34e3a600-b131-4be1-82bc-2daacf26a58a 436727 0 2021-01-13 00:08:55 +0000 UTC map[name:foo time:890515968] map[] [] [] [{e2e.test Update v1 2021-01-13 00:08:55 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-13 00:08:59 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.137\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gqq6l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gqq6l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gqq6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:no
de.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 00:08:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 00:08:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 00:08:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 00:08:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.1.137,StartTime:2021-01-13 00:08:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-13 00:08:58 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a,ContainerID:containerd://3b62cb3bf048ff0e3c9dba05865b1ac51f57b861ad6bf1e88929a3f592fcbd2b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.137,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Jan 13 00:09:01.921: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Jan 13 00:09:03.926: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:09:03.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-4570" for this suite. 
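The two checks above wait for a scheduler-sourced event and a kubelet-sourced event tied to the pod. While the pod existed, the same events could have been listed directly; the field selector below is standard kubectl, only the timing is illustrative:
kubectl get events -n events-4570 --field-selector involvedObject.name=send-events-e3fa5668-9270-4847-a81c-6d7248e89911
# Typical reasons: "Scheduled" (source: default-scheduler) plus "Pulled"/"Created"/"Started" (source: kubelet)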
• [SLOW TEST:8.150 seconds] [k8s.io] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":309,"completed":224,"skipped":3572,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:09:03.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 13 00:09:08.118: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:09:08.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6504" for this suite. 
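The termination-message check above relies on the container writing to its terminationMessagePath before exiting; the kubelet then copies that file's contents into the terminated container state. A minimal hand-rolled version, with an illustrative image, names, and UID (and assuming, as the conformance test does, that the mounted termination-log file is writable by the non-root user), might look like:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29
    command: ["/bin/sh", "-c", "echo -n DONE > /dev/termination-custom-log"]
    terminationMessagePath: /dev/termination-custom-log
    securityContext:
      runAsUser: 1000
EOF
# Once the container exits, the message surfaces in the pod status:
kubectl get pod termination-message-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'   # expected: DONE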
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":309,"completed":225,"skipped":3576,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:09:08.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 13 00:09:09.152: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 13 00:09:11.165: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746093349, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746093349, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746093349, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746093349, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 13 00:09:14.209: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:09:14.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4135" for this suite. 
STEP: Destroying namespace "webhook-4135-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:6.845 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":309,"completed":226,"skipped":3671,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:09:14.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:09:31.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1604" for this suite. • [SLOW TEST:16.309 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":309,"completed":227,"skipped":3700,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:09:31.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-6372 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-6372 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6372 Jan 13 00:09:31.551: INFO: Found 0 stateful pods, waiting for 1 Jan 13 00:09:41.556: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jan 13 00:09:41.560: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-6372 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 13 00:09:45.047: INFO: stderr: "I0113 00:09:44.904105 3085 log.go:181] (0xc000126000) (0xc000891b80) Create stream\nI0113 00:09:44.904187 3085 log.go:181] (0xc000126000) (0xc000891b80) Stream added, broadcasting: 1\nI0113 00:09:44.907411 3085 log.go:181] (0xc000126000) Reply frame received for 1\nI0113 00:09:44.907466 3085 log.go:181] (0xc000126000) (0xc00056e280) Create stream\nI0113 00:09:44.907484 3085 log.go:181] (0xc000126000) (0xc00056e280) Stream added, broadcasting: 3\nI0113 00:09:44.908435 3085 log.go:181] (0xc000126000) Reply frame received for 3\nI0113 00:09:44.908476 3085 log.go:181] (0xc000126000) (0xc00019e6e0) Create stream\nI0113 00:09:44.908497 3085 log.go:181] (0xc000126000) (0xc00019e6e0) Stream added, broadcasting: 5\nI0113 00:09:44.909411 3085 log.go:181] (0xc000126000) Reply frame received for 5\nI0113 00:09:45.001789 3085 log.go:181] (0xc000126000) Data frame received for 5\nI0113 00:09:45.001813 3085 log.go:181] (0xc00019e6e0) (5) Data frame handling\nI0113 00:09:45.001825 3085 log.go:181] (0xc00019e6e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0113 00:09:45.038400 3085 log.go:181] (0xc000126000) Data frame received for 3\nI0113 00:09:45.038437 3085 log.go:181] (0xc00056e280) (3) Data frame 
handling\nI0113 00:09:45.038468 3085 log.go:181] (0xc00056e280) (3) Data frame sent\nI0113 00:09:45.038483 3085 log.go:181] (0xc000126000) Data frame received for 3\nI0113 00:09:45.038497 3085 log.go:181] (0xc00056e280) (3) Data frame handling\nI0113 00:09:45.038796 3085 log.go:181] (0xc000126000) Data frame received for 5\nI0113 00:09:45.038821 3085 log.go:181] (0xc00019e6e0) (5) Data frame handling\nI0113 00:09:45.040981 3085 log.go:181] (0xc000126000) Data frame received for 1\nI0113 00:09:45.041023 3085 log.go:181] (0xc000891b80) (1) Data frame handling\nI0113 00:09:45.041037 3085 log.go:181] (0xc000891b80) (1) Data frame sent\nI0113 00:09:45.041049 3085 log.go:181] (0xc000126000) (0xc000891b80) Stream removed, broadcasting: 1\nI0113 00:09:45.041062 3085 log.go:181] (0xc000126000) Go away received\nI0113 00:09:45.041694 3085 log.go:181] (0xc000126000) (0xc000891b80) Stream removed, broadcasting: 1\nI0113 00:09:45.041735 3085 log.go:181] (0xc000126000) (0xc00056e280) Stream removed, broadcasting: 3\nI0113 00:09:45.041761 3085 log.go:181] (0xc000126000) (0xc00019e6e0) Stream removed, broadcasting: 5\n" Jan 13 00:09:45.047: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 13 00:09:45.047: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 13 00:09:45.052: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jan 13 00:09:55.057: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 13 00:09:55.057: INFO: Waiting for statefulset status.replicas updated to 0 Jan 13 00:09:55.081: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999435s Jan 13 00:09:56.086: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.989360913s Jan 13 00:09:57.089: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.984441623s Jan 13 00:09:58.105: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.980943799s Jan 13 00:09:59.108: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.965379075s Jan 13 00:10:00.113: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.962295103s Jan 13 00:10:01.117: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.957359063s Jan 13 00:10:02.121: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.953363807s Jan 13 00:10:03.125: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.949456392s Jan 13 00:10:04.129: INFO: Verifying statefulset ss doesn't scale past 1 for another 945.316159ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6372 Jan 13 00:10:05.133: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-6372 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 00:10:05.391: INFO: stderr: "I0113 00:10:05.270311 3102 log.go:181] (0xc0008eadc0) (0xc0009163c0) Create stream\nI0113 00:10:05.270375 3102 log.go:181] (0xc0008eadc0) (0xc0009163c0) Stream added, broadcasting: 1\nI0113 00:10:05.272786 3102 log.go:181] (0xc0008eadc0) Reply frame received for 1\nI0113 00:10:05.272937 3102 log.go:181] (0xc0008eadc0) (0xc000916460) Create stream\nI0113 00:10:05.272969 3102 log.go:181] (0xc0008eadc0) (0xc000916460) Stream added, broadcasting: 3\nI0113 00:10:05.274067 3102 
log.go:181] (0xc0008eadc0) Reply frame received for 3\nI0113 00:10:05.274108 3102 log.go:181] (0xc0008eadc0) (0xc000916500) Create stream\nI0113 00:10:05.274123 3102 log.go:181] (0xc0008eadc0) (0xc000916500) Stream added, broadcasting: 5\nI0113 00:10:05.274987 3102 log.go:181] (0xc0008eadc0) Reply frame received for 5\nI0113 00:10:05.382306 3102 log.go:181] (0xc0008eadc0) Data frame received for 3\nI0113 00:10:05.382335 3102 log.go:181] (0xc000916460) (3) Data frame handling\nI0113 00:10:05.382342 3102 log.go:181] (0xc000916460) (3) Data frame sent\nI0113 00:10:05.382348 3102 log.go:181] (0xc0008eadc0) Data frame received for 3\nI0113 00:10:05.382354 3102 log.go:181] (0xc000916460) (3) Data frame handling\nI0113 00:10:05.382409 3102 log.go:181] (0xc0008eadc0) Data frame received for 5\nI0113 00:10:05.382446 3102 log.go:181] (0xc000916500) (5) Data frame handling\nI0113 00:10:05.382478 3102 log.go:181] (0xc000916500) (5) Data frame sent\nI0113 00:10:05.382513 3102 log.go:181] (0xc0008eadc0) Data frame received for 5\nI0113 00:10:05.382539 3102 log.go:181] (0xc000916500) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0113 00:10:05.384378 3102 log.go:181] (0xc0008eadc0) Data frame received for 1\nI0113 00:10:05.384417 3102 log.go:181] (0xc0009163c0) (1) Data frame handling\nI0113 00:10:05.384447 3102 log.go:181] (0xc0009163c0) (1) Data frame sent\nI0113 00:10:05.384481 3102 log.go:181] (0xc0008eadc0) (0xc0009163c0) Stream removed, broadcasting: 1\nI0113 00:10:05.384552 3102 log.go:181] (0xc0008eadc0) Go away received\nI0113 00:10:05.385233 3102 log.go:181] (0xc0008eadc0) (0xc0009163c0) Stream removed, broadcasting: 1\nI0113 00:10:05.385267 3102 log.go:181] (0xc0008eadc0) (0xc000916460) Stream removed, broadcasting: 3\nI0113 00:10:05.385289 3102 log.go:181] (0xc0008eadc0) (0xc000916500) Stream removed, broadcasting: 5\n" Jan 13 00:10:05.391: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 13 00:10:05.391: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 13 00:10:05.394: INFO: Found 1 stateful pods, waiting for 3 Jan 13 00:10:15.400: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 13 00:10:15.400: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 13 00:10:15.400: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Jan 13 00:10:15.408: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-6372 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 13 00:10:15.654: INFO: stderr: "I0113 00:10:15.554228 3120 log.go:181] (0xc000636000) (0xc000718000) Create stream\nI0113 00:10:15.554317 3120 log.go:181] (0xc000636000) (0xc000718000) Stream added, broadcasting: 1\nI0113 00:10:15.555869 3120 log.go:181] (0xc000636000) Reply frame received for 1\nI0113 00:10:15.555903 3120 log.go:181] (0xc000636000) (0xc000c2a1e0) Create stream\nI0113 00:10:15.555919 3120 log.go:181] (0xc000636000) (0xc000c2a1e0) Stream added, broadcasting: 3\nI0113 00:10:15.556812 3120 log.go:181] (0xc000636000) Reply frame received for 3\nI0113 00:10:15.556901 3120 log.go:181] (0xc000636000) (0xc00053a0a0) Create stream\nI0113 
00:10:15.556916 3120 log.go:181] (0xc000636000) (0xc00053a0a0) Stream added, broadcasting: 5\nI0113 00:10:15.557844 3120 log.go:181] (0xc000636000) Reply frame received for 5\nI0113 00:10:15.646942 3120 log.go:181] (0xc000636000) Data frame received for 5\nI0113 00:10:15.646983 3120 log.go:181] (0xc00053a0a0) (5) Data frame handling\nI0113 00:10:15.646999 3120 log.go:181] (0xc00053a0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0113 00:10:15.647044 3120 log.go:181] (0xc000636000) Data frame received for 3\nI0113 00:10:15.647087 3120 log.go:181] (0xc000c2a1e0) (3) Data frame handling\nI0113 00:10:15.647109 3120 log.go:181] (0xc000c2a1e0) (3) Data frame sent\nI0113 00:10:15.647128 3120 log.go:181] (0xc000636000) Data frame received for 3\nI0113 00:10:15.647140 3120 log.go:181] (0xc000c2a1e0) (3) Data frame handling\nI0113 00:10:15.647180 3120 log.go:181] (0xc000636000) Data frame received for 5\nI0113 00:10:15.647221 3120 log.go:181] (0xc00053a0a0) (5) Data frame handling\nI0113 00:10:15.648724 3120 log.go:181] (0xc000636000) Data frame received for 1\nI0113 00:10:15.648753 3120 log.go:181] (0xc000718000) (1) Data frame handling\nI0113 00:10:15.648951 3120 log.go:181] (0xc000718000) (1) Data frame sent\nI0113 00:10:15.649002 3120 log.go:181] (0xc000636000) (0xc000718000) Stream removed, broadcasting: 1\nI0113 00:10:15.649046 3120 log.go:181] (0xc000636000) Go away received\nI0113 00:10:15.649523 3120 log.go:181] (0xc000636000) (0xc000718000) Stream removed, broadcasting: 1\nI0113 00:10:15.649545 3120 log.go:181] (0xc000636000) (0xc000c2a1e0) Stream removed, broadcasting: 3\nI0113 00:10:15.649558 3120 log.go:181] (0xc000636000) (0xc00053a0a0) Stream removed, broadcasting: 5\n" Jan 13 00:10:15.654: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 13 00:10:15.654: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 13 00:10:15.654: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-6372 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 13 00:10:15.891: INFO: stderr: "I0113 00:10:15.784992 3138 log.go:181] (0xc000730160) (0xc0007283c0) Create stream\nI0113 00:10:15.785094 3138 log.go:181] (0xc000730160) (0xc0007283c0) Stream added, broadcasting: 1\nI0113 00:10:15.786949 3138 log.go:181] (0xc000730160) Reply frame received for 1\nI0113 00:10:15.786987 3138 log.go:181] (0xc000730160) (0xc000554000) Create stream\nI0113 00:10:15.787002 3138 log.go:181] (0xc000730160) (0xc000554000) Stream added, broadcasting: 3\nI0113 00:10:15.787916 3138 log.go:181] (0xc000730160) Reply frame received for 3\nI0113 00:10:15.787959 3138 log.go:181] (0xc000730160) (0xc0005540a0) Create stream\nI0113 00:10:15.787973 3138 log.go:181] (0xc000730160) (0xc0005540a0) Stream added, broadcasting: 5\nI0113 00:10:15.788996 3138 log.go:181] (0xc000730160) Reply frame received for 5\nI0113 00:10:15.854820 3138 log.go:181] (0xc000730160) Data frame received for 5\nI0113 00:10:15.854851 3138 log.go:181] (0xc0005540a0) (5) Data frame handling\nI0113 00:10:15.854873 3138 log.go:181] (0xc0005540a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0113 00:10:15.880798 3138 log.go:181] (0xc000730160) Data frame received for 3\nI0113 00:10:15.880966 3138 log.go:181] (0xc000554000) (3) Data frame handling\nI0113 00:10:15.881007 3138 
log.go:181] (0xc000554000) (3) Data frame sent\nI0113 00:10:15.881277 3138 log.go:181] (0xc000730160) Data frame received for 5\nI0113 00:10:15.881290 3138 log.go:181] (0xc0005540a0) (5) Data frame handling\nI0113 00:10:15.881307 3138 log.go:181] (0xc000730160) Data frame received for 3\nI0113 00:10:15.881321 3138 log.go:181] (0xc000554000) (3) Data frame handling\nI0113 00:10:15.884236 3138 log.go:181] (0xc000730160) Data frame received for 1\nI0113 00:10:15.884372 3138 log.go:181] (0xc0007283c0) (1) Data frame handling\nI0113 00:10:15.884485 3138 log.go:181] (0xc0007283c0) (1) Data frame sent\nI0113 00:10:15.884616 3138 log.go:181] (0xc000730160) (0xc0007283c0) Stream removed, broadcasting: 1\nI0113 00:10:15.884760 3138 log.go:181] (0xc000730160) Go away received\nI0113 00:10:15.885997 3138 log.go:181] (0xc000730160) (0xc0007283c0) Stream removed, broadcasting: 1\nI0113 00:10:15.886013 3138 log.go:181] (0xc000730160) (0xc000554000) Stream removed, broadcasting: 3\nI0113 00:10:15.886019 3138 log.go:181] (0xc000730160) (0xc0005540a0) Stream removed, broadcasting: 5\n" Jan 13 00:10:15.892: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 13 00:10:15.892: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 13 00:10:15.892: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-6372 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 13 00:10:16.138: INFO: stderr: "I0113 00:10:16.033826 3156 log.go:181] (0xc0006e4000) (0xc0007b4b40) Create stream\nI0113 00:10:16.033950 3156 log.go:181] (0xc0006e4000) (0xc0007b4b40) Stream added, broadcasting: 1\nI0113 00:10:16.036678 3156 log.go:181] (0xc0006e4000) Reply frame received for 1\nI0113 00:10:16.036737 3156 log.go:181] (0xc0006e4000) (0xc0007b4dc0) Create stream\nI0113 00:10:16.036755 3156 log.go:181] (0xc0006e4000) (0xc0007b4dc0) Stream added, broadcasting: 3\nI0113 00:10:16.037893 3156 log.go:181] (0xc0006e4000) Reply frame received for 3\nI0113 00:10:16.037964 3156 log.go:181] (0xc0006e4000) (0xc000c32280) Create stream\nI0113 00:10:16.038001 3156 log.go:181] (0xc0006e4000) (0xc000c32280) Stream added, broadcasting: 5\nI0113 00:10:16.039097 3156 log.go:181] (0xc0006e4000) Reply frame received for 5\nI0113 00:10:16.098710 3156 log.go:181] (0xc0006e4000) Data frame received for 5\nI0113 00:10:16.098744 3156 log.go:181] (0xc000c32280) (5) Data frame handling\nI0113 00:10:16.098781 3156 log.go:181] (0xc000c32280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0113 00:10:16.130548 3156 log.go:181] (0xc0006e4000) Data frame received for 3\nI0113 00:10:16.130576 3156 log.go:181] (0xc0007b4dc0) (3) Data frame handling\nI0113 00:10:16.130592 3156 log.go:181] (0xc0007b4dc0) (3) Data frame sent\nI0113 00:10:16.130775 3156 log.go:181] (0xc0006e4000) Data frame received for 5\nI0113 00:10:16.130813 3156 log.go:181] (0xc000c32280) (5) Data frame handling\nI0113 00:10:16.130837 3156 log.go:181] (0xc0006e4000) Data frame received for 3\nI0113 00:10:16.130845 3156 log.go:181] (0xc0007b4dc0) (3) Data frame handling\nI0113 00:10:16.132980 3156 log.go:181] (0xc0006e4000) Data frame received for 1\nI0113 00:10:16.133014 3156 log.go:181] (0xc0007b4b40) (1) Data frame handling\nI0113 00:10:16.133032 3156 log.go:181] (0xc0007b4b40) (1) Data frame sent\nI0113 00:10:16.133050 3156 log.go:181] (0xc0006e4000) 
(0xc0007b4b40) Stream removed, broadcasting: 1\nI0113 00:10:16.133217 3156 log.go:181] (0xc0006e4000) Go away received\nI0113 00:10:16.133465 3156 log.go:181] (0xc0006e4000) (0xc0007b4b40) Stream removed, broadcasting: 1\nI0113 00:10:16.133483 3156 log.go:181] (0xc0006e4000) (0xc0007b4dc0) Stream removed, broadcasting: 3\nI0113 00:10:16.133492 3156 log.go:181] (0xc0006e4000) (0xc000c32280) Stream removed, broadcasting: 5\n" Jan 13 00:10:16.139: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 13 00:10:16.139: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 13 00:10:16.139: INFO: Waiting for statefulset status.replicas updated to 0 Jan 13 00:10:16.188: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jan 13 00:10:26.195: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 13 00:10:26.195: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 13 00:10:26.195: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 13 00:10:26.239: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999619s Jan 13 00:10:27.246: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.962307455s Jan 13 00:10:28.251: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.955405334s Jan 13 00:10:29.264: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.950785903s Jan 13 00:10:30.269: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.937519274s Jan 13 00:10:31.274: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.93260832s Jan 13 00:10:32.279: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.927815333s Jan 13 00:10:33.288: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.921978854s Jan 13 00:10:34.294: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.913616082s Jan 13 00:10:35.298: INFO: Verifying statefulset ss doesn't scale past 3 for another 907.850091ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-6372 Jan 13 00:10:36.304: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-6372 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 00:10:36.559: INFO: stderr: "I0113 00:10:36.453056 3174 log.go:181] (0xc0001b2370) (0xc000142460) Create stream\nI0113 00:10:36.453112 3174 log.go:181] (0xc0001b2370) (0xc000142460) Stream added, broadcasting: 1\nI0113 00:10:36.457571 3174 log.go:181] (0xc0001b2370) Reply frame received for 1\nI0113 00:10:36.457598 3174 log.go:181] (0xc0001b2370) (0xc0002157c0) Create stream\nI0113 00:10:36.457606 3174 log.go:181] (0xc0001b2370) (0xc0002157c0) Stream added, broadcasting: 3\nI0113 00:10:36.458643 3174 log.go:181] (0xc0001b2370) Reply frame received for 3\nI0113 00:10:36.458683 3174 log.go:181] (0xc0001b2370) (0xc000738000) Create stream\nI0113 00:10:36.458697 3174 log.go:181] (0xc0001b2370) (0xc000738000) Stream added, broadcasting: 5\nI0113 00:10:36.459761 3174 log.go:181] (0xc0001b2370) Reply frame received for 5\nI0113 00:10:36.551053 3174 log.go:181] (0xc0001b2370) Data frame received for 3\nI0113 00:10:36.551078 3174 log.go:181] (0xc0002157c0) (3) Data frame handling\nI0113 
00:10:36.551091 3174 log.go:181] (0xc0002157c0) (3) Data frame sent\nI0113 00:10:36.551100 3174 log.go:181] (0xc0001b2370) Data frame received for 3\nI0113 00:10:36.551108 3174 log.go:181] (0xc0002157c0) (3) Data frame handling\nI0113 00:10:36.551119 3174 log.go:181] (0xc0001b2370) Data frame received for 5\nI0113 00:10:36.551125 3174 log.go:181] (0xc000738000) (5) Data frame handling\nI0113 00:10:36.551131 3174 log.go:181] (0xc000738000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0113 00:10:36.551158 3174 log.go:181] (0xc0001b2370) Data frame received for 5\nI0113 00:10:36.551167 3174 log.go:181] (0xc000738000) (5) Data frame handling\nI0113 00:10:36.552918 3174 log.go:181] (0xc0001b2370) Data frame received for 1\nI0113 00:10:36.552941 3174 log.go:181] (0xc000142460) (1) Data frame handling\nI0113 00:10:36.552952 3174 log.go:181] (0xc000142460) (1) Data frame sent\nI0113 00:10:36.553113 3174 log.go:181] (0xc0001b2370) (0xc000142460) Stream removed, broadcasting: 1\nI0113 00:10:36.553159 3174 log.go:181] (0xc0001b2370) Go away received\nI0113 00:10:36.553620 3174 log.go:181] (0xc0001b2370) (0xc000142460) Stream removed, broadcasting: 1\nI0113 00:10:36.553649 3174 log.go:181] (0xc0001b2370) (0xc0002157c0) Stream removed, broadcasting: 3\nI0113 00:10:36.553663 3174 log.go:181] (0xc0001b2370) (0xc000738000) Stream removed, broadcasting: 5\n" Jan 13 00:10:36.559: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 13 00:10:36.559: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 13 00:10:36.559: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-6372 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 00:10:36.757: INFO: stderr: "I0113 00:10:36.695735 3193 log.go:181] (0xc0001ca000) (0xc0005d0820) Create stream\nI0113 00:10:36.695828 3193 log.go:181] (0xc0001ca000) (0xc0005d0820) Stream added, broadcasting: 1\nI0113 00:10:36.698276 3193 log.go:181] (0xc0001ca000) Reply frame received for 1\nI0113 00:10:36.698349 3193 log.go:181] (0xc0001ca000) (0xc000b08a00) Create stream\nI0113 00:10:36.698383 3193 log.go:181] (0xc0001ca000) (0xc000b08a00) Stream added, broadcasting: 3\nI0113 00:10:36.699543 3193 log.go:181] (0xc0001ca000) Reply frame received for 3\nI0113 00:10:36.699599 3193 log.go:181] (0xc0001ca000) (0xc00081c0a0) Create stream\nI0113 00:10:36.699615 3193 log.go:181] (0xc0001ca000) (0xc00081c0a0) Stream added, broadcasting: 5\nI0113 00:10:36.700716 3193 log.go:181] (0xc0001ca000) Reply frame received for 5\nI0113 00:10:36.749237 3193 log.go:181] (0xc0001ca000) Data frame received for 3\nI0113 00:10:36.749287 3193 log.go:181] (0xc000b08a00) (3) Data frame handling\nI0113 00:10:36.749303 3193 log.go:181] (0xc000b08a00) (3) Data frame sent\nI0113 00:10:36.749314 3193 log.go:181] (0xc0001ca000) Data frame received for 3\nI0113 00:10:36.749325 3193 log.go:181] (0xc000b08a00) (3) Data frame handling\nI0113 00:10:36.749358 3193 log.go:181] (0xc0001ca000) Data frame received for 5\nI0113 00:10:36.749369 3193 log.go:181] (0xc00081c0a0) (5) Data frame handling\nI0113 00:10:36.749388 3193 log.go:181] (0xc00081c0a0) (5) Data frame sent\nI0113 00:10:36.749414 3193 log.go:181] (0xc0001ca000) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0113 00:10:36.749431 3193 log.go:181] (0xc00081c0a0) (5) 
Data frame handling\nI0113 00:10:36.750758 3193 log.go:181] (0xc0001ca000) Data frame received for 1\nI0113 00:10:36.750791 3193 log.go:181] (0xc0005d0820) (1) Data frame handling\nI0113 00:10:36.750820 3193 log.go:181] (0xc0005d0820) (1) Data frame sent\nI0113 00:10:36.750840 3193 log.go:181] (0xc0001ca000) (0xc0005d0820) Stream removed, broadcasting: 1\nI0113 00:10:36.750858 3193 log.go:181] (0xc0001ca000) Go away received\nI0113 00:10:36.751335 3193 log.go:181] (0xc0001ca000) (0xc0005d0820) Stream removed, broadcasting: 1\nI0113 00:10:36.751361 3193 log.go:181] (0xc0001ca000) (0xc000b08a00) Stream removed, broadcasting: 3\nI0113 00:10:36.751384 3193 log.go:181] (0xc0001ca000) (0xc00081c0a0) Stream removed, broadcasting: 5\n" Jan 13 00:10:36.757: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 13 00:10:36.757: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 13 00:10:36.757: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-6372 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 00:10:37.020: INFO: stderr: "I0113 00:10:36.950214 3210 log.go:181] (0xc000a420b0) (0xc0007d8000) Create stream\nI0113 00:10:36.950272 3210 log.go:181] (0xc000a420b0) (0xc0007d8000) Stream added, broadcasting: 1\nI0113 00:10:36.951831 3210 log.go:181] (0xc000a420b0) Reply frame received for 1\nI0113 00:10:36.951862 3210 log.go:181] (0xc000a420b0) (0xc00089e000) Create stream\nI0113 00:10:36.951871 3210 log.go:181] (0xc000a420b0) (0xc00089e000) Stream added, broadcasting: 3\nI0113 00:10:36.952650 3210 log.go:181] (0xc000a420b0) Reply frame received for 3\nI0113 00:10:36.952673 3210 log.go:181] (0xc000a420b0) (0xc0007d83c0) Create stream\nI0113 00:10:36.952681 3210 log.go:181] (0xc000a420b0) (0xc0007d83c0) Stream added, broadcasting: 5\nI0113 00:10:36.953531 3210 log.go:181] (0xc000a420b0) Reply frame received for 5\nI0113 00:10:37.014650 3210 log.go:181] (0xc000a420b0) Data frame received for 5\nI0113 00:10:37.014671 3210 log.go:181] (0xc0007d83c0) (5) Data frame handling\nI0113 00:10:37.014683 3210 log.go:181] (0xc0007d83c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0113 00:10:37.014697 3210 log.go:181] (0xc000a420b0) Data frame received for 3\nI0113 00:10:37.014701 3210 log.go:181] (0xc00089e000) (3) Data frame handling\nI0113 00:10:37.014706 3210 log.go:181] (0xc00089e000) (3) Data frame sent\nI0113 00:10:37.014711 3210 log.go:181] (0xc000a420b0) Data frame received for 3\nI0113 00:10:37.014715 3210 log.go:181] (0xc00089e000) (3) Data frame handling\nI0113 00:10:37.014827 3210 log.go:181] (0xc000a420b0) Data frame received for 5\nI0113 00:10:37.014850 3210 log.go:181] (0xc0007d83c0) (5) Data frame handling\nI0113 00:10:37.015690 3210 log.go:181] (0xc000a420b0) Data frame received for 1\nI0113 00:10:37.015708 3210 log.go:181] (0xc0007d8000) (1) Data frame handling\nI0113 00:10:37.015719 3210 log.go:181] (0xc0007d8000) (1) Data frame sent\nI0113 00:10:37.015731 3210 log.go:181] (0xc000a420b0) (0xc0007d8000) Stream removed, broadcasting: 1\nI0113 00:10:37.015780 3210 log.go:181] (0xc000a420b0) Go away received\nI0113 00:10:37.016449 3210 log.go:181] (0xc000a420b0) (0xc0007d8000) Stream removed, broadcasting: 1\nI0113 00:10:37.016468 3210 log.go:181] (0xc000a420b0) (0xc00089e000) Stream removed, broadcasting: 3\nI0113 00:10:37.016477 
3210 log.go:181] (0xc000a420b0) (0xc0007d83c0) Stream removed, broadcasting: 5\n" Jan 13 00:10:37.020: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 13 00:10:37.020: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 13 00:10:37.020: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jan 13 00:12:07.032: INFO: Deleting all statefulset in ns statefulset-6372 Jan 13 00:12:07.035: INFO: Scaling statefulset ss to 0 Jan 13 00:12:07.069: INFO: Waiting for statefulset status.replicas updated to 0 Jan 13 00:12:07.072: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:12:07.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6372" for this suite. • [SLOW TEST:155.797 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":309,"completed":228,"skipped":3747,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:12:07.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name projected-configmap-test-volume-c6253327-55d8-414f-a49a-664b0d05b351 STEP: Creating a pod to test consume configMaps Jan 13 00:12:07.224: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1473bf34-f888-474f-a4d3-427bd53232fe" in namespace "projected-4805" to be "Succeeded or Failed" Jan 13 00:12:07.239: INFO: Pod "pod-projected-configmaps-1473bf34-f888-474f-a4d3-427bd53232fe": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.222551ms Jan 13 00:12:09.243: INFO: Pod "pod-projected-configmaps-1473bf34-f888-474f-a4d3-427bd53232fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019293512s Jan 13 00:12:11.252: INFO: Pod "pod-projected-configmaps-1473bf34-f888-474f-a4d3-427bd53232fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028070891s STEP: Saw pod success Jan 13 00:12:11.252: INFO: Pod "pod-projected-configmaps-1473bf34-f888-474f-a4d3-427bd53232fe" satisfied condition "Succeeded or Failed" Jan 13 00:12:11.255: INFO: Trying to get logs from node leguer-worker pod pod-projected-configmaps-1473bf34-f888-474f-a4d3-427bd53232fe container agnhost-container: STEP: delete the pod Jan 13 00:12:11.316: INFO: Waiting for pod pod-projected-configmaps-1473bf34-f888-474f-a4d3-427bd53232fe to disappear Jan 13 00:12:11.329: INFO: Pod pod-projected-configmaps-1473bf34-f888-474f-a4d3-427bd53232fe no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:12:11.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4805" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":229,"skipped":3748,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:12:11.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test substitution in container's command Jan 13 00:12:11.683: INFO: Waiting up to 5m0s for pod "var-expansion-865439bf-9d33-44b7-a089-88816c8e0c61" in namespace "var-expansion-636" to be "Succeeded or Failed" Jan 13 00:12:11.694: INFO: Pod "var-expansion-865439bf-9d33-44b7-a089-88816c8e0c61": Phase="Pending", Reason="", readiness=false. Elapsed: 11.861037ms Jan 13 00:12:13.737: INFO: Pod "var-expansion-865439bf-9d33-44b7-a089-88816c8e0c61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0547007s Jan 13 00:12:15.741: INFO: Pod "var-expansion-865439bf-9d33-44b7-a089-88816c8e0c61": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.058086089s STEP: Saw pod success Jan 13 00:12:15.741: INFO: Pod "var-expansion-865439bf-9d33-44b7-a089-88816c8e0c61" satisfied condition "Succeeded or Failed" Jan 13 00:12:15.743: INFO: Trying to get logs from node leguer-worker2 pod var-expansion-865439bf-9d33-44b7-a089-88816c8e0c61 container dapi-container: STEP: delete the pod Jan 13 00:12:15.869: INFO: Waiting for pod var-expansion-865439bf-9d33-44b7-a089-88816c8e0c61 to disappear Jan 13 00:12:15.886: INFO: Pod var-expansion-865439bf-9d33-44b7-a089-88816c8e0c61 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:12:15.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-636" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":309,"completed":230,"skipped":3759,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:12:15.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward api env vars Jan 13 00:12:16.097: INFO: Waiting up to 5m0s for pod "downward-api-c8bcc555-a669-49e6-8463-89b7de944c20" in namespace "downward-api-1316" to be "Succeeded or Failed" Jan 13 00:12:16.110: INFO: Pod "downward-api-c8bcc555-a669-49e6-8463-89b7de944c20": Phase="Pending", Reason="", readiness=false. Elapsed: 12.841283ms Jan 13 00:12:18.211: INFO: Pod "downward-api-c8bcc555-a669-49e6-8463-89b7de944c20": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113852855s Jan 13 00:12:20.215: INFO: Pod "downward-api-c8bcc555-a669-49e6-8463-89b7de944c20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.118580559s STEP: Saw pod success Jan 13 00:12:20.215: INFO: Pod "downward-api-c8bcc555-a669-49e6-8463-89b7de944c20" satisfied condition "Succeeded or Failed" Jan 13 00:12:20.219: INFO: Trying to get logs from node leguer-worker pod downward-api-c8bcc555-a669-49e6-8463-89b7de944c20 container dapi-container: STEP: delete the pod Jan 13 00:12:20.240: INFO: Waiting for pod downward-api-c8bcc555-a669-49e6-8463-89b7de944c20 to disappear Jan 13 00:12:20.263: INFO: Pod downward-api-c8bcc555-a669-49e6-8463-89b7de944c20 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:12:20.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1316" for this suite. 
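The Downward API test above injects the node's host IP into a container environment variable through a fieldRef on status.hostIP. A minimal pod-spec sketch of that wiring (pod name and image are illustrative, not the ones used by the suite):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "printenv HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
EOF
# the pod's log should contain the IP of the node it was scheduled onto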
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":309,"completed":231,"skipped":3780,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:12:20.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 13 00:12:21.239: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Jan 13 00:12:23.255: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746093541, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746093541, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746093541, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746093541, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 13 00:12:25.260: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746093541, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746093541, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746093541, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746093541, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 
13 00:12:28.288: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:12:28.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7500" for this suite. STEP: Destroying namespace "webhook-7500-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:8.170 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":309,"completed":232,"skipped":3818,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:12:28.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service multi-endpoint-test in namespace services-7626 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7626 to expose endpoints map[] Jan 13 00:12:28.586: INFO: Failed go get Endpoints object: endpoints "multi-endpoint-test" not found 
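At this point the suite has just created the multi-endpoint-test Service and is waiting for its Endpoints object to show up; pods exposing the two target ports are then added and removed to drive the endpoint map. A rough sketch of a comparable two-port Service (port numbers, port names and selector are illustrative):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
  namespace: services-7626
spec:
  selector:
    app: multi-endpoint-test
  ports:
  - name: portname1
    port: 80
    targetPort: 100
  - name: portname2
    port: 81
    targetPort: 101
EOF
# the Endpoints object fills in as matching pods become Ready on ports 100 and 101
kubectl get endpoints multi-endpoint-test -n services-7626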
Jan 13 00:12:29.616: INFO: successfully validated that service multi-endpoint-test in namespace services-7626 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-7626 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7626 to expose endpoints map[pod1:[100]] Jan 13 00:12:33.887: INFO: successfully validated that service multi-endpoint-test in namespace services-7626 exposes endpoints map[pod1:[100]] STEP: Creating pod pod2 in namespace services-7626 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7626 to expose endpoints map[pod1:[100] pod2:[101]] Jan 13 00:12:37.951: INFO: successfully validated that service multi-endpoint-test in namespace services-7626 exposes endpoints map[pod1:[100] pod2:[101]] STEP: Deleting pod pod1 in namespace services-7626 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7626 to expose endpoints map[pod2:[101]] Jan 13 00:12:38.048: INFO: successfully validated that service multi-endpoint-test in namespace services-7626 exposes endpoints map[pod2:[101]] STEP: Deleting pod pod2 in namespace services-7626 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7626 to expose endpoints map[] Jan 13 00:12:38.409: INFO: successfully validated that service multi-endpoint-test in namespace services-7626 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:12:38.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7626" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:10.366 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":309,"completed":233,"skipped":3892,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:12:38.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:12:39.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5514" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":309,"completed":234,"skipped":3894,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:12:39.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating projection with secret that has name projected-secret-test-e7adaff1-03fd-42ba-b3d5-3e5936c9accc STEP: Creating a pod to test consume secrets Jan 13 00:12:39.528: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-057a91d2-68d8-4a14-928b-f8ee32d8e3ed" in namespace "projected-2790" to be "Succeeded or Failed" Jan 13 00:12:39.558: INFO: Pod "pod-projected-secrets-057a91d2-68d8-4a14-928b-f8ee32d8e3ed": Phase="Pending", Reason="", readiness=false. Elapsed: 29.613754ms Jan 13 00:12:41.561: INFO: Pod "pod-projected-secrets-057a91d2-68d8-4a14-928b-f8ee32d8e3ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032808264s Jan 13 00:12:43.658: INFO: Pod "pod-projected-secrets-057a91d2-68d8-4a14-928b-f8ee32d8e3ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.129486271s STEP: Saw pod success Jan 13 00:12:43.658: INFO: Pod "pod-projected-secrets-057a91d2-68d8-4a14-928b-f8ee32d8e3ed" satisfied condition "Succeeded or Failed" Jan 13 00:12:43.676: INFO: Trying to get logs from node leguer-worker pod pod-projected-secrets-057a91d2-68d8-4a14-928b-f8ee32d8e3ed container projected-secret-volume-test: STEP: delete the pod Jan 13 00:12:43.717: INFO: Waiting for pod pod-projected-secrets-057a91d2-68d8-4a14-928b-f8ee32d8e3ed to disappear Jan 13 00:12:43.797: INFO: Pod pod-projected-secrets-057a91d2-68d8-4a14-928b-f8ee32d8e3ed no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:12:43.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2790" for this suite. 
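The projected Secret test just above mounts a Secret through a projected volume and verifies the file permissions applied via defaultMode. A minimal sketch of such a volume (names, image and mode are illustrative, and the referenced Secret is assumed to exist in the namespace):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0400
      sources:
      - secret:
          name: projected-secret-demo-secret
EOF
# ls -l should show the projected key files with mode -r--------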
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":235,"skipped":3902,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:12:43.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should run through a ConfigMap lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a ConfigMap STEP: fetching the ConfigMap STEP: patching the ConfigMap STEP: listing all ConfigMaps in all namespaces with a label selector STEP: deleting the ConfigMap by collection with a label selector STEP: listing all ConfigMaps in test namespace [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:12:44.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6866" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":309,"completed":236,"skipped":3922,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:12:44.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Jan 13 00:12:44.256: INFO: Waiting up to 1m0s for all nodes to be ready Jan 13 00:13:44.285: INFO: Waiting for terminating namespaces to be deleted... 
[BeforeEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:13:44.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:488 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. Jan 13 00:13:48.438: INFO: found a healthy node: leguer-worker [It] runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 00:14:02.603: INFO: pods created so far: [1 1 1] Jan 13 00:14:02.604: INFO: length of pods created so far: 3 Jan 13 00:15:11.244: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:15:18.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-2562" for this suite. [AfterEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:462 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:15:18.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-815" for this suite. 
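The PreemptionExecutionPath test above packs ReplicaSets of different priorities onto a single node (leguer-worker in this run) and verifies that higher-priority pods preempt the lower-priority ones as capacity runs out. A rough sketch of the kind of PriorityClass pair involved (names and values are illustrative, not the ones created by the suite):
kubectl apply -f - <<'EOF'
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: preemption-demo-low
value: 1
---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: preemption-demo-high
value: 1000
EOF
# workloads opt in through the pod spec, e.g.:
#   spec:
#     priorityClassName: preemption-demo-high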
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:154.280 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:451 runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":309,"completed":237,"skipped":3946,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:15:18.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test override all Jan 13 00:15:18.510: INFO: Waiting up to 5m0s for pod "client-containers-66165684-e184-4b01-96ef-98436cef5e28" in namespace "containers-8889" to be "Succeeded or Failed" Jan 13 00:15:18.520: INFO: Pod "client-containers-66165684-e184-4b01-96ef-98436cef5e28": Phase="Pending", Reason="", readiness=false. Elapsed: 10.621688ms Jan 13 00:15:20.525: INFO: Pod "client-containers-66165684-e184-4b01-96ef-98436cef5e28": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015247364s Jan 13 00:15:22.530: INFO: Pod "client-containers-66165684-e184-4b01-96ef-98436cef5e28": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020190245s STEP: Saw pod success Jan 13 00:15:22.530: INFO: Pod "client-containers-66165684-e184-4b01-96ef-98436cef5e28" satisfied condition "Succeeded or Failed" Jan 13 00:15:22.533: INFO: Trying to get logs from node leguer-worker2 pod client-containers-66165684-e184-4b01-96ef-98436cef5e28 container agnhost-container: STEP: delete the pod Jan 13 00:15:22.588: INFO: Waiting for pod client-containers-66165684-e184-4b01-96ef-98436cef5e28 to disappear Jan 13 00:15:22.599: INFO: Pod client-containers-66165684-e184-4b01-96ef-98436cef5e28 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:15:22.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8889" for this suite. 
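(The Docker Containers spec just above creates a pod whose spec.containers[].command and args replace the image's ENTRYPOINT and CMD. A rough sketch of that pod shape follows; the container name and image mirror the log, while the exact command and arguments are assumptions, not the test's literal values.)

package examples

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createOverridePod creates a pod that overrides the image defaults:
// command replaces the image ENTRYPOINT, args replaces the image CMD.
func createOverridePod(ctx context.Context, cs kubernetes.Interface, ns string) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "agnhost-container",
				Image:   "k8s.gcr.io/e2e-test-images/agnhost:2.21",
				Command: []string{"/agnhost", "entrypoint-tester"}, // overrides ENTRYPOINT
				Args:    []string{"override", "arguments"},         // overrides CMD
			}},
		},
	}
	_, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}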
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":309,"completed":238,"skipped":3961,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:15:22.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-projected-all-test-volume-df1735a1-40f5-4c39-8bf6-a814410d4d25 STEP: Creating secret with name secret-projected-all-test-volume-e672cc61-8266-43ff-a27c-c4dcb53b0a19 STEP: Creating a pod to test Check all projections for projected volume plugin Jan 13 00:15:22.781: INFO: Waiting up to 5m0s for pod "projected-volume-a9365ba0-8f88-45f2-bcc1-b65af331286c" in namespace "projected-758" to be "Succeeded or Failed" Jan 13 00:15:22.792: INFO: Pod "projected-volume-a9365ba0-8f88-45f2-bcc1-b65af331286c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.71987ms Jan 13 00:15:25.104: INFO: Pod "projected-volume-a9365ba0-8f88-45f2-bcc1-b65af331286c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.323376851s Jan 13 00:15:27.310: INFO: Pod "projected-volume-a9365ba0-8f88-45f2-bcc1-b65af331286c": Phase="Running", Reason="", readiness=true. Elapsed: 4.529171367s Jan 13 00:15:29.313: INFO: Pod "projected-volume-a9365ba0-8f88-45f2-bcc1-b65af331286c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.532610503s STEP: Saw pod success Jan 13 00:15:29.314: INFO: Pod "projected-volume-a9365ba0-8f88-45f2-bcc1-b65af331286c" satisfied condition "Succeeded or Failed" Jan 13 00:15:29.316: INFO: Trying to get logs from node leguer-worker2 pod projected-volume-a9365ba0-8f88-45f2-bcc1-b65af331286c container projected-all-volume-test: STEP: delete the pod Jan 13 00:15:29.355: INFO: Waiting for pod projected-volume-a9365ba0-8f88-45f2-bcc1-b65af331286c to disappear Jan 13 00:15:29.381: INFO: Pod projected-volume-a9365ba0-8f88-45f2-bcc1-b65af331286c no longer exists [AfterEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:15:29.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-758" for this suite. 
• [SLOW TEST:6.779 seconds] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:32 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":309,"completed":239,"skipped":4057,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:15:29.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 13 00:15:29.948: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Jan 13 00:15:31.957: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746093730, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746093730, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746093730, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746093729, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 13 00:15:35.025: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:15:35.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5614" for this suite. STEP: Destroying namespace "webhook-5614-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:5.916 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":309,"completed":240,"skipped":4061,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:15:35.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 [It] deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 00:15:35.400: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jan 13 00:15:40.404: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 13 00:15:40.404: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 Jan 13 00:15:40.556: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-5994 dc13b150-f0a0-4c98-ac2d-1e65566ba9c6 438528 1 2021-01-13 00:15:40 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2021-01-13 00:15:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005e228f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Jan 13 00:15:40.620: INFO: New ReplicaSet "test-cleanup-deployment-685c4f8568" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-685c4f8568 deployment-5994 42f5450e-c093-4ec3-bec6-84350e6cf089 438537 1 2021-01-13 00:15:40 +0000 UTC map[name:cleanup-pod pod-template-hash:685c4f8568] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment dc13b150-f0a0-4c98-ac2d-1e65566ba9c6 0xc0059c4c37 0xc0059c4c38}] [] [{kube-controller-manager Update apps/v1 2021-01-13 00:15:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dc13b150-f0a0-4c98-ac2d-1e65566ba9c6\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 685c4f8568,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:685c4f8568] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0059c4cc8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 13 00:15:40.620: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jan 13 00:15:40.620: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-5994 5150fd54-27bd-44ce-8748-1259acc9db4e 438530 1 2021-01-13 00:15:35 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment dc13b150-f0a0-4c98-ac2d-1e65566ba9c6 0xc0059c4b27 0xc0059c4b28}] [] [{e2e.test Update apps/v1 2021-01-13 00:15:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-01-13 00:15:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"dc13b150-f0a0-4c98-ac2d-1e65566ba9c6\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0059c4bc8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 13 00:15:40.688: INFO: Pod "test-cleanup-controller-5c2l2" is available: &Pod{ObjectMeta:{test-cleanup-controller-5c2l2 test-cleanup-controller- deployment-5994 30d8fddf-648d-41ab-a766-021aa6e7d50c 438512 0 2021-01-13 00:15:35 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 5150fd54-27bd-44ce-8748-1259acc9db4e 0xc0059c5147 0xc0059c5148}] [] [{kube-controller-manager Update v1 2021-01-13 00:15:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5150fd54-27bd-44ce-8748-1259acc9db4e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-13 00:15:38 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.146\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4trxc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4trxc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4trxc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 00:15:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 00:15:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-01-13 00:15:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 00:15:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.1.146,StartTime:2021-01-13 00:15:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-13 00:15:38 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a1380b1106c585994b997e3ddf547c9478479a80d5888f8251d7ff7d4b16112e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.146,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 13 00:15:40.688: INFO: Pod "test-cleanup-deployment-685c4f8568-68227" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-685c4f8568-68227 test-cleanup-deployment-685c4f8568- deployment-5994 7ab5be9c-e0c1-4c84-bab6-4cfb63af90f6 438536 0 2021-01-13 00:15:40 +0000 UTC map[name:cleanup-pod pod-template-hash:685c4f8568] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-685c4f8568 42f5450e-c093-4ec3-bec6-84350e6cf089 0xc0059c5417 0xc0059c5418}] [] [{kube-controller-manager Update v1 2021-01-13 00:15:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"42f5450e-c093-4ec3-bec6-84350e6cf089\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4trxc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4trxc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4trxc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 00:15:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:15:40.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5994" for this suite. 
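(In the Deployment dump above, the field doing the work is RevisionHistoryLimit:*0: with a zero history limit the deployment controller deletes superseded ReplicaSets as soon as a rollout completes, which is what this spec waits for. A sketch of a deployment with that setting follows; the image and labels mirror the log, the rest is illustrative rather than the test's exact object.)

package examples

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createCleanupDeployment creates a deployment that keeps no old ReplicaSets:
// once a new ReplicaSet takes over, the superseded ones are garbage collected.
func createCleanupDeployment(ctx context.Context, cs kubernetes.Interface, ns string) error {
	replicas, history := int32(1), int32(0)
	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-cleanup-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas:             &replicas,
			RevisionHistoryLimit: &history, // 0 = prune old ReplicaSets immediately
			Selector:             &metav1.LabelSelector{MatchLabels: map[string]string{"name": "cleanup-pod"}},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "cleanup-pod"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "agnhost",
						Image: "k8s.gcr.io/e2e-test-images/agnhost:2.21",
					}},
				},
			},
		},
	}
	_, err := cs.AppsV1().Deployments(ns).Create(ctx, dep, metav1.CreateOptions{})
	return err
}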
• [SLOW TEST:5.420 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":309,"completed":241,"skipped":4086,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:15:40.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7196.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-7196.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7196.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7196.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-7196.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7196.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 13 00:15:51.036: INFO: DNS probes using dns-7196/dns-test-6a6f9aeb-ea16-4b79-8891-0fed2d72ad91 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:15:51.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7196" for this suite. 
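(The DNS probe above resolves dns-querier-2.dns-test-service-2.dns-7196.svc.cluster.local plus the dashed pod A record. A name of that form exists because the pod sets spec.hostname and spec.subdomain to match a headless service in the same namespace. A minimal sketch of that pairing follows; the hostname and subdomain follow the names in the log, while the selector label and pause image are assumptions.)

package examples

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createHostnameDNS creates a headless service plus a pod whose hostname and
// subdomain match it, so <hostname>.<subdomain>.<namespace>.svc.cluster.local resolves.
func createHostnameDNS(ctx context.Context, cs kubernetes.Interface, ns string) error {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-2"},
		Spec: corev1.ServiceSpec{
			ClusterIP: corev1.ClusterIPNone, // headless: DNS resolves to pod IPs
			Selector:  map[string]string{"dns-test": "true"},
			Ports:     []corev1.ServicePort{{Name: "http", Port: 80}},
		},
	}
	if _, err := cs.CoreV1().Services(ns).Create(ctx, svc, metav1.CreateOptions{}); err != nil {
		return err
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "dns-querier-2",
			Labels: map[string]string{"dns-test": "true"},
		},
		Spec: corev1.PodSpec{
			Hostname:  "dns-querier-2",
			Subdomain: "dns-test-service-2", // must match the headless service name
			Containers: []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.2"}},
		},
	}
	_, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}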
• [SLOW TEST:10.461 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":309,"completed":242,"skipped":4093,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:15:51.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service in namespace services-1182 STEP: creating service affinity-nodeport-transition in namespace services-1182 STEP: creating replication controller affinity-nodeport-transition in namespace services-1182 I0113 00:15:52.031856 7 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-1182, replica count: 3 I0113 00:15:55.082235 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0113 00:15:58.082470 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0113 00:16:01.082745 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 13 00:16:01.091: INFO: Creating new exec pod Jan 13 00:16:06.117: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-1182 exec execpod-affinity9tbk2 -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' Jan 13 00:16:06.364: INFO: stderr: "I0113 00:16:06.273959 3228 log.go:181] (0xc000136e70) (0xc000624500) Create stream\nI0113 00:16:06.274024 3228 log.go:181] (0xc000136e70) (0xc000624500) Stream added, broadcasting: 1\nI0113 00:16:06.275850 3228 log.go:181] (0xc000136e70) Reply frame received for 1\nI0113 00:16:06.275889 3228 log.go:181] (0xc000136e70) (0xc0006246e0) Create stream\nI0113 00:16:06.275913 3228 log.go:181] (0xc000136e70) (0xc0006246e0) Stream added, broadcasting: 3\nI0113 00:16:06.276594 3228 log.go:181] (0xc000136e70) Reply frame received for 3\nI0113 00:16:06.276620 3228 log.go:181] (0xc000136e70) (0xc000b54000) Create stream\nI0113 
00:16:06.276633 3228 log.go:181] (0xc000136e70) (0xc000b54000) Stream added, broadcasting: 5\nI0113 00:16:06.277511 3228 log.go:181] (0xc000136e70) Reply frame received for 5\nI0113 00:16:06.351923 3228 log.go:181] (0xc000136e70) Data frame received for 5\nI0113 00:16:06.351956 3228 log.go:181] (0xc000b54000) (5) Data frame handling\nI0113 00:16:06.351988 3228 log.go:181] (0xc000b54000) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nI0113 00:16:06.353358 3228 log.go:181] (0xc000136e70) Data frame received for 5\nI0113 00:16:06.353398 3228 log.go:181] (0xc000b54000) (5) Data frame handling\nI0113 00:16:06.353432 3228 log.go:181] (0xc000b54000) (5) Data frame sent\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\nI0113 00:16:06.353832 3228 log.go:181] (0xc000136e70) Data frame received for 5\nI0113 00:16:06.353874 3228 log.go:181] (0xc000136e70) Data frame received for 3\nI0113 00:16:06.353906 3228 log.go:181] (0xc0006246e0) (3) Data frame handling\nI0113 00:16:06.353935 3228 log.go:181] (0xc000b54000) (5) Data frame handling\nI0113 00:16:06.355903 3228 log.go:181] (0xc000136e70) Data frame received for 1\nI0113 00:16:06.355928 3228 log.go:181] (0xc000624500) (1) Data frame handling\nI0113 00:16:06.355953 3228 log.go:181] (0xc000624500) (1) Data frame sent\nI0113 00:16:06.355973 3228 log.go:181] (0xc000136e70) (0xc000624500) Stream removed, broadcasting: 1\nI0113 00:16:06.356675 3228 log.go:181] (0xc000136e70) Go away received\nI0113 00:16:06.357687 3228 log.go:181] (0xc000136e70) (0xc000624500) Stream removed, broadcasting: 1\nI0113 00:16:06.357864 3228 log.go:181] (0xc000136e70) (0xc0006246e0) Stream removed, broadcasting: 3\nI0113 00:16:06.358018 3228 log.go:181] (0xc000136e70) (0xc000b54000) Stream removed, broadcasting: 5\n" Jan 13 00:16:06.364: INFO: stdout: "" Jan 13 00:16:06.365: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-1182 exec execpod-affinity9tbk2 -- /bin/sh -x -c nc -zv -t -w 2 10.96.130.114 80' Jan 13 00:16:06.573: INFO: stderr: "I0113 00:16:06.500153 3245 log.go:181] (0xc00077e000) (0xc0009a00a0) Create stream\nI0113 00:16:06.500218 3245 log.go:181] (0xc00077e000) (0xc0009a00a0) Stream added, broadcasting: 1\nI0113 00:16:06.504292 3245 log.go:181] (0xc00077e000) Reply frame received for 1\nI0113 00:16:06.504355 3245 log.go:181] (0xc00077e000) (0xc000b11180) Create stream\nI0113 00:16:06.504371 3245 log.go:181] (0xc00077e000) (0xc000b11180) Stream added, broadcasting: 3\nI0113 00:16:06.505842 3245 log.go:181] (0xc00077e000) Reply frame received for 3\nI0113 00:16:06.505886 3245 log.go:181] (0xc00077e000) (0xc0009a0140) Create stream\nI0113 00:16:06.505899 3245 log.go:181] (0xc00077e000) (0xc0009a0140) Stream added, broadcasting: 5\nI0113 00:16:06.506856 3245 log.go:181] (0xc00077e000) Reply frame received for 5\nI0113 00:16:06.563910 3245 log.go:181] (0xc00077e000) Data frame received for 3\nI0113 00:16:06.563967 3245 log.go:181] (0xc000b11180) (3) Data frame handling\nI0113 00:16:06.564006 3245 log.go:181] (0xc00077e000) Data frame received for 5\nI0113 00:16:06.564027 3245 log.go:181] (0xc0009a0140) (5) Data frame handling\nI0113 00:16:06.564050 3245 log.go:181] (0xc0009a0140) (5) Data frame sent\nI0113 00:16:06.564065 3245 log.go:181] (0xc00077e000) Data frame received for 5\nI0113 00:16:06.564075 3245 log.go:181] (0xc0009a0140) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.130.114 80\nConnection to 10.96.130.114 80 port [tcp/http] 
succeeded!\nI0113 00:16:06.566348 3245 log.go:181] (0xc00077e000) Data frame received for 1\nI0113 00:16:06.566388 3245 log.go:181] (0xc0009a00a0) (1) Data frame handling\nI0113 00:16:06.566424 3245 log.go:181] (0xc0009a00a0) (1) Data frame sent\nI0113 00:16:06.566455 3245 log.go:181] (0xc00077e000) (0xc0009a00a0) Stream removed, broadcasting: 1\nI0113 00:16:06.566489 3245 log.go:181] (0xc00077e000) Go away received\nI0113 00:16:06.566988 3245 log.go:181] (0xc00077e000) (0xc0009a00a0) Stream removed, broadcasting: 1\nI0113 00:16:06.567011 3245 log.go:181] (0xc00077e000) (0xc000b11180) Stream removed, broadcasting: 3\nI0113 00:16:06.567022 3245 log.go:181] (0xc00077e000) (0xc0009a0140) Stream removed, broadcasting: 5\n" Jan 13 00:16:06.573: INFO: stdout: "" Jan 13 00:16:06.573: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-1182 exec execpod-affinity9tbk2 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 32670' Jan 13 00:16:06.791: INFO: stderr: "I0113 00:16:06.713578 3264 log.go:181] (0xc00021c210) (0xc00011e000) Create stream\nI0113 00:16:06.713638 3264 log.go:181] (0xc00021c210) (0xc00011e000) Stream added, broadcasting: 1\nI0113 00:16:06.715233 3264 log.go:181] (0xc00021c210) Reply frame received for 1\nI0113 00:16:06.715259 3264 log.go:181] (0xc00021c210) (0xc0001c4280) Create stream\nI0113 00:16:06.715268 3264 log.go:181] (0xc00021c210) (0xc0001c4280) Stream added, broadcasting: 3\nI0113 00:16:06.716183 3264 log.go:181] (0xc00021c210) Reply frame received for 3\nI0113 00:16:06.716229 3264 log.go:181] (0xc00021c210) (0xc0001c4f00) Create stream\nI0113 00:16:06.716246 3264 log.go:181] (0xc00021c210) (0xc0001c4f00) Stream added, broadcasting: 5\nI0113 00:16:06.717255 3264 log.go:181] (0xc00021c210) Reply frame received for 5\nI0113 00:16:06.783503 3264 log.go:181] (0xc00021c210) Data frame received for 3\nI0113 00:16:06.783556 3264 log.go:181] (0xc0001c4280) (3) Data frame handling\nI0113 00:16:06.783605 3264 log.go:181] (0xc00021c210) Data frame received for 5\nI0113 00:16:06.783634 3264 log.go:181] (0xc0001c4f00) (5) Data frame handling\nI0113 00:16:06.783657 3264 log.go:181] (0xc0001c4f00) (5) Data frame sent\nI0113 00:16:06.783669 3264 log.go:181] (0xc00021c210) Data frame received for 5\nI0113 00:16:06.783682 3264 log.go:181] (0xc0001c4f00) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.13 32670\nConnection to 172.18.0.13 32670 port [tcp/32670] succeeded!\nI0113 00:16:06.785160 3264 log.go:181] (0xc00021c210) Data frame received for 1\nI0113 00:16:06.785179 3264 log.go:181] (0xc00011e000) (1) Data frame handling\nI0113 00:16:06.785189 3264 log.go:181] (0xc00011e000) (1) Data frame sent\nI0113 00:16:06.785200 3264 log.go:181] (0xc00021c210) (0xc00011e000) Stream removed, broadcasting: 1\nI0113 00:16:06.785226 3264 log.go:181] (0xc00021c210) Go away received\nI0113 00:16:06.785632 3264 log.go:181] (0xc00021c210) (0xc00011e000) Stream removed, broadcasting: 1\nI0113 00:16:06.785657 3264 log.go:181] (0xc00021c210) (0xc0001c4280) Stream removed, broadcasting: 3\nI0113 00:16:06.785671 3264 log.go:181] (0xc00021c210) (0xc0001c4f00) Stream removed, broadcasting: 5\n" Jan 13 00:16:06.791: INFO: stdout: "" Jan 13 00:16:06.791: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-1182 exec execpod-affinity9tbk2 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 32670' Jan 13 00:16:06.997: INFO: stderr: "I0113 00:16:06.920370 3282 log.go:181] 
(0xc000028210) (0xc000e04000) Create stream\nI0113 00:16:06.920456 3282 log.go:181] (0xc000028210) (0xc000e04000) Stream added, broadcasting: 1\nI0113 00:16:06.922878 3282 log.go:181] (0xc000028210) Reply frame received for 1\nI0113 00:16:06.922927 3282 log.go:181] (0xc000028210) (0xc00049d900) Create stream\nI0113 00:16:06.922944 3282 log.go:181] (0xc000028210) (0xc00049d900) Stream added, broadcasting: 3\nI0113 00:16:06.923968 3282 log.go:181] (0xc000028210) Reply frame received for 3\nI0113 00:16:06.924036 3282 log.go:181] (0xc000028210) (0xc00049de00) Create stream\nI0113 00:16:06.924073 3282 log.go:181] (0xc000028210) (0xc00049de00) Stream added, broadcasting: 5\nI0113 00:16:06.925249 3282 log.go:181] (0xc000028210) Reply frame received for 5\nI0113 00:16:06.987843 3282 log.go:181] (0xc000028210) Data frame received for 5\nI0113 00:16:06.987880 3282 log.go:181] (0xc00049de00) (5) Data frame handling\nI0113 00:16:06.987906 3282 log.go:181] (0xc00049de00) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.12 32670\nConnection to 172.18.0.12 32670 port [tcp/32670] succeeded!\nI0113 00:16:06.988179 3282 log.go:181] (0xc000028210) Data frame received for 5\nI0113 00:16:06.988200 3282 log.go:181] (0xc00049de00) (5) Data frame handling\nI0113 00:16:06.988258 3282 log.go:181] (0xc000028210) Data frame received for 3\nI0113 00:16:06.988287 3282 log.go:181] (0xc00049d900) (3) Data frame handling\nI0113 00:16:06.990295 3282 log.go:181] (0xc000028210) Data frame received for 1\nI0113 00:16:06.990332 3282 log.go:181] (0xc000e04000) (1) Data frame handling\nI0113 00:16:06.990355 3282 log.go:181] (0xc000e04000) (1) Data frame sent\nI0113 00:16:06.990380 3282 log.go:181] (0xc000028210) (0xc000e04000) Stream removed, broadcasting: 1\nI0113 00:16:06.990407 3282 log.go:181] (0xc000028210) Go away received\nI0113 00:16:06.991015 3282 log.go:181] (0xc000028210) (0xc000e04000) Stream removed, broadcasting: 1\nI0113 00:16:06.991037 3282 log.go:181] (0xc000028210) (0xc00049d900) Stream removed, broadcasting: 3\nI0113 00:16:06.991050 3282 log.go:181] (0xc000028210) (0xc00049de00) Stream removed, broadcasting: 5\n" Jan 13 00:16:06.997: INFO: stdout: "" Jan 13 00:16:07.008: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-1182 exec execpod-affinity9tbk2 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.13:32670/ ; done' Jan 13 00:16:07.353: INFO: stderr: "I0113 00:16:07.146588 3300 log.go:181] (0xc0007473f0) (0xc000644960) Create stream\nI0113 00:16:07.146640 3300 log.go:181] (0xc0007473f0) (0xc000644960) Stream added, broadcasting: 1\nI0113 00:16:07.150539 3300 log.go:181] (0xc0007473f0) Reply frame received for 1\nI0113 00:16:07.150789 3300 log.go:181] (0xc0007473f0) (0xc0005b0460) Create stream\nI0113 00:16:07.150913 3300 log.go:181] (0xc0007473f0) (0xc0005b0460) Stream added, broadcasting: 3\nI0113 00:16:07.152735 3300 log.go:181] (0xc0007473f0) Reply frame received for 3\nI0113 00:16:07.152771 3300 log.go:181] (0xc0007473f0) (0xc0003a4640) Create stream\nI0113 00:16:07.152781 3300 log.go:181] (0xc0007473f0) (0xc0003a4640) Stream added, broadcasting: 5\nI0113 00:16:07.153888 3300 log.go:181] (0xc0007473f0) Reply frame received for 5\nI0113 00:16:07.234082 3300 log.go:181] (0xc0007473f0) Data frame received for 3\nI0113 00:16:07.234117 3300 log.go:181] (0xc0005b0460) (3) Data frame handling\nI0113 00:16:07.234127 3300 log.go:181] (0xc0005b0460) (3) Data frame sent\nI0113 00:16:07.234146 3300 
log.go:181] (0xc0007473f0) Data frame received for 5\nI0113 00:16:07.234153 3300 log.go:181] (0xc0003a4640) (5) Data frame handling\nI0113 00:16:07.234160 3300 log.go:181] (0xc0003a4640) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32670/\nI0113 00:16:07.238510 3300 log.go:181] (0xc0007473f0) Data frame received for 3\nI0113 00:16:07.238550 3300 log.go:181] (0xc0005b0460) (3) Data frame handling\nI0113 00:16:07.238595 3300 log.go:181] (0xc0005b0460) (3) Data frame sent\nI0113 00:16:07.238935 3300 log.go:181] (0xc0007473f0) Data frame received for 3\nI0113 00:16:07.238956 3300 log.go:181] (0xc0005b0460) (3) Data frame handling\nI0113 00:16:07.238974 3300 log.go:181] (0xc0007473f0) Data frame received for 5\nI0113 00:16:07.239008 3300 log.go:181] (0xc0003a4640) (5) Data frame handling\nI0113 00:16:07.239034 3300 log.go:181] (0xc0003a4640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32670/\nI0113 00:16:07.239053 3300 log.go:181] (0xc0005b0460) (3) Data frame sent\nI0113 00:16:07.245020 3300 log.go:181] (0xc0007473f0) Data frame received for 3\nI0113 00:16:07.245044 3300 log.go:181] (0xc0005b0460) (3) Data frame handling\nI0113 00:16:07.245066 3300 log.go:181] (0xc0005b0460) (3) Data frame sent\nI0113 00:16:07.245949 3300 log.go:181] (0xc0007473f0) Data frame received for 3\nI0113 00:16:07.245980 3300 log.go:181] (0xc0005b0460) (3) Data frame handling\nI0113 00:16:07.245999 3300 log.go:181] (0xc0005b0460) (3) Data frame sent\nI0113 00:16:07.246013 3300 log.go:181] (0xc0007473f0) Data frame received for 5\nI0113 00:16:07.246023 3300 log.go:181] (0xc0003a4640) (5) Data frame handling\nI0113 00:16:07.246034 3300 log.go:181] (0xc0003a4640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32670/\nI0113 00:16:07.253687 3300 log.go:181] (0xc0007473f0) Data frame received for 3\nI0113 00:16:07.253713 3300 log.go:181] (0xc0005b0460) (3) Data frame handling\nI0113 00:16:07.253731 3300 log.go:181] (0xc0005b0460) (3) Data frame sent\nI0113 00:16:07.254023 3300 log.go:181] (0xc0007473f0) Data frame received for 5\nI0113 00:16:07.254045 3300 log.go:181] (0xc0003a4640) (5) Data frame handling\nI0113 00:16:07.254126 3300 log.go:181] (0xc0003a4640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32670/\nI0113 00:16:07.254160 3300 log.go:181] (0xc0007473f0) Data frame received for 3\nI0113 00:16:07.254180 3300 log.go:181] (0xc0005b0460) (3) Data frame handling\nI0113 00:16:07.254193 3300 log.go:181] (0xc0005b0460) (3) Data frame sent\nI0113 00:16:07.260100 3300 log.go:181] (0xc0007473f0) Data frame received for 3\nI0113 00:16:07.260124 3300 log.go:181] (0xc0005b0460) (3) Data frame handling\nI0113 00:16:07.260138 3300 log.go:181] (0xc0005b0460) (3) Data frame sent\nI0113 00:16:07.261276 3300 log.go:181] (0xc0007473f0) Data frame received for 5\nI0113 00:16:07.261311 3300 log.go:181] (0xc0003a4640) (5) Data frame handling\nI0113 00:16:07.261327 3300 log.go:181] (0xc0003a4640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32670/\nI0113 00:16:07.261350 3300 log.go:181] (0xc0007473f0) Data frame received for 3\nI0113 00:16:07.261388 3300 log.go:181] (0xc0005b0460) (3) Data frame handling\nI0113 00:16:07.261418 3300 log.go:181] (0xc0005b0460) (3) Data frame sent\nI0113 00:16:07.264700 3300 log.go:181] (0xc0007473f0) Data frame received for 3\nI0113 00:16:07.264715 3300 log.go:181] (0xc0005b0460) (3) Data frame handling\nI0113 
00:16:07.264724 3300 log.go:181] (0xc0005b0460) (3) Data frame sent\nI0113 00:16:07.265336 3300 log.go:181] (0xc0007473f0) Data frame received for 3\nI0113 00:16:07.265353 3300 log.go:181] (0xc0005b0460) (3) Data frame handling\nI0113 00:16:07.265371 3300 log.go:181] (0xc0007473f0) Data frame received for 5\nI0113 00:16:07.265391 3300 log.go:181] (0xc0003a4640) (5) Data frame handling\nI0113 00:16:07.265403 3300 log.go:181] (0xc0003a4640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32670/\nI0113 00:16:07.265422 3300 log.go:181] (0xc0005b0460) (3) Data frame sent\nI0113 00:16:07.269936 3300 log.go:181] (0xc0007473f0) Data frame received for 3\nI0113 00:16:07.269962 3300 log.go:181] (0xc0005b0460) (3) Data frame handling\nI0113 00:16:07.269981 3300 log.go:181] (0xc0005b0460) (3) Data frame sent\nI0113 00:16:07.270831 3300 log.go:181] (0xc0007473f0) Data frame received for 3\nI0113 00:16:07.270847 3300 log.go:181] (0xc0005b0460) (3) Data frame handling\nI0113 00:16:07.270864 3300 log.go:181] (0xc0005b0460) (3) Data frame sent\nI0113 00:16:07.270885 3300 log.go:181] (0xc0007473f0) Data frame received for 5\nI0113 00:16:07.270910 3300 log.go:181] (0xc0003a4640) (5) Data frame handling\nI0113 00:16:07.270935 3300 log.go:181] (0xc0003a4640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32670/\nI0113 00:16:07.275430 3300 log.go:181] (0xc0007473f0) Data frame received for 3\nI0113 00:16:07.275448 3300 log.go:181] (0xc0005b0460) (3) Data frame handling\nI0113 00:16:07.275457 3300 log.go:181] (0xc0005b0460) (3) Data frame sent\nI0113 00:16:07.276495 3300 log.go:181] (0xc0007473f0) Data frame received for 5\nI0113 00:16:07.276529 3300 log.go:181] (0xc0007473f0) Data frame received for 3\nI0113 00:16:07.276561 3300 log.go:181] (0xc0005b0460) (3) Data frame handling\nI0113 00:16:07.276592 3300 log.go:181] (0xc0003a4640) (5) Data frame handling\nI0113 00:16:07.276629 3300 log.go:181] (0xc0003a4640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32670/\nI0113 00:16:07.276666 3300 log.go:181] (0xc0005b0460) (3) Data frame sent\nI0113 00:16:07.283376 3300 log.go:181] (0xc0007473f0) Data frame received for 3\nI0113 00:16:07.283409 3300 log.go:181] (0xc0005b0460) (3) Data frame handling\nI0113 00:16:07.283430 3300 log.go:181] (0xc0005b0460) (3) Data frame sent\nI0113 00:16:07.284074 3300 log.go:181] (0xc0007473f0) Data frame received for 5\nI0113 00:16:07.284107 3300 log.go:181] (0xc0003a4640) (5) Data frame handling\nI0113 00:16:07.284122 3300 log.go:181] (0xc0003a4640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32670/\nI0113 00:16:07.284147 3300 log.go:181] (0xc0007473f0) Data frame received for 3\nI0113 00:16:07.284162 3300 log.go:181] (0xc0005b0460) (3) Data frame handling\nI0113 00:16:07.284179 3300 log.go:181] (0xc0005b0460) (3) Data frame sent\nI0113 00:16:07.292798 3300 log.go:181] (0xc0007473f0) Data frame received for 3\nI0113 00:16:07.292818 3300 log.go:181] (0xc0005b0460) (3) Data frame handling\nI0113 00:16:07.292830 3300 log.go:181] (0xc0005b0460) (3) Data frame sent\nI0113 00:16:07.293294 3300 log.go:181] (0xc0007473f0) Data frame received for 3\nI0113 00:16:07.293322 3300 log.go:181] (0xc0005b0460) (3) Data frame handling\nI0113 00:16:07.293337 3300 log.go:181] (0xc0005b0460) (3) Data frame sent\nI0113 00:16:07.293364 3300 log.go:181] (0xc0007473f0) Data frame received for 5\nI0113 00:16:07.293393 3300 log.go:181] (0xc0003a4640) (5) Data frame 
handling\nI0113 00:16:07.293416 3300 log.go:181] (0xc0003a4640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32670/\nI0113 00:16:07.300177 3300 log.go:181] (0xc0007473f0) Data frame received for 3\nI0113 00:16:07.300215 3300 log.go:181] (0xc0005b0460) (3) Data frame handling\nI0113 00:16:07.300238 3300 log.go:181] (0xc0005b0460) (3) Data frame sent\nI0113 00:16:07.300986 3300 log.go:181] (0xc0007473f0) Data frame received for 5\nI0113 00:16:07.301005 3300 log.go:181] (0xc0003a4640) (5) Data frame handling\nI0113 00:16:07.301021 3300 log.go:181] (0xc0003a4640) (5) Data frame sent\nI0113 00:16:07.301029 3300 log.go:181] (0xc0007473f0) Data frame received for 5\n+ echo\n+ curl -q -s --connect-timeoutI0113 00:16:07.301037 3300 log.go:181] (0xc0003a4640) (5) Data frame handling\nI0113 00:16:07.301070 3300 log.go:181] (0xc0003a4640) (5) Data frame sent\n 2 http://172.18.0.13:32670/\nI0113 00:16:07.301119 3300 log.go:181] (0xc0007473f0) Data frame received for 3\nI0113 00:16:07.301139 3300 log.go:181] (0xc0005b0460) (3) Data frame handling\nI0113 00:16:07.301152 3300 log.go:181] (0xc0005b0460) (3) Data frame sent\nI0113 00:16:07.305904 3300 log.go:181] (0xc0007473f0) Data frame received for 3\nI0113 00:16:07.305938 3300 log.go:181] (0xc0005b0460) (3) Data frame handling\nI0113 00:16:07.305958 3300 log.go:181] (0xc0005b0460) (3) Data frame sent\nI0113 00:16:07.306302 3300 log.go:181] (0xc0007473f0) Data frame received for 3\nI0113 00:16:07.306319 3300 log.go:181] (0xc0005b0460) (3) Data frame handling\nI0113 00:16:07.306331 3300 log.go:181] (0xc0005b0460) (3) Data frame sent\nI0113 00:16:07.306364 3300 log.go:181] (0xc0007473f0) Data frame received for 5\nI0113 00:16:07.306386 3300 log.go:181] (0xc0003a4640) (5) Data frame handling\nI0113 00:16:07.306405 3300 log.go:181] (0xc0003a4640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32670/\nI0113 00:16:07.313812 3300 log.go:181] (0xc0007473f0) Data frame received for 3\nI0113 00:16:07.313835 3300 log.go:181] (0xc0005b0460) (3) Data frame handling\nI0113 00:16:07.313848 3300 log.go:181] (0xc0005b0460) (3) Data frame sent\nI0113 00:16:07.314591 3300 log.go:181] (0xc0007473f0) Data frame received for 3\nI0113 00:16:07.314623 3300 log.go:181] (0xc0005b0460) (3) Data frame handling\nI0113 00:16:07.314636 3300 log.go:181] (0xc0005b0460) (3) Data frame sent\nI0113 00:16:07.314652 3300 log.go:181] (0xc0007473f0) Data frame received for 5\nI0113 00:16:07.314668 3300 log.go:181] (0xc0003a4640) (5) Data frame handling\nI0113 00:16:07.314681 3300 log.go:181] (0xc0003a4640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32670/\nI0113 00:16:07.318720 3300 log.go:181] (0xc0007473f0) Data frame received for 3\nI0113 00:16:07.318756 3300 log.go:181] (0xc0005b0460) (3) Data frame handling\nI0113 00:16:07.318787 3300 log.go:181] (0xc0005b0460) (3) Data frame sent\nI0113 00:16:07.319217 3300 log.go:181] (0xc0007473f0) Data frame received for 3\nI0113 00:16:07.319245 3300 log.go:181] (0xc0005b0460) (3) Data frame handling\nI0113 00:16:07.319255 3300 log.go:181] (0xc0005b0460) (3) Data frame sent\nI0113 00:16:07.319275 3300 log.go:181] (0xc0007473f0) Data frame received for 5\nI0113 00:16:07.319298 3300 log.go:181] (0xc0003a4640) (5) Data frame handling\nI0113 00:16:07.319329 3300 log.go:181] (0xc0003a4640) (5) Data frame sent\nI0113 00:16:07.319353 3300 log.go:181] (0xc0007473f0) Data frame received for 5\nI0113 00:16:07.319374 3300 log.go:181] (0xc0003a4640) (5) 
Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32670/\nI0113 00:16:07.319439 3300 log.go:181] (0xc0003a4640) (5) Data frame sent\nI0113 00:16:07.325867 3300 log.go:181] (0xc0007473f0) Data frame received for 3\nI0113 00:16:07.325888 3300 log.go:181] (0xc0005b0460) (3) Data frame handling\nI0113 00:16:07.325911 3300 log.go:181] (0xc0005b0460) (3) Data frame sent\nI0113 00:16:07.326592 3300 log.go:181] (0xc0007473f0) Data frame received for 5\nI0113 00:16:07.326620 3300 log.go:181] (0xc0003a4640) (5) Data frame handling\nI0113 00:16:07.326635 3300 log.go:181] (0xc0003a4640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32670/\nI0113 00:16:07.326655 3300 log.go:181] (0xc0007473f0) Data frame received for 3\nI0113 00:16:07.326667 3300 log.go:181] (0xc0005b0460) (3) Data frame handling\nI0113 00:16:07.326679 3300 log.go:181] (0xc0005b0460) (3) Data frame sent\nI0113 00:16:07.334083 3300 log.go:181] (0xc0007473f0) Data frame received for 3\nI0113 00:16:07.334106 3300 log.go:181] (0xc0005b0460) (3) Data frame handling\nI0113 00:16:07.334122 3300 log.go:181] (0xc0005b0460) (3) Data frame sent\nI0113 00:16:07.334897 3300 log.go:181] (0xc0007473f0) Data frame received for 5\nI0113 00:16:07.334932 3300 log.go:181] (0xc0003a4640) (5) Data frame handling\nI0113 00:16:07.334959 3300 log.go:181] (0xc0003a4640) (5) Data frame sent\nI0113 00:16:07.334976 3300 log.go:181] (0xc0007473f0) Data frame received for 5\nI0113 00:16:07.334993 3300 log.go:181] (0xc0003a4640) (5) Data frame handling\nI0113 00:16:07.335012 3300 log.go:181] (0xc0007473f0) Data frame received for 3\nI0113 00:16:07.335038 3300 log.go:181] (0xc0005b0460) (3) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32670/I0113 00:16:07.335056 3300 log.go:181] (0xc0005b0460) (3) Data frame sent\n\nI0113 00:16:07.335075 3300 log.go:181] (0xc0003a4640) (5) Data frame sent\nI0113 00:16:07.339614 3300 log.go:181] (0xc0007473f0) Data frame received for 3\nI0113 00:16:07.339645 3300 log.go:181] (0xc0005b0460) (3) Data frame handling\nI0113 00:16:07.339677 3300 log.go:181] (0xc0005b0460) (3) Data frame sent\nI0113 00:16:07.340392 3300 log.go:181] (0xc0007473f0) Data frame received for 5\nI0113 00:16:07.340423 3300 log.go:181] (0xc0003a4640) (5) Data frame handling\nI0113 00:16:07.340460 3300 log.go:181] (0xc0007473f0) Data frame received for 3\nI0113 00:16:07.340499 3300 log.go:181] (0xc0005b0460) (3) Data frame handling\nI0113 00:16:07.342337 3300 log.go:181] (0xc0007473f0) Data frame received for 1\nI0113 00:16:07.342365 3300 log.go:181] (0xc000644960) (1) Data frame handling\nI0113 00:16:07.342382 3300 log.go:181] (0xc000644960) (1) Data frame sent\nI0113 00:16:07.342417 3300 log.go:181] (0xc0007473f0) (0xc000644960) Stream removed, broadcasting: 1\nI0113 00:16:07.342873 3300 log.go:181] (0xc0007473f0) (0xc000644960) Stream removed, broadcasting: 1\nI0113 00:16:07.342901 3300 log.go:181] (0xc0007473f0) (0xc0005b0460) Stream removed, broadcasting: 3\nI0113 00:16:07.342913 3300 log.go:181] (0xc0007473f0) (0xc0003a4640) Stream removed, broadcasting: 5\n" Jan 13 00:16:07.353: INFO: stdout: 
"\naffinity-nodeport-transition-72lkm\naffinity-nodeport-transition-xsf96\naffinity-nodeport-transition-xsf96\naffinity-nodeport-transition-rfk8b\naffinity-nodeport-transition-xsf96\naffinity-nodeport-transition-72lkm\naffinity-nodeport-transition-xsf96\naffinity-nodeport-transition-xsf96\naffinity-nodeport-transition-rfk8b\naffinity-nodeport-transition-xsf96\naffinity-nodeport-transition-rfk8b\naffinity-nodeport-transition-72lkm\naffinity-nodeport-transition-xsf96\naffinity-nodeport-transition-72lkm\naffinity-nodeport-transition-rfk8b\naffinity-nodeport-transition-72lkm" Jan 13 00:16:07.354: INFO: Received response from host: affinity-nodeport-transition-72lkm Jan 13 00:16:07.354: INFO: Received response from host: affinity-nodeport-transition-xsf96 Jan 13 00:16:07.354: INFO: Received response from host: affinity-nodeport-transition-xsf96 Jan 13 00:16:07.354: INFO: Received response from host: affinity-nodeport-transition-rfk8b Jan 13 00:16:07.354: INFO: Received response from host: affinity-nodeport-transition-xsf96 Jan 13 00:16:07.354: INFO: Received response from host: affinity-nodeport-transition-72lkm Jan 13 00:16:07.354: INFO: Received response from host: affinity-nodeport-transition-xsf96 Jan 13 00:16:07.354: INFO: Received response from host: affinity-nodeport-transition-xsf96 Jan 13 00:16:07.354: INFO: Received response from host: affinity-nodeport-transition-rfk8b Jan 13 00:16:07.354: INFO: Received response from host: affinity-nodeport-transition-xsf96 Jan 13 00:16:07.354: INFO: Received response from host: affinity-nodeport-transition-rfk8b Jan 13 00:16:07.354: INFO: Received response from host: affinity-nodeport-transition-72lkm Jan 13 00:16:07.354: INFO: Received response from host: affinity-nodeport-transition-xsf96 Jan 13 00:16:07.354: INFO: Received response from host: affinity-nodeport-transition-72lkm Jan 13 00:16:07.354: INFO: Received response from host: affinity-nodeport-transition-rfk8b Jan 13 00:16:07.354: INFO: Received response from host: affinity-nodeport-transition-72lkm Jan 13 00:16:07.362: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-1182 exec execpod-affinity9tbk2 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.13:32670/ ; done' Jan 13 00:16:07.687: INFO: stderr: "I0113 00:16:07.512274 3318 log.go:181] (0xc00003a420) (0xc000b321e0) Create stream\nI0113 00:16:07.512336 3318 log.go:181] (0xc00003a420) (0xc000b321e0) Stream added, broadcasting: 1\nI0113 00:16:07.515007 3318 log.go:181] (0xc00003a420) Reply frame received for 1\nI0113 00:16:07.515070 3318 log.go:181] (0xc00003a420) (0xc000b32280) Create stream\nI0113 00:16:07.515093 3318 log.go:181] (0xc00003a420) (0xc000b32280) Stream added, broadcasting: 3\nI0113 00:16:07.516256 3318 log.go:181] (0xc00003a420) Reply frame received for 3\nI0113 00:16:07.516278 3318 log.go:181] (0xc00003a420) (0xc000c96000) Create stream\nI0113 00:16:07.516286 3318 log.go:181] (0xc00003a420) (0xc000c96000) Stream added, broadcasting: 5\nI0113 00:16:07.517652 3318 log.go:181] (0xc00003a420) Reply frame received for 5\nI0113 00:16:07.595829 3318 log.go:181] (0xc00003a420) Data frame received for 5\nI0113 00:16:07.595868 3318 log.go:181] (0xc000c96000) (5) Data frame handling\nI0113 00:16:07.595880 3318 log.go:181] (0xc000c96000) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32670/\nI0113 00:16:07.595893 3318 log.go:181] (0xc00003a420) Data frame received for 
3\nI0113 00:16:07.595898 3318 log.go:181] (0xc000b32280) (3) Data frame handling\nI0113 00:16:07.595903 3318 log.go:181] (0xc000b32280) (3) Data frame sent\nI0113 00:16:07.599445 3318 log.go:181] (0xc00003a420) Data frame received for 3\nI0113 00:16:07.599465 3318 log.go:181] (0xc000b32280) (3) Data frame handling\nI0113 00:16:07.599483 3318 log.go:181] (0xc000b32280) (3) Data frame sent\nI0113 00:16:07.600057 3318 log.go:181] (0xc00003a420) Data frame received for 3\nI0113 00:16:07.600071 3318 log.go:181] (0xc000b32280) (3) Data frame handling\nI0113 00:16:07.600077 3318 log.go:181] (0xc000b32280) (3) Data frame sent\nI0113 00:16:07.600132 3318 log.go:181] (0xc00003a420) Data frame received for 5\nI0113 00:16:07.600147 3318 log.go:181] (0xc000c96000) (5) Data frame handling\nI0113 00:16:07.600160 3318 log.go:181] (0xc000c96000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32670/\nI0113 00:16:07.603140 3318 log.go:181] (0xc00003a420) Data frame received for 3\nI0113 00:16:07.603162 3318 log.go:181] (0xc000b32280) (3) Data frame handling\nI0113 00:16:07.603185 3318 log.go:181] (0xc000b32280) (3) Data frame sent\nI0113 00:16:07.603731 3318 log.go:181] (0xc00003a420) Data frame received for 5\nI0113 00:16:07.603747 3318 log.go:181] (0xc000c96000) (5) Data frame handling\nI0113 00:16:07.603754 3318 log.go:181] (0xc000c96000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32670/\nI0113 00:16:07.603762 3318 log.go:181] (0xc00003a420) Data frame received for 3\nI0113 00:16:07.603767 3318 log.go:181] (0xc000b32280) (3) Data frame handling\nI0113 00:16:07.603771 3318 log.go:181] (0xc000b32280) (3) Data frame sent\nI0113 00:16:07.607491 3318 log.go:181] (0xc00003a420) Data frame received for 3\nI0113 00:16:07.607506 3318 log.go:181] (0xc000b32280) (3) Data frame handling\nI0113 00:16:07.607513 3318 log.go:181] (0xc000b32280) (3) Data frame sent\nI0113 00:16:07.608167 3318 log.go:181] (0xc00003a420) Data frame received for 3\nI0113 00:16:07.608187 3318 log.go:181] (0xc000b32280) (3) Data frame handling\nI0113 00:16:07.608199 3318 log.go:181] (0xc000b32280) (3) Data frame sent\nI0113 00:16:07.608216 3318 log.go:181] (0xc00003a420) Data frame received for 5\nI0113 00:16:07.608226 3318 log.go:181] (0xc000c96000) (5) Data frame handling\nI0113 00:16:07.608238 3318 log.go:181] (0xc000c96000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32670/\nI0113 00:16:07.615983 3318 log.go:181] (0xc00003a420) Data frame received for 3\nI0113 00:16:07.616006 3318 log.go:181] (0xc000b32280) (3) Data frame handling\nI0113 00:16:07.616024 3318 log.go:181] (0xc000b32280) (3) Data frame sent\nI0113 00:16:07.616441 3318 log.go:181] (0xc00003a420) Data frame received for 3\nI0113 00:16:07.616466 3318 log.go:181] (0xc000b32280) (3) Data frame handling\nI0113 00:16:07.616489 3318 log.go:181] (0xc000b32280) (3) Data frame sent\nI0113 00:16:07.616506 3318 log.go:181] (0xc00003a420) Data frame received for 5\nI0113 00:16:07.616524 3318 log.go:181] (0xc000c96000) (5) Data frame handling\nI0113 00:16:07.616555 3318 log.go:181] (0xc000c96000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32670/\nI0113 00:16:07.620616 3318 log.go:181] (0xc00003a420) Data frame received for 3\nI0113 00:16:07.620637 3318 log.go:181] (0xc000b32280) (3) Data frame handling\nI0113 00:16:07.620657 3318 log.go:181] (0xc000b32280) (3) Data frame sent\nI0113 00:16:07.621264 3318 log.go:181] (0xc00003a420) Data frame 
received for 3\nI0113 00:16:07.621303 3318 log.go:181] (0xc000b32280) (3) Data frame handling\nI0113 00:16:07.621324 3318 log.go:181] (0xc000b32280) (3) Data frame sent\nI0113 00:16:07.621360 3318 log.go:181] (0xc00003a420) Data frame received for 5\nI0113 00:16:07.621377 3318 log.go:181] (0xc000c96000) (5) Data frame handling\nI0113 00:16:07.621401 3318 log.go:181] (0xc000c96000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32670/\nI0113 00:16:07.625278 3318 log.go:181] (0xc00003a420) Data frame received for 3\nI0113 00:16:07.625301 3318 log.go:181] (0xc000b32280) (3) Data frame handling\nI0113 00:16:07.625318 3318 log.go:181] (0xc000b32280) (3) Data frame sent\nI0113 00:16:07.625791 3318 log.go:181] (0xc00003a420) Data frame received for 3\nI0113 00:16:07.625830 3318 log.go:181] (0xc000b32280) (3) Data frame handling\nI0113 00:16:07.625846 3318 log.go:181] (0xc000b32280) (3) Data frame sent\nI0113 00:16:07.625872 3318 log.go:181] (0xc00003a420) Data frame received for 5\nI0113 00:16:07.625888 3318 log.go:181] (0xc000c96000) (5) Data frame handling\nI0113 00:16:07.625905 3318 log.go:181] (0xc000c96000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32670/\nI0113 00:16:07.629621 3318 log.go:181] (0xc00003a420) Data frame received for 3\nI0113 00:16:07.629637 3318 log.go:181] (0xc000b32280) (3) Data frame handling\nI0113 00:16:07.629661 3318 log.go:181] (0xc000b32280) (3) Data frame sent\nI0113 00:16:07.630338 3318 log.go:181] (0xc00003a420) Data frame received for 5\nI0113 00:16:07.630369 3318 log.go:181] (0xc000c96000) (5) Data frame handling\nI0113 00:16:07.630392 3318 log.go:181] (0xc000c96000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32670/\nI0113 00:16:07.630495 3318 log.go:181] (0xc00003a420) Data frame received for 3\nI0113 00:16:07.630517 3318 log.go:181] (0xc000b32280) (3) Data frame handling\nI0113 00:16:07.630533 3318 log.go:181] (0xc000b32280) (3) Data frame sent\nI0113 00:16:07.634991 3318 log.go:181] (0xc00003a420) Data frame received for 3\nI0113 00:16:07.635015 3318 log.go:181] (0xc000b32280) (3) Data frame handling\nI0113 00:16:07.635041 3318 log.go:181] (0xc000b32280) (3) Data frame sent\nI0113 00:16:07.635426 3318 log.go:181] (0xc00003a420) Data frame received for 3\nI0113 00:16:07.635461 3318 log.go:181] (0xc000b32280) (3) Data frame handling\nI0113 00:16:07.635478 3318 log.go:181] (0xc000b32280) (3) Data frame sent\nI0113 00:16:07.635495 3318 log.go:181] (0xc00003a420) Data frame received for 5\nI0113 00:16:07.635506 3318 log.go:181] (0xc000c96000) (5) Data frame handling\nI0113 00:16:07.635516 3318 log.go:181] (0xc000c96000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32670/\nI0113 00:16:07.639788 3318 log.go:181] (0xc00003a420) Data frame received for 3\nI0113 00:16:07.639800 3318 log.go:181] (0xc000b32280) (3) Data frame handling\nI0113 00:16:07.639806 3318 log.go:181] (0xc000b32280) (3) Data frame sent\nI0113 00:16:07.640367 3318 log.go:181] (0xc00003a420) Data frame received for 3\nI0113 00:16:07.640378 3318 log.go:181] (0xc000b32280) (3) Data frame handling\nI0113 00:16:07.640396 3318 log.go:181] (0xc00003a420) Data frame received for 5\nI0113 00:16:07.640420 3318 log.go:181] (0xc000c96000) (5) Data frame handling\nI0113 00:16:07.640430 3318 log.go:181] (0xc000c96000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32670/\nI0113 00:16:07.640445 3318 log.go:181] (0xc000b32280) (3) Data 
frame sent\nI0113 00:16:07.645204 3318 log.go:181] (0xc00003a420) Data frame received for 3\nI0113 00:16:07.645219 3318 log.go:181] (0xc000b32280) (3) Data frame handling\nI0113 00:16:07.645226 3318 log.go:181] (0xc000b32280) (3) Data frame sent\nI0113 00:16:07.645862 3318 log.go:181] (0xc00003a420) Data frame received for 5\nI0113 00:16:07.645882 3318 log.go:181] (0xc000c96000) (5) Data frame handling\nI0113 00:16:07.645898 3318 log.go:181] (0xc000c96000) (5) Data frame sent\nI0113 00:16:07.645915 3318 log.go:181] (0xc00003a420) Data frame received for 5\nI0113 00:16:07.645924 3318 log.go:181] (0xc000c96000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32670/\nI0113 00:16:07.645945 3318 log.go:181] (0xc000c96000) (5) Data frame sent\nI0113 00:16:07.646045 3318 log.go:181] (0xc00003a420) Data frame received for 3\nI0113 00:16:07.646063 3318 log.go:181] (0xc000b32280) (3) Data frame handling\nI0113 00:16:07.646079 3318 log.go:181] (0xc000b32280) (3) Data frame sent\nI0113 00:16:07.651077 3318 log.go:181] (0xc00003a420) Data frame received for 3\nI0113 00:16:07.651098 3318 log.go:181] (0xc000b32280) (3) Data frame handling\nI0113 00:16:07.651173 3318 log.go:181] (0xc000b32280) (3) Data frame sent\nI0113 00:16:07.651643 3318 log.go:181] (0xc00003a420) Data frame received for 5\nI0113 00:16:07.651672 3318 log.go:181] (0xc000c96000) (5) Data frame handling\nI0113 00:16:07.651683 3318 log.go:181] (0xc000c96000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32670/\nI0113 00:16:07.651694 3318 log.go:181] (0xc00003a420) Data frame received for 3\nI0113 00:16:07.651700 3318 log.go:181] (0xc000b32280) (3) Data frame handling\nI0113 00:16:07.651706 3318 log.go:181] (0xc000b32280) (3) Data frame sent\nI0113 00:16:07.656949 3318 log.go:181] (0xc00003a420) Data frame received for 3\nI0113 00:16:07.656986 3318 log.go:181] (0xc000b32280) (3) Data frame handling\nI0113 00:16:07.657007 3318 log.go:181] (0xc000b32280) (3) Data frame sent\nI0113 00:16:07.657672 3318 log.go:181] (0xc00003a420) Data frame received for 5\nI0113 00:16:07.657699 3318 log.go:181] (0xc000c96000) (5) Data frame handling\nI0113 00:16:07.657719 3318 log.go:181] (0xc000c96000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0113 00:16:07.657895 3318 log.go:181] (0xc00003a420) Data frame received for 3\nI0113 00:16:07.657924 3318 log.go:181] (0xc000b32280) (3) Data frame handling\nI0113 00:16:07.657937 3318 log.go:181] (0xc000b32280) (3) Data frame sent\nI0113 00:16:07.657954 3318 log.go:181] (0xc00003a420) Data frame received for 5\nI0113 00:16:07.657963 3318 log.go:181] (0xc000c96000) (5) Data frame handling\nI0113 00:16:07.657973 3318 log.go:181] (0xc000c96000) (5) Data frame sent\n http://172.18.0.13:32670/\nI0113 00:16:07.663838 3318 log.go:181] (0xc00003a420) Data frame received for 3\nI0113 00:16:07.663860 3318 log.go:181] (0xc000b32280) (3) Data frame handling\nI0113 00:16:07.663885 3318 log.go:181] (0xc000b32280) (3) Data frame sent\nI0113 00:16:07.664429 3318 log.go:181] (0xc00003a420) Data frame received for 3\nI0113 00:16:07.664486 3318 log.go:181] (0xc000b32280) (3) Data frame handling\nI0113 00:16:07.664517 3318 log.go:181] (0xc000b32280) (3) Data frame sent\nI0113 00:16:07.664554 3318 log.go:181] (0xc00003a420) Data frame received for 5\nI0113 00:16:07.664575 3318 log.go:181] (0xc000c96000) (5) Data frame handling\nI0113 00:16:07.664611 3318 log.go:181] (0xc000c96000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://172.18.0.13:32670/\nI0113 00:16:07.670532 3318 log.go:181] (0xc00003a420) Data frame received for 3\nI0113 00:16:07.670552 3318 log.go:181] (0xc000b32280) (3) Data frame handling\nI0113 00:16:07.670570 3318 log.go:181] (0xc000b32280) (3) Data frame sent\nI0113 00:16:07.670967 3318 log.go:181] (0xc00003a420) Data frame received for 3\nI0113 00:16:07.670991 3318 log.go:181] (0xc000b32280) (3) Data frame handling\nI0113 00:16:07.671000 3318 log.go:181] (0xc000b32280) (3) Data frame sent\nI0113 00:16:07.671013 3318 log.go:181] (0xc00003a420) Data frame received for 5\nI0113 00:16:07.671020 3318 log.go:181] (0xc000c96000) (5) Data frame handling\nI0113 00:16:07.671026 3318 log.go:181] (0xc000c96000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32670/\nI0113 00:16:07.674233 3318 log.go:181] (0xc00003a420) Data frame received for 3\nI0113 00:16:07.674270 3318 log.go:181] (0xc000b32280) (3) Data frame handling\nI0113 00:16:07.674287 3318 log.go:181] (0xc000b32280) (3) Data frame sent\nI0113 00:16:07.674869 3318 log.go:181] (0xc00003a420) Data frame received for 5\nI0113 00:16:07.674898 3318 log.go:181] (0xc000c96000) (5) Data frame handling\nI0113 00:16:07.674916 3318 log.go:181] (0xc000c96000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32670/\nI0113 00:16:07.674950 3318 log.go:181] (0xc00003a420) Data frame received for 3\nI0113 00:16:07.674991 3318 log.go:181] (0xc000b32280) (3) Data frame handling\nI0113 00:16:07.675026 3318 log.go:181] (0xc000b32280) (3) Data frame sent\nI0113 00:16:07.679171 3318 log.go:181] (0xc00003a420) Data frame received for 3\nI0113 00:16:07.679206 3318 log.go:181] (0xc000b32280) (3) Data frame handling\nI0113 00:16:07.679237 3318 log.go:181] (0xc000b32280) (3) Data frame sent\nI0113 00:16:07.680027 3318 log.go:181] (0xc00003a420) Data frame received for 3\nI0113 00:16:07.680047 3318 log.go:181] (0xc000b32280) (3) Data frame handling\nI0113 00:16:07.680063 3318 log.go:181] (0xc00003a420) Data frame received for 5\nI0113 00:16:07.680070 3318 log.go:181] (0xc000c96000) (5) Data frame handling\nI0113 00:16:07.681764 3318 log.go:181] (0xc00003a420) Data frame received for 1\nI0113 00:16:07.681787 3318 log.go:181] (0xc000b321e0) (1) Data frame handling\nI0113 00:16:07.681801 3318 log.go:181] (0xc000b321e0) (1) Data frame sent\nI0113 00:16:07.681819 3318 log.go:181] (0xc00003a420) (0xc000b321e0) Stream removed, broadcasting: 1\nI0113 00:16:07.681843 3318 log.go:181] (0xc00003a420) Go away received\nI0113 00:16:07.682275 3318 log.go:181] (0xc00003a420) (0xc000b321e0) Stream removed, broadcasting: 1\nI0113 00:16:07.682299 3318 log.go:181] (0xc00003a420) (0xc000b32280) Stream removed, broadcasting: 3\nI0113 00:16:07.682313 3318 log.go:181] (0xc00003a420) (0xc000c96000) Stream removed, broadcasting: 5\n" Jan 13 00:16:07.688: INFO: stdout: "\naffinity-nodeport-transition-72lkm\naffinity-nodeport-transition-72lkm\naffinity-nodeport-transition-72lkm\naffinity-nodeport-transition-72lkm\naffinity-nodeport-transition-72lkm\naffinity-nodeport-transition-72lkm\naffinity-nodeport-transition-72lkm\naffinity-nodeport-transition-72lkm\naffinity-nodeport-transition-72lkm\naffinity-nodeport-transition-72lkm\naffinity-nodeport-transition-72lkm\naffinity-nodeport-transition-72lkm\naffinity-nodeport-transition-72lkm\naffinity-nodeport-transition-72lkm\naffinity-nodeport-transition-72lkm\naffinity-nodeport-transition-72lkm" Jan 13 00:16:07.688: INFO: Received response from host: affinity-nodeport-transition-72lkm Jan 
13 00:16:07.688: INFO: Received response from host: affinity-nodeport-transition-72lkm Jan 13 00:16:07.688: INFO: Received response from host: affinity-nodeport-transition-72lkm Jan 13 00:16:07.688: INFO: Received response from host: affinity-nodeport-transition-72lkm Jan 13 00:16:07.688: INFO: Received response from host: affinity-nodeport-transition-72lkm Jan 13 00:16:07.688: INFO: Received response from host: affinity-nodeport-transition-72lkm Jan 13 00:16:07.688: INFO: Received response from host: affinity-nodeport-transition-72lkm Jan 13 00:16:07.688: INFO: Received response from host: affinity-nodeport-transition-72lkm Jan 13 00:16:07.688: INFO: Received response from host: affinity-nodeport-transition-72lkm Jan 13 00:16:07.688: INFO: Received response from host: affinity-nodeport-transition-72lkm Jan 13 00:16:07.688: INFO: Received response from host: affinity-nodeport-transition-72lkm Jan 13 00:16:07.688: INFO: Received response from host: affinity-nodeport-transition-72lkm Jan 13 00:16:07.688: INFO: Received response from host: affinity-nodeport-transition-72lkm Jan 13 00:16:07.688: INFO: Received response from host: affinity-nodeport-transition-72lkm Jan 13 00:16:07.688: INFO: Received response from host: affinity-nodeport-transition-72lkm Jan 13 00:16:07.688: INFO: Received response from host: affinity-nodeport-transition-72lkm Jan 13 00:16:07.688: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-1182, will wait for the garbage collector to delete the pods Jan 13 00:16:07.810: INFO: Deleting ReplicationController affinity-nodeport-transition took: 33.224819ms Jan 13 00:16:08.310: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 500.278207ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:17:10.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1182" for this suite. 
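The affinity run above issues sixteen curls against NodePort 32670 on 172.18.0.13 and records which backend answers each request; once session affinity is in effect, every response comes from the same pod (affinity-nodeport-transition-72lkm) instead of the earlier spread across three pods. The following is a minimal client-go sketch of a NodePort Service with ClientIP affinity; the selector, ports and timeout are illustrative and not the exact e2e fixture.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	timeout := int32(10800) // default ClientIP affinity timeout (3 hours)
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-nodeport-transition"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeNodePort,
			Selector: map[string]string{"name": "affinity-nodeport-transition"},
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(9376),
			}},
			// Switching this field between None and ClientIP is what the test exercises.
			SessionAffinity: corev1.ServiceAffinityClientIP,
			SessionAffinityConfig: &corev1.SessionAffinityConfig{
				ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: &timeout},
			},
		},
	}
	if _, err := cs.CoreV1().Services("services-1182").Create(context.TODO(), svc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

Setting SessionAffinity back to corev1.ServiceAffinityNone (and clearing SessionAffinityConfig) restores the mixed-backend spread seen in the first curl loop.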
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:79.110 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":309,"completed":243,"skipped":4100,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:17:10.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:17:10.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-9797" for this suite. 
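The table-conversion test above asks the API server to render a resource as a meta.k8s.io Table and expects 406 Not Acceptable from a backend that cannot produce object metadata. A rough sketch of such a Table request with client-go's REST client, reusing the clientset cs and imports from the earlier sketch plus fmt; against built-in pods this succeeds, and the test instead targets a backend that cannot honour the Accept header.

// Ask the server to render the listing as a Table instead of a plain PodList.
raw, err := cs.CoreV1().RESTClient().
	Get().
	Namespace("tables-9797").
	Resource("pods").
	SetHeader("Accept", "application/json;as=Table;v=v1;g=meta.k8s.io").
	Do(context.TODO()).
	Raw()
if err != nil {
	// A backend that does not implement metadata answers this request with
	// 406 Not Acceptable, which client-go surfaces as a StatusError.
	fmt.Println("table request rejected:", err)
} else {
	fmt.Println(string(raw))
}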
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":309,"completed":244,"skipped":4100,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:17:10.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Jan 13 00:17:10.488: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 13 00:17:10.500: INFO: Waiting for terminating namespaces to be deleted... Jan 13 00:17:10.503: INFO: Logging pods the apiserver thinks is on node leguer-worker before test Jan 13 00:17:10.511: INFO: rally-a8f48c6d-3kmika18-pdtzv from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 00:17:10.511: INFO: Container rally-a8f48c6d-3kmika18 ready: true, restart count 0 Jan 13 00:17:10.511: INFO: rally-a8f48c6d-3kmika18-pllzg from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 00:17:10.511: INFO: Container rally-a8f48c6d-3kmika18 ready: true, restart count 0 Jan 13 00:17:10.511: INFO: rally-a8f48c6d-4cyi45kq-j5tzz from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 00:17:10.511: INFO: Container rally-a8f48c6d-4cyi45kq ready: true, restart count 0 Jan 13 00:17:10.511: INFO: rally-a8f48c6d-f3hls6a3-57dwc from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 13 00:17:10.511: INFO: Container rally-a8f48c6d-f3hls6a3 ready: true, restart count 0 Jan 13 00:17:10.511: INFO: rally-a8f48c6d-1y3amfc0-lp8st from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 13 00:17:10.511: INFO: Container rally-a8f48c6d-1y3amfc0 ready: true, restart count 0 Jan 13 00:17:10.511: INFO: rally-a8f48c6d-9pqmjehi-9zwjj from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 00:17:10.511: INFO: Container rally-a8f48c6d-9pqmjehi ready: true, restart count 0 Jan 13 00:17:10.511: INFO: chaos-controller-manager-69c479c674-s796v from default started at 2021-01-10 20:58:24 +0000 UTC (1 container statuses recorded) Jan 13 00:17:10.511: INFO: Container chaos-mesh ready: true, restart count 0 Jan 13 00:17:10.511: INFO: chaos-daemon-lv692 from default started at 2021-01-10 20:58:25 +0000 UTC (1 container statuses recorded) Jan 13 00:17:10.511: INFO: Container chaos-daemon ready: true, restart count 0 Jan 13 00:17:10.511: INFO: kindnet-psm25 from kube-system started at 
2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Jan 13 00:17:10.511: INFO: Container kindnet-cni ready: true, restart count 0 Jan 13 00:17:10.511: INFO: kube-proxy-bmbcs from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Jan 13 00:17:10.511: INFO: Container kube-proxy ready: true, restart count 0 Jan 13 00:17:10.511: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test Jan 13 00:17:10.518: INFO: rally-a8f48c6d-4cyi45kq-knr4r from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 00:17:10.518: INFO: Container rally-a8f48c6d-4cyi45kq ready: true, restart count 0 Jan 13 00:17:10.518: INFO: rally-a8f48c6d-f3hls6a3-dwt8n from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 13 00:17:10.518: INFO: Container rally-a8f48c6d-f3hls6a3 ready: true, restart count 0 Jan 13 00:17:10.518: INFO: rally-a8f48c6d-1y3amfc0-hh9qk from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 13 00:17:10.518: INFO: Container rally-a8f48c6d-1y3amfc0 ready: true, restart count 0 Jan 13 00:17:10.518: INFO: rally-a8f48c6d-9pqmjehi-85slb from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 00:17:10.518: INFO: Container rally-a8f48c6d-9pqmjehi ready: true, restart count 0 Jan 13 00:17:10.518: INFO: rally-a8f48c6d-vnukxqu0-llj24 from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 00:17:10.518: INFO: Container rally-a8f48c6d-vnukxqu0 ready: true, restart count 0 Jan 13 00:17:10.518: INFO: rally-a8f48c6d-vnukxqu0-v85kr from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 00:17:10.518: INFO: Container rally-a8f48c6d-vnukxqu0 ready: true, restart count 0 Jan 13 00:17:10.518: INFO: chaos-daemon-ffkg7 from default started at 2021-01-10 20:58:25 +0000 UTC (1 container statuses recorded) Jan 13 00:17:10.518: INFO: Container chaos-daemon ready: true, restart count 0 Jan 13 00:17:10.518: INFO: kindnet-8wggd from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Jan 13 00:17:10.518: INFO: Container kindnet-cni ready: true, restart count 0 Jan 13 00:17:10.518: INFO: kube-proxy-29gxg from kube-system started at 2021-01-10 17:38:09 +0000 UTC (1 container statuses recorded) Jan 13 00:17:10.518: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
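In the hostPort check that follows, the suite labels one node, schedules a first pod (pod4) that binds hostPort 54322 with hostIP 0.0.0.0, and then submits a second pod (pod5) with the same hostPort and protocol but hostIP 172.18.0.13, which must remain unscheduled because 0.0.0.0 already claims that port on every node address. A sketch of the two conflicting port declarations, reusing the corev1/metav1 imports from the first sketch; the image name and container port are illustrative.

// pod4: claims TCP 54322 on all node addresses (hostIP 0.0.0.0).
pod4 := &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "pod4"},
	Spec: corev1.PodSpec{
		// Pin the pod to the node the suite just labelled.
		NodeSelector: map[string]string{"kubernetes.io/e2e-f722832d-c3fe-47c1-9148-15ca3da5a452": "95"},
		Containers: []corev1.Container{{
			Name:  "agnhost",
			Image: "k8s.gcr.io/e2e-test-images/agnhost:2.21", // illustrative image
			Ports: []corev1.ContainerPort{{
				ContainerPort: 8080,
				HostPort:      54322,
				HostIP:        "0.0.0.0",
				Protocol:      corev1.ProtocolTCP,
			}},
		}},
	},
}

// pod5: same hostPort and protocol on the same node, but bound to a single
// node address. The scheduler treats this as a port conflict with pod4,
// so pod5 is expected to stay Pending.
pod5 := pod4.DeepCopy()
pod5.ObjectMeta = metav1.ObjectMeta{Name: "pod5"}
pod5.Spec.Containers[0].Ports[0].HostIP = "172.18.0.13"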
STEP: verifying the node has the label kubernetes.io/e2e-f722832d-c3fe-47c1-9148-15ca3da5a452 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 172.18.0.13 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-f722832d-c3fe-47c1-9148-15ca3da5a452 off the node leguer-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-f722832d-c3fe-47c1-9148-15ca3da5a452 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:22:21.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5404" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 • [SLOW TEST:311.425 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":309,"completed":245,"skipped":4135,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:22:21.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 13 00:22:23.006: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 13 00:22:25.015: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746094143, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746094143, loc:(*time.Location)(0x7962e20)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746094143, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746094142, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 13 00:22:28.083: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:22:38.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7305" for this suite. STEP: Destroying namespace "webhook-7305-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:16.574 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":309,"completed":246,"skipped":4150,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:22:38.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6375.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6375.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6375.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6375.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6375.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6375.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6375.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6375.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6375.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6375.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6375.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 223.93.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.93.223_udp@PTR;check="$$(dig +tcp +noall +answer +search 223.93.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.93.223_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6375.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6375.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6375.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6375.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6375.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6375.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6375.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6375.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6375.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6375.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6375.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 223.93.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.93.223_udp@PTR;check="$$(dig +tcp +noall +answer +search 223.93.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.93.223_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 13 00:22:44.686: INFO: Unable to read wheezy_udp@dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:22:44.689: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:22:44.691: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:22:44.694: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:22:44.718: INFO: Unable to read jessie_udp@dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:22:44.721: INFO: Unable to read jessie_tcp@dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:22:44.723: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:22:44.726: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:22:44.742: INFO: Lookups using dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5 failed for: [wheezy_udp@dns-test-service.dns-6375.svc.cluster.local wheezy_tcp@dns-test-service.dns-6375.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local jessie_udp@dns-test-service.dns-6375.svc.cluster.local jessie_tcp@dns-test-service.dns-6375.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local] Jan 13 00:22:49.746: INFO: Unable to read wheezy_udp@dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:22:49.750: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods 
dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:22:49.754: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:22:49.757: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:22:49.779: INFO: Unable to read jessie_udp@dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:22:49.783: INFO: Unable to read jessie_tcp@dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:22:49.786: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:22:49.789: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:22:49.832: INFO: Lookups using dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5 failed for: [wheezy_udp@dns-test-service.dns-6375.svc.cluster.local wheezy_tcp@dns-test-service.dns-6375.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local jessie_udp@dns-test-service.dns-6375.svc.cluster.local jessie_tcp@dns-test-service.dns-6375.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local] Jan 13 00:22:55.363: INFO: Unable to read wheezy_udp@dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:22:55.366: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:22:55.371: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:22:55.374: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:22:55.411: INFO: Unable to read jessie_udp@dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the 
server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:22:55.415: INFO: Unable to read jessie_tcp@dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:22:55.418: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:22:55.422: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:22:55.436: INFO: Lookups using dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5 failed for: [wheezy_udp@dns-test-service.dns-6375.svc.cluster.local wheezy_tcp@dns-test-service.dns-6375.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local jessie_udp@dns-test-service.dns-6375.svc.cluster.local jessie_tcp@dns-test-service.dns-6375.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local] Jan 13 00:22:59.747: INFO: Unable to read wheezy_udp@dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:22:59.751: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:22:59.754: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:22:59.757: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:22:59.778: INFO: Unable to read jessie_udp@dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:22:59.782: INFO: Unable to read jessie_tcp@dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:22:59.785: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:22:59.788: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local from pod 
dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:22:59.808: INFO: Lookups using dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5 failed for: [wheezy_udp@dns-test-service.dns-6375.svc.cluster.local wheezy_tcp@dns-test-service.dns-6375.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local jessie_udp@dns-test-service.dns-6375.svc.cluster.local jessie_tcp@dns-test-service.dns-6375.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local] Jan 13 00:23:04.794: INFO: Unable to read wheezy_udp@dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:23:04.799: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:23:04.802: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:23:04.805: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:23:04.826: INFO: Unable to read jessie_udp@dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:23:04.829: INFO: Unable to read jessie_tcp@dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:23:04.831: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:23:04.833: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:23:04.849: INFO: Lookups using dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5 failed for: [wheezy_udp@dns-test-service.dns-6375.svc.cluster.local wheezy_tcp@dns-test-service.dns-6375.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local jessie_udp@dns-test-service.dns-6375.svc.cluster.local jessie_tcp@dns-test-service.dns-6375.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local] Jan 13 
00:23:09.747: INFO: Unable to read wheezy_udp@dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:23:09.750: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:23:09.753: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:23:09.756: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:23:09.775: INFO: Unable to read jessie_udp@dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:23:09.778: INFO: Unable to read jessie_tcp@dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:23:09.781: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:23:09.783: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local from pod dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5: the server could not find the requested resource (get pods dns-test-7e554776-d271-48fa-af18-0afd93b925f5) Jan 13 00:23:09.802: INFO: Lookups using dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5 failed for: [wheezy_udp@dns-test-service.dns-6375.svc.cluster.local wheezy_tcp@dns-test-service.dns-6375.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local jessie_udp@dns-test-service.dns-6375.svc.cluster.local jessie_tcp@dns-test-service.dns-6375.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6375.svc.cluster.local] Jan 13 00:23:14.833: INFO: DNS probes using dns-6375/dns-test-7e554776-d271-48fa-af18-0afd93b925f5 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:23:15.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6375" for this suite. 
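For context, the probe pods behind the lookup failures above simply retry resolving the test service's A and SRV names until every record answers. A minimal Go sketch of equivalent lookups, assuming it runs inside a pod in the same cluster; the service and namespace names are taken from the log, and this helper is illustrative, not the conformance test's own implementation.

package main

import (
	"fmt"
	"net"
)

func main() {
	// A record for the ClusterIP service created by the test.
	addrs, err := net.LookupHost("dns-test-service.dns-6375.svc.cluster.local")
	fmt.Println("A lookup:", addrs, err)

	// SRV record published for the service's named "http" port over TCP.
	_, srvs, err := net.LookupSRV("http", "tcp", "dns-test-service.dns-6375.svc.cluster.local")
	if err != nil {
		fmt.Println("SRV lookup failed:", err)
		return
	}
	for _, s := range srvs {
		fmt.Printf("SRV target=%s port=%d\n", s.Target, s.Port)
	}
}

Until kube-dns/CoreDNS has programmed the records, both lookups fail exactly as the repeated "Unable to read ..." lines show; the final "DNS probes ... succeeded" entry corresponds to all of them resolving.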
• [SLOW TEST:37.451 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":309,"completed":247,"skipped":4202,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:23:15.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 00:23:16.065: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"889e643d-2c79-44e2-831a-9719a2cb8f18", Controller:(*bool)(0xc0042a932a), BlockOwnerDeletion:(*bool)(0xc0042a932b)}} Jan 13 00:23:16.100: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"4b91c443-f036-41b9-8273-fb8fe5ff0f01", Controller:(*bool)(0xc0041052a2), BlockOwnerDeletion:(*bool)(0xc0041052a3)}} Jan 13 00:23:16.130: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"86becfcc-94aa-4d78-b950-6e4c622d2e08", Controller:(*bool)(0xc003efaefa), BlockOwnerDeletion:(*bool)(0xc003efaefb)}} [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:23:21.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3160" for this suite. 
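The OwnerReferences dumped in the garbage-collector spec above form a deliberate cycle (pod1 is owned by pod3, pod2 by pod1, pod3 by pod2) that the collector must not deadlock on. A hedged sketch of how such references are built with the Kubernetes API types; the UIDs and the printing loop are placeholders, not the values from this run.

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

// ownerRef builds a controller reference of the shape seen in the log dumps.
func ownerRef(name string, uid types.UID) metav1.OwnerReference {
	controller := true
	block := true
	return metav1.OwnerReference{
		APIVersion:         "v1",
		Kind:               "Pod",
		Name:               name,
		UID:                uid,
		Controller:         &controller,
		BlockOwnerDeletion: &block,
	}
}

func main() {
	// pod1 -> pod3 -> pod2 -> pod1: a dependency circle, mirroring the
	// ObjectMeta.OwnerReferences entries logged above.
	refs := map[string]metav1.OwnerReference{
		"pod1": ownerRef("pod3", types.UID("uid-of-pod3")),
		"pod2": ownerRef("pod1", types.UID("uid-of-pod1")),
		"pod3": ownerRef("pod2", types.UID("uid-of-pod2")),
	}
	for pod, ref := range refs {
		fmt.Printf("%s.OwnerReferences = [%s/%s]\n", pod, ref.Kind, ref.Name)
	}
}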
• [SLOW TEST:5.382 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":309,"completed":248,"skipped":4277,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:23:21.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:23:26.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1190" for this suite. 
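The Watchers spec above checks that watches opened concurrently from the same resourceVersion replay events in the same order. A rough client-go sketch of opening one such watch; the kubeconfig path matches the one in the log, but the namespace, the ConfigMap resource, and the placeholder resourceVersion are assumptions, and the real test drives this through the e2e framework rather than this snippet.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Watch ConfigMaps starting from a known resourceVersion; a second watch
	// opened with the same version should observe the same events in the
	// same order, which is what the spec asserts.
	w, err := cs.CoreV1().ConfigMaps("watch-1190").Watch(context.TODO(), metav1.ListOptions{
		ResourceVersion: "0", // placeholder: use the version returned by a prior List
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		fmt.Println("event:", ev.Type)
	}
}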
• [SLOW TEST:5.256 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":309,"completed":249,"skipped":4277,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:23:26.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating the pod Jan 13 00:23:31.200: INFO: Successfully updated pod "annotationupdatecf499eed-34d8-4d21-8a7c-1ecf7ae29547" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:23:35.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4289" for this suite. 
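The projected downwardAPI spec above relies on the kubelet rewriting a projected file when the pod's annotations change ("Successfully updated pod annotationupdate..."). A minimal sketch of the volume that behaviour depends on, built from the core/v1 Go types; the volume name and file path are illustrative, not taken from the test.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							// The kubelet refreshes this file after the
							// pod's annotations are updated, which is what
							// the spec polls for.
							Path:     "annotations",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
						}},
					},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}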
• [SLOW TEST:8.771 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":309,"completed":250,"skipped":4300,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:23:35.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating Agnhost RC Jan 13 00:23:35.336: INFO: namespace kubectl-9728 Jan 13 00:23:35.336: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9728 create -f -' Jan 13 00:23:40.402: INFO: stderr: "" Jan 13 00:23:40.402: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Jan 13 00:23:41.506: INFO: Selector matched 1 pods for map[app:agnhost] Jan 13 00:23:41.507: INFO: Found 0 / 1 Jan 13 00:23:42.459: INFO: Selector matched 1 pods for map[app:agnhost] Jan 13 00:23:42.459: INFO: Found 0 / 1 Jan 13 00:23:43.407: INFO: Selector matched 1 pods for map[app:agnhost] Jan 13 00:23:43.407: INFO: Found 0 / 1 Jan 13 00:23:44.407: INFO: Selector matched 1 pods for map[app:agnhost] Jan 13 00:23:44.408: INFO: Found 0 / 1 Jan 13 00:23:45.408: INFO: Selector matched 1 pods for map[app:agnhost] Jan 13 00:23:45.408: INFO: Found 1 / 1 Jan 13 00:23:45.408: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 13 00:23:45.411: INFO: Selector matched 1 pods for map[app:agnhost] Jan 13 00:23:45.411: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
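The "Kubectl expose" steps logged below ultimately create a Service whose selector matches the replication controller's pods. A hedged client-go sketch of the equivalent object, using the service name and port numbers from this run; the selector, namespace, and client setup are assumptions about what `kubectl expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379` produces, not the test's own code path.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Roughly what the expose command creates: a Service selecting the
	// agnhost pods and forwarding port 1234 to container port 6379.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "rm2"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "agnhost"},
			Ports: []corev1.ServicePort{{
				Port:       1234,
				TargetPort: intstr.FromInt(6379),
			}},
		},
	}
	created, err := cs.CoreV1().Services("kubectl-9728").Create(context.TODO(), svc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created service:", created.Name)
}

Exposing the resulting service as rm3 (the second command in the log) repeats the same pattern with port 2345 and the rm2 service's selector.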
Jan 13 00:23:45.411: INFO: wait on agnhost-primary startup in kubectl-9728 Jan 13 00:23:45.411: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9728 logs agnhost-primary-bhsgh agnhost-primary' Jan 13 00:23:45.542: INFO: stderr: "" Jan 13 00:23:45.542: INFO: stdout: "Paused\n" STEP: exposing RC Jan 13 00:23:45.542: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9728 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' Jan 13 00:23:45.751: INFO: stderr: "" Jan 13 00:23:45.751: INFO: stdout: "service/rm2 exposed\n" Jan 13 00:23:45.781: INFO: Service rm2 in namespace kubectl-9728 found. STEP: exposing service Jan 13 00:23:47.787: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9728 expose service rm2 --name=rm3 --port=2345 --target-port=6379' Jan 13 00:23:47.937: INFO: stderr: "" Jan 13 00:23:47.937: INFO: stdout: "service/rm3 exposed\n" Jan 13 00:23:47.996: INFO: Service rm3 in namespace kubectl-9728 found. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:23:50.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9728" for this suite. • [SLOW TEST:14.761 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1229 should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":309,"completed":251,"skipped":4307,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:23:50.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0644 on node default medium Jan 13 00:23:50.129: INFO: Waiting up to 5m0s for pod "pod-50493535-73d3-4e13-b994-284b0ac5e704" in namespace "emptydir-3557" to be "Succeeded or Failed" Jan 13 00:23:50.159: INFO: Pod "pod-50493535-73d3-4e13-b994-284b0ac5e704": Phase="Pending", Reason="", readiness=false. 
Elapsed: 29.326907ms Jan 13 00:23:52.165: INFO: Pod "pod-50493535-73d3-4e13-b994-284b0ac5e704": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035591556s Jan 13 00:23:54.177: INFO: Pod "pod-50493535-73d3-4e13-b994-284b0ac5e704": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047282416s STEP: Saw pod success Jan 13 00:23:54.177: INFO: Pod "pod-50493535-73d3-4e13-b994-284b0ac5e704" satisfied condition "Succeeded or Failed" Jan 13 00:23:54.179: INFO: Trying to get logs from node leguer-worker2 pod pod-50493535-73d3-4e13-b994-284b0ac5e704 container test-container: STEP: delete the pod Jan 13 00:23:54.202: INFO: Waiting for pod pod-50493535-73d3-4e13-b994-284b0ac5e704 to disappear Jan 13 00:23:54.218: INFO: Pod pod-50493535-73d3-4e13-b994-284b0ac5e704 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:23:54.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3557" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":252,"skipped":4371,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:23:54.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 13 00:23:55.112: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 13 00:23:57.123: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746094235, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746094235, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746094235, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746094235, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying 
the webhook service STEP: Verifying the service has paired with the endpoint Jan 13 00:24:00.155: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Jan 13 00:24:04.313: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=webhook-4456 attach --namespace=webhook-4456 to-be-attached-pod -i -c=container1' Jan 13 00:24:04.438: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:24:04.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4456" for this suite. STEP: Destroying namespace "webhook-4456-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:10.412 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":309,"completed":253,"skipped":4376,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:24:04.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 00:24:04.812: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-10806805-5337-474e-8aa8-47c441d3044f" in namespace "security-context-test-777" to be "Succeeded or Failed" Jan 13 00:24:04.816: INFO: Pod "busybox-readonly-false-10806805-5337-474e-8aa8-47c441d3044f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.210759ms Jan 13 00:24:06.821: INFO: Pod "busybox-readonly-false-10806805-5337-474e-8aa8-47c441d3044f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008461337s Jan 13 00:24:08.825: INFO: Pod "busybox-readonly-false-10806805-5337-474e-8aa8-47c441d3044f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013268178s Jan 13 00:24:08.825: INFO: Pod "busybox-readonly-false-10806805-5337-474e-8aa8-47c441d3044f" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:24:08.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-777" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":309,"completed":254,"skipped":4378,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:24:08.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-test-volume-92be95ec-e311-411e-9619-e469cde49edc STEP: Creating a pod to test consume configMaps Jan 13 00:24:09.283: INFO: Waiting up to 5m0s for pod "pod-configmaps-dc887f2d-e52e-4110-aa8b-419356fc2e9a" in namespace "configmap-406" to be "Succeeded or Failed" Jan 13 00:24:09.290: INFO: Pod "pod-configmaps-dc887f2d-e52e-4110-aa8b-419356fc2e9a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.666088ms Jan 13 00:24:11.375: INFO: Pod "pod-configmaps-dc887f2d-e52e-4110-aa8b-419356fc2e9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091857716s Jan 13 00:24:13.380: INFO: Pod "pod-configmaps-dc887f2d-e52e-4110-aa8b-419356fc2e9a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.096649455s STEP: Saw pod success Jan 13 00:24:13.380: INFO: Pod "pod-configmaps-dc887f2d-e52e-4110-aa8b-419356fc2e9a" satisfied condition "Succeeded or Failed" Jan 13 00:24:13.383: INFO: Trying to get logs from node leguer-worker2 pod pod-configmaps-dc887f2d-e52e-4110-aa8b-419356fc2e9a container configmap-volume-test: STEP: delete the pod Jan 13 00:24:13.493: INFO: Waiting for pod pod-configmaps-dc887f2d-e52e-4110-aa8b-419356fc2e9a to disappear Jan 13 00:24:13.590: INFO: Pod pod-configmaps-dc887f2d-e52e-4110-aa8b-419356fc2e9a no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:24:13.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-406" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":309,"completed":255,"skipped":4400,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:24:13.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name projected-configmap-test-volume-befd05e6-f5fc-4d90-bcd3-9574f1213e5f STEP: Creating a pod to test consume configMaps Jan 13 00:24:13.706: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-acb43344-3e63-4b32-ab2b-03f2f47352c1" in namespace "projected-6703" to be "Succeeded or Failed" Jan 13 00:24:13.716: INFO: Pod "pod-projected-configmaps-acb43344-3e63-4b32-ab2b-03f2f47352c1": Phase="Pending", Reason="", readiness=false. Elapsed: 9.439043ms Jan 13 00:24:15.721: INFO: Pod "pod-projected-configmaps-acb43344-3e63-4b32-ab2b-03f2f47352c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01473182s Jan 13 00:24:17.740: INFO: Pod "pod-projected-configmaps-acb43344-3e63-4b32-ab2b-03f2f47352c1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033381858s Jan 13 00:24:19.744: INFO: Pod "pod-projected-configmaps-acb43344-3e63-4b32-ab2b-03f2f47352c1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.03751691s STEP: Saw pod success Jan 13 00:24:19.744: INFO: Pod "pod-projected-configmaps-acb43344-3e63-4b32-ab2b-03f2f47352c1" satisfied condition "Succeeded or Failed" Jan 13 00:24:19.747: INFO: Trying to get logs from node leguer-worker pod pod-projected-configmaps-acb43344-3e63-4b32-ab2b-03f2f47352c1 container projected-configmap-volume-test: STEP: delete the pod Jan 13 00:24:19.764: INFO: Waiting for pod pod-projected-configmaps-acb43344-3e63-4b32-ab2b-03f2f47352c1 to disappear Jan 13 00:24:19.786: INFO: Pod pod-projected-configmaps-acb43344-3e63-4b32-ab2b-03f2f47352c1 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:24:19.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6703" for this suite. • [SLOW TEST:6.196 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":309,"completed":256,"skipped":4417,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:24:19.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 [It] deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 00:24:19.907: INFO: Pod name rollover-pod: Found 0 pods out of 1 Jan 13 00:24:24.935: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 13 00:24:24.935: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Jan 13 00:24:26.979: INFO: Creating deployment "test-rollover-deployment" Jan 13 00:24:26.991: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Jan 13 00:24:28.998: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jan 13 00:24:29.005: INFO: Ensure that both replica sets have 1 created replica Jan 13 00:24:29.009: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jan 13 00:24:29.015: INFO: Updating deployment test-rollover-deployment Jan 13 00:24:29.015: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jan 13 00:24:31.091: INFO: 
Wait for revision update of deployment "test-rollover-deployment" to 2 Jan 13 00:24:31.098: INFO: Make sure deployment "test-rollover-deployment" is complete Jan 13 00:24:31.103: INFO: all replica sets need to contain the pod-template-hash label Jan 13 00:24:31.103: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746094267, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746094267, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746094269, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746094267, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 13 00:24:33.113: INFO: all replica sets need to contain the pod-template-hash label Jan 13 00:24:33.113: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746094267, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746094267, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746094269, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746094267, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 13 00:24:35.112: INFO: all replica sets need to contain the pod-template-hash label Jan 13 00:24:35.112: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746094267, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746094267, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746094273, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746094267, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 13 00:24:37.112: INFO: all replica sets need to contain the pod-template-hash label Jan 13 00:24:37.112: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746094267, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746094267, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746094273, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746094267, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 13 00:24:39.111: INFO: all replica sets need to contain the pod-template-hash label Jan 13 00:24:39.111: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746094267, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746094267, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746094273, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746094267, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 13 00:24:41.112: INFO: all replica sets need to contain the pod-template-hash label Jan 13 00:24:41.112: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746094267, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746094267, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746094273, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746094267, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 13 00:24:43.112: INFO: all replica sets need to contain the pod-template-hash label Jan 13 00:24:43.112: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746094267, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746094267, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746094273, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746094267, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 13 00:24:45.111: INFO: Jan 13 00:24:45.111: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 Jan 13 00:24:45.120: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-4767 e9d8ee84-db84-44f7-8ada-6726f0e07056 440560 2 2021-01-13 00:24:26 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-01-13 00:24:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-01-13 00:24:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00031cf48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-01-13 00:24:27 +0000 UTC,LastTransitionTime:2021-01-13 00:24:27 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-668db69979" has successfully progressed.,LastUpdateTime:2021-01-13 00:24:43 +0000 UTC,LastTransitionTime:2021-01-13 00:24:27 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jan 13 00:24:45.123: INFO: New ReplicaSet "test-rollover-deployment-668db69979" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-668db69979 deployment-4767 4f6a7f35-a1e6-4c76-a337-1544580544b2 440549 2 2021-01-13 00:24:29 +0000 UTC map[name:rollover-pod pod-template-hash:668db69979] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment e9d8ee84-db84-44f7-8ada-6726f0e07056 0xc004ec2a77 0xc004ec2a78}] [] [{kube-controller-manager Update apps/v1 2021-01-13 00:24:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9d8ee84-db84-44f7-8ada-6726f0e07056\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 668db69979,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:668db69979] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004ec2b08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil 
[] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 13 00:24:45.123: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jan 13 00:24:45.124: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-4767 7c35a97c-4a40-46fe-a371-dae6b02e3bc1 440559 2 2021-01-13 00:24:19 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment e9d8ee84-db84-44f7-8ada-6726f0e07056 0xc004ec2967 0xc004ec2968}] [] [{e2e.test Update apps/v1 2021-01-13 00:24:19 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-01-13 00:24:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9d8ee84-db84-44f7-8ada-6726f0e07056\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004ec2a08 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 13 00:24:45.124: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-4767 5f448235-bbd4-49ce-895a-11c3ea89614d 440516 2 2021-01-13 00:24:26 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment e9d8ee84-db84-44f7-8ada-6726f0e07056 0xc004ec2b77 0xc004ec2b78}] [] [{kube-controller-manager Update apps/v1 2021-01-13 00:24:29 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9d8ee84-db84-44f7-8ada-6726f0e07056\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004ec2c68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 13 00:24:45.127: INFO: Pod "test-rollover-deployment-668db69979-6mksv" is available: &Pod{ObjectMeta:{test-rollover-deployment-668db69979-6mksv test-rollover-deployment-668db69979- deployment-4767 c97b0108-5705-4ab1-9d6f-5e409d1700d7 440527 0 2021-01-13 00:24:29 +0000 UTC map[name:rollover-pod pod-template-hash:668db69979] map[] [{apps/v1 ReplicaSet test-rollover-deployment-668db69979 4f6a7f35-a1e6-4c76-a337-1544580544b2 0xc004ec3127 0xc004ec3128}] [] [{kube-controller-manager Update v1 2021-01-13 00:24:29 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4f6a7f35-a1e6-4c76-a337-1544580544b2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-13 00:24:33 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.161\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lvpgg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lvpgg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lvpgg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 00:24:29 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 00:24:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 00:24:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 00:24:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.1.161,StartTime:2021-01-13 00:24:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-13 00:24:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a,ContainerID:containerd://1dc41d5afe722f44f5de2ff0084732394242948a132dadf9366a741e324c97b9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.161,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:24:45.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4767" for this suite. • [SLOW TEST:25.343 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":309,"completed":257,"skipped":4441,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:24:45.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-8348 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating stateful set ss in 
namespace statefulset-8348 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8348 Jan 13 00:24:45.310: INFO: Found 0 stateful pods, waiting for 1 Jan 13 00:24:55.316: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jan 13 00:24:55.318: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 13 00:24:55.624: INFO: stderr: "I0113 00:24:55.471426 3426 log.go:181] (0xc000d4ec60) (0xc000450140) Create stream\nI0113 00:24:55.471487 3426 log.go:181] (0xc000d4ec60) (0xc000450140) Stream added, broadcasting: 1\nI0113 00:24:55.473607 3426 log.go:181] (0xc000d4ec60) Reply frame received for 1\nI0113 00:24:55.473661 3426 log.go:181] (0xc000d4ec60) (0xc0003b7360) Create stream\nI0113 00:24:55.473674 3426 log.go:181] (0xc000d4ec60) (0xc0003b7360) Stream added, broadcasting: 3\nI0113 00:24:55.474837 3426 log.go:181] (0xc000d4ec60) Reply frame received for 3\nI0113 00:24:55.474891 3426 log.go:181] (0xc000d4ec60) (0xc000451720) Create stream\nI0113 00:24:55.474920 3426 log.go:181] (0xc000d4ec60) (0xc000451720) Stream added, broadcasting: 5\nI0113 00:24:55.476031 3426 log.go:181] (0xc000d4ec60) Reply frame received for 5\nI0113 00:24:55.571276 3426 log.go:181] (0xc000d4ec60) Data frame received for 5\nI0113 00:24:55.571310 3426 log.go:181] (0xc000451720) (5) Data frame handling\nI0113 00:24:55.571329 3426 log.go:181] (0xc000451720) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0113 00:24:55.614942 3426 log.go:181] (0xc000d4ec60) Data frame received for 3\nI0113 00:24:55.614984 3426 log.go:181] (0xc0003b7360) (3) Data frame handling\nI0113 00:24:55.615009 3426 log.go:181] (0xc0003b7360) (3) Data frame sent\nI0113 00:24:55.615138 3426 log.go:181] (0xc000d4ec60) Data frame received for 3\nI0113 00:24:55.615162 3426 log.go:181] (0xc0003b7360) (3) Data frame handling\nI0113 00:24:55.615259 3426 log.go:181] (0xc000d4ec60) Data frame received for 5\nI0113 00:24:55.615283 3426 log.go:181] (0xc000451720) (5) Data frame handling\nI0113 00:24:55.617626 3426 log.go:181] (0xc000d4ec60) Data frame received for 1\nI0113 00:24:55.617660 3426 log.go:181] (0xc000450140) (1) Data frame handling\nI0113 00:24:55.617682 3426 log.go:181] (0xc000450140) (1) Data frame sent\nI0113 00:24:55.617710 3426 log.go:181] (0xc000d4ec60) (0xc000450140) Stream removed, broadcasting: 1\nI0113 00:24:55.617741 3426 log.go:181] (0xc000d4ec60) Go away received\nI0113 00:24:55.618282 3426 log.go:181] (0xc000d4ec60) (0xc000450140) Stream removed, broadcasting: 1\nI0113 00:24:55.618307 3426 log.go:181] (0xc000d4ec60) (0xc0003b7360) Stream removed, broadcasting: 3\nI0113 00:24:55.618321 3426 log.go:181] (0xc000d4ec60) (0xc000451720) Stream removed, broadcasting: 5\n" Jan 13 00:24:55.624: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 13 00:24:55.624: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 13 00:24:55.628: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jan 13 00:25:05.635: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 13 00:25:05.635: INFO: Waiting for statefulset 
status.replicas updated to 0 Jan 13 00:25:05.697: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 00:25:05.697: INFO: ss-0 leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:24:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:24:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:24:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:24:45 +0000 UTC }] Jan 13 00:25:05.697: INFO: Jan 13 00:25:05.697: INFO: StatefulSet ss has not reached scale 3, at 1 Jan 13 00:25:06.703: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.952012437s Jan 13 00:25:07.951: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.945821927s Jan 13 00:25:08.957: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.698338499s Jan 13 00:25:09.962: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.692237666s Jan 13 00:25:10.968: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.687215247s Jan 13 00:25:11.974: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.681017258s Jan 13 00:25:12.980: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.674930383s Jan 13 00:25:13.985: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.669374095s Jan 13 00:25:14.991: INFO: Verifying statefulset ss doesn't scale past 3 for another 663.517451ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8348 Jan 13 00:25:15.999: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 00:25:16.250: INFO: stderr: "I0113 00:25:16.162927 3444 log.go:181] (0xc000546000) (0xc000da0000) Create stream\nI0113 00:25:16.163005 3444 log.go:181] (0xc000546000) (0xc000da0000) Stream added, broadcasting: 1\nI0113 00:25:16.168459 3444 log.go:181] (0xc000546000) Reply frame received for 1\nI0113 00:25:16.168498 3444 log.go:181] (0xc000546000) (0xc000da00a0) Create stream\nI0113 00:25:16.168512 3444 log.go:181] (0xc000546000) (0xc000da00a0) Stream added, broadcasting: 3\nI0113 00:25:16.169962 3444 log.go:181] (0xc000546000) Reply frame received for 3\nI0113 00:25:16.169998 3444 log.go:181] (0xc000546000) (0xc0007d0500) Create stream\nI0113 00:25:16.170008 3444 log.go:181] (0xc000546000) (0xc0007d0500) Stream added, broadcasting: 5\nI0113 00:25:16.170916 3444 log.go:181] (0xc000546000) Reply frame received for 5\nI0113 00:25:16.244256 3444 log.go:181] (0xc000546000) Data frame received for 3\nI0113 00:25:16.244327 3444 log.go:181] (0xc000546000) Data frame received for 5\nI0113 00:25:16.244374 3444 log.go:181] (0xc0007d0500) (5) Data frame handling\nI0113 00:25:16.244428 3444 log.go:181] (0xc0007d0500) (5) Data frame sent\nI0113 00:25:16.244445 3444 log.go:181] (0xc000546000) Data frame received for 5\nI0113 00:25:16.244460 3444 log.go:181] (0xc0007d0500) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0113 00:25:16.244494 3444 log.go:181] (0xc000da00a0) (3) Data frame handling\nI0113 00:25:16.244520 3444 log.go:181] (0xc000da00a0) (3) Data frame sent\nI0113 00:25:16.244540 3444 log.go:181] (0xc000546000) Data frame received for 3\nI0113 00:25:16.244551 
3444 log.go:181] (0xc000da00a0) (3) Data frame handling\nI0113 00:25:16.245975 3444 log.go:181] (0xc000546000) Data frame received for 1\nI0113 00:25:16.245995 3444 log.go:181] (0xc000da0000) (1) Data frame handling\nI0113 00:25:16.246016 3444 log.go:181] (0xc000da0000) (1) Data frame sent\nI0113 00:25:16.246032 3444 log.go:181] (0xc000546000) (0xc000da0000) Stream removed, broadcasting: 1\nI0113 00:25:16.246114 3444 log.go:181] (0xc000546000) Go away received\nI0113 00:25:16.246306 3444 log.go:181] (0xc000546000) (0xc000da0000) Stream removed, broadcasting: 1\nI0113 00:25:16.246318 3444 log.go:181] (0xc000546000) (0xc000da00a0) Stream removed, broadcasting: 3\nI0113 00:25:16.246327 3444 log.go:181] (0xc000546000) (0xc0007d0500) Stream removed, broadcasting: 5\n" Jan 13 00:25:16.250: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 13 00:25:16.250: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 13 00:25:16.250: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 00:25:16.475: INFO: stderr: "I0113 00:25:16.394452 3463 log.go:181] (0xc000fa5810) (0xc0008bea00) Create stream\nI0113 00:25:16.394522 3463 log.go:181] (0xc000fa5810) (0xc0008bea00) Stream added, broadcasting: 1\nI0113 00:25:16.397745 3463 log.go:181] (0xc000fa5810) Reply frame received for 1\nI0113 00:25:16.397829 3463 log.go:181] (0xc000fa5810) (0xc000ba2320) Create stream\nI0113 00:25:16.397855 3463 log.go:181] (0xc000fa5810) (0xc000ba2320) Stream added, broadcasting: 3\nI0113 00:25:16.399224 3463 log.go:181] (0xc000fa5810) Reply frame received for 3\nI0113 00:25:16.399707 3463 log.go:181] (0xc000fa5810) (0xc000b3c000) Create stream\nI0113 00:25:16.399721 3463 log.go:181] (0xc000fa5810) (0xc000b3c000) Stream added, broadcasting: 5\nI0113 00:25:16.400698 3463 log.go:181] (0xc000fa5810) Reply frame received for 5\nI0113 00:25:16.467224 3463 log.go:181] (0xc000fa5810) Data frame received for 3\nI0113 00:25:16.467259 3463 log.go:181] (0xc000ba2320) (3) Data frame handling\nI0113 00:25:16.467275 3463 log.go:181] (0xc000ba2320) (3) Data frame sent\nI0113 00:25:16.467288 3463 log.go:181] (0xc000fa5810) Data frame received for 3\nI0113 00:25:16.467296 3463 log.go:181] (0xc000ba2320) (3) Data frame handling\nI0113 00:25:16.467304 3463 log.go:181] (0xc000fa5810) Data frame received for 5\nI0113 00:25:16.467311 3463 log.go:181] (0xc000b3c000) (5) Data frame handling\nI0113 00:25:16.467329 3463 log.go:181] (0xc000b3c000) (5) Data frame sent\nI0113 00:25:16.467337 3463 log.go:181] (0xc000fa5810) Data frame received for 5\nI0113 00:25:16.467345 3463 log.go:181] (0xc000b3c000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0113 00:25:16.469400 3463 log.go:181] (0xc000fa5810) Data frame received for 1\nI0113 00:25:16.469449 3463 log.go:181] (0xc0008bea00) (1) Data frame handling\nI0113 00:25:16.469471 3463 log.go:181] (0xc0008bea00) (1) Data frame sent\nI0113 00:25:16.469504 3463 log.go:181] (0xc000fa5810) (0xc0008bea00) Stream removed, broadcasting: 1\nI0113 00:25:16.469527 3463 log.go:181] (0xc000fa5810) Go away received\nI0113 00:25:16.469830 3463 log.go:181] (0xc000fa5810) (0xc0008bea00) Stream removed, broadcasting: 1\nI0113 
00:25:16.469844 3463 log.go:181] (0xc000fa5810) (0xc000ba2320) Stream removed, broadcasting: 3\nI0113 00:25:16.469851 3463 log.go:181] (0xc000fa5810) (0xc000b3c000) Stream removed, broadcasting: 5\n" Jan 13 00:25:16.476: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 13 00:25:16.476: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 13 00:25:16.476: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 00:25:16.697: INFO: stderr: "I0113 00:25:16.611774 3481 log.go:181] (0xc00003a420) (0xc000bc4000) Create stream\nI0113 00:25:16.611850 3481 log.go:181] (0xc00003a420) (0xc000bc4000) Stream added, broadcasting: 1\nI0113 00:25:16.613974 3481 log.go:181] (0xc00003a420) Reply frame received for 1\nI0113 00:25:16.614043 3481 log.go:181] (0xc00003a420) (0xc000599180) Create stream\nI0113 00:25:16.614074 3481 log.go:181] (0xc00003a420) (0xc000599180) Stream added, broadcasting: 3\nI0113 00:25:16.615249 3481 log.go:181] (0xc00003a420) Reply frame received for 3\nI0113 00:25:16.615295 3481 log.go:181] (0xc00003a420) (0xc0006340a0) Create stream\nI0113 00:25:16.615311 3481 log.go:181] (0xc00003a420) (0xc0006340a0) Stream added, broadcasting: 5\nI0113 00:25:16.616313 3481 log.go:181] (0xc00003a420) Reply frame received for 5\nI0113 00:25:16.690059 3481 log.go:181] (0xc00003a420) Data frame received for 3\nI0113 00:25:16.690099 3481 log.go:181] (0xc000599180) (3) Data frame handling\nI0113 00:25:16.690122 3481 log.go:181] (0xc000599180) (3) Data frame sent\nI0113 00:25:16.690367 3481 log.go:181] (0xc00003a420) Data frame received for 5\nI0113 00:25:16.690387 3481 log.go:181] (0xc0006340a0) (5) Data frame handling\nI0113 00:25:16.690394 3481 log.go:181] (0xc0006340a0) (5) Data frame sent\nI0113 00:25:16.690399 3481 log.go:181] (0xc00003a420) Data frame received for 5\nI0113 00:25:16.690404 3481 log.go:181] (0xc0006340a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0113 00:25:16.690429 3481 log.go:181] (0xc00003a420) Data frame received for 3\nI0113 00:25:16.690455 3481 log.go:181] (0xc000599180) (3) Data frame handling\nI0113 00:25:16.691812 3481 log.go:181] (0xc00003a420) Data frame received for 1\nI0113 00:25:16.691838 3481 log.go:181] (0xc000bc4000) (1) Data frame handling\nI0113 00:25:16.691856 3481 log.go:181] (0xc000bc4000) (1) Data frame sent\nI0113 00:25:16.691875 3481 log.go:181] (0xc00003a420) (0xc000bc4000) Stream removed, broadcasting: 1\nI0113 00:25:16.691898 3481 log.go:181] (0xc00003a420) Go away received\nI0113 00:25:16.692272 3481 log.go:181] (0xc00003a420) (0xc000bc4000) Stream removed, broadcasting: 1\nI0113 00:25:16.692292 3481 log.go:181] (0xc00003a420) (0xc000599180) Stream removed, broadcasting: 3\nI0113 00:25:16.692304 3481 log.go:181] (0xc00003a420) (0xc0006340a0) Stream removed, broadcasting: 5\n" Jan 13 00:25:16.697: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 13 00:25:16.697: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 13 00:25:16.703: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 13 
00:25:16.703: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 13 00:25:16.703: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jan 13 00:25:16.706: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 13 00:25:16.925: INFO: stderr: "I0113 00:25:16.842608 3499 log.go:181] (0xc000b56000) (0xc000b0e000) Create stream\nI0113 00:25:16.842682 3499 log.go:181] (0xc000b56000) (0xc000b0e000) Stream added, broadcasting: 1\nI0113 00:25:16.845694 3499 log.go:181] (0xc000b56000) Reply frame received for 1\nI0113 00:25:16.845748 3499 log.go:181] (0xc000b56000) (0xc000b0e0a0) Create stream\nI0113 00:25:16.845776 3499 log.go:181] (0xc000b56000) (0xc000b0e0a0) Stream added, broadcasting: 3\nI0113 00:25:16.846948 3499 log.go:181] (0xc000b56000) Reply frame received for 3\nI0113 00:25:16.846992 3499 log.go:181] (0xc000b56000) (0xc000e06000) Create stream\nI0113 00:25:16.847007 3499 log.go:181] (0xc000b56000) (0xc000e06000) Stream added, broadcasting: 5\nI0113 00:25:16.847955 3499 log.go:181] (0xc000b56000) Reply frame received for 5\nI0113 00:25:16.917314 3499 log.go:181] (0xc000b56000) Data frame received for 5\nI0113 00:25:16.917359 3499 log.go:181] (0xc000e06000) (5) Data frame handling\nI0113 00:25:16.917371 3499 log.go:181] (0xc000e06000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0113 00:25:16.917418 3499 log.go:181] (0xc000b56000) Data frame received for 3\nI0113 00:25:16.917450 3499 log.go:181] (0xc000b0e0a0) (3) Data frame handling\nI0113 00:25:16.917483 3499 log.go:181] (0xc000b0e0a0) (3) Data frame sent\nI0113 00:25:16.917505 3499 log.go:181] (0xc000b56000) Data frame received for 3\nI0113 00:25:16.917528 3499 log.go:181] (0xc000b56000) Data frame received for 5\nI0113 00:25:16.917550 3499 log.go:181] (0xc000e06000) (5) Data frame handling\nI0113 00:25:16.917573 3499 log.go:181] (0xc000b0e0a0) (3) Data frame handling\nI0113 00:25:16.919244 3499 log.go:181] (0xc000b56000) Data frame received for 1\nI0113 00:25:16.919335 3499 log.go:181] (0xc000b0e000) (1) Data frame handling\nI0113 00:25:16.919369 3499 log.go:181] (0xc000b0e000) (1) Data frame sent\nI0113 00:25:16.919391 3499 log.go:181] (0xc000b56000) (0xc000b0e000) Stream removed, broadcasting: 1\nI0113 00:25:16.919410 3499 log.go:181] (0xc000b56000) Go away received\nI0113 00:25:16.919720 3499 log.go:181] (0xc000b56000) (0xc000b0e000) Stream removed, broadcasting: 1\nI0113 00:25:16.919736 3499 log.go:181] (0xc000b56000) (0xc000b0e0a0) Stream removed, broadcasting: 3\nI0113 00:25:16.919744 3499 log.go:181] (0xc000b56000) (0xc000e06000) Stream removed, broadcasting: 5\n" Jan 13 00:25:16.925: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 13 00:25:16.925: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 13 00:25:16.926: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 13 00:25:17.181: INFO: stderr: "I0113 00:25:17.058400 3517 log.go:181] (0xc0008e2210) (0xc0001fa000) Create stream\nI0113 00:25:17.058462 
3517 log.go:181] (0xc0008e2210) (0xc0001fa000) Stream added, broadcasting: 1\nI0113 00:25:17.061039 3517 log.go:181] (0xc0008e2210) Reply frame received for 1\nI0113 00:25:17.061096 3517 log.go:181] (0xc0008e2210) (0xc0005cc000) Create stream\nI0113 00:25:17.061121 3517 log.go:181] (0xc0008e2210) (0xc0005cc000) Stream added, broadcasting: 3\nI0113 00:25:17.062570 3517 log.go:181] (0xc0008e2210) Reply frame received for 3\nI0113 00:25:17.062613 3517 log.go:181] (0xc0008e2210) (0xc0005cc0a0) Create stream\nI0113 00:25:17.062624 3517 log.go:181] (0xc0008e2210) (0xc0005cc0a0) Stream added, broadcasting: 5\nI0113 00:25:17.063412 3517 log.go:181] (0xc0008e2210) Reply frame received for 5\nI0113 00:25:17.120123 3517 log.go:181] (0xc0008e2210) Data frame received for 5\nI0113 00:25:17.120154 3517 log.go:181] (0xc0005cc0a0) (5) Data frame handling\nI0113 00:25:17.120178 3517 log.go:181] (0xc0005cc0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0113 00:25:17.175540 3517 log.go:181] (0xc0008e2210) Data frame received for 5\nI0113 00:25:17.175580 3517 log.go:181] (0xc0005cc0a0) (5) Data frame handling\nI0113 00:25:17.175603 3517 log.go:181] (0xc0008e2210) Data frame received for 3\nI0113 00:25:17.175612 3517 log.go:181] (0xc0005cc000) (3) Data frame handling\nI0113 00:25:17.175621 3517 log.go:181] (0xc0005cc000) (3) Data frame sent\nI0113 00:25:17.175630 3517 log.go:181] (0xc0008e2210) Data frame received for 3\nI0113 00:25:17.175638 3517 log.go:181] (0xc0005cc000) (3) Data frame handling\nI0113 00:25:17.177329 3517 log.go:181] (0xc0008e2210) Data frame received for 1\nI0113 00:25:17.177345 3517 log.go:181] (0xc0001fa000) (1) Data frame handling\nI0113 00:25:17.177358 3517 log.go:181] (0xc0001fa000) (1) Data frame sent\nI0113 00:25:17.177372 3517 log.go:181] (0xc0008e2210) (0xc0001fa000) Stream removed, broadcasting: 1\nI0113 00:25:17.177387 3517 log.go:181] (0xc0008e2210) Go away received\nI0113 00:25:17.177760 3517 log.go:181] (0xc0008e2210) (0xc0001fa000) Stream removed, broadcasting: 1\nI0113 00:25:17.177776 3517 log.go:181] (0xc0008e2210) (0xc0005cc000) Stream removed, broadcasting: 3\nI0113 00:25:17.177784 3517 log.go:181] (0xc0008e2210) (0xc0005cc0a0) Stream removed, broadcasting: 5\n" Jan 13 00:25:17.182: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 13 00:25:17.182: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 13 00:25:17.182: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 13 00:25:17.457: INFO: stderr: "I0113 00:25:17.313643 3535 log.go:181] (0xc000c054a0) (0xc000578960) Create stream\nI0113 00:25:17.313712 3535 log.go:181] (0xc000c054a0) (0xc000578960) Stream added, broadcasting: 1\nI0113 00:25:17.315537 3535 log.go:181] (0xc000c054a0) Reply frame received for 1\nI0113 00:25:17.315599 3535 log.go:181] (0xc000c054a0) (0xc000bae140) Create stream\nI0113 00:25:17.315622 3535 log.go:181] (0xc000c054a0) (0xc000bae140) Stream added, broadcasting: 3\nI0113 00:25:17.316607 3535 log.go:181] (0xc000c054a0) Reply frame received for 3\nI0113 00:25:17.316650 3535 log.go:181] (0xc000c054a0) (0xc000eba0a0) Create stream\nI0113 00:25:17.316677 3535 log.go:181] (0xc000c054a0) (0xc000eba0a0) Stream added, broadcasting: 5\nI0113 00:25:17.317634 3535 log.go:181] 
(0xc000c054a0) Reply frame received for 5\nI0113 00:25:17.393828 3535 log.go:181] (0xc000c054a0) Data frame received for 5\nI0113 00:25:17.393850 3535 log.go:181] (0xc000eba0a0) (5) Data frame handling\nI0113 00:25:17.393861 3535 log.go:181] (0xc000eba0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0113 00:25:17.451629 3535 log.go:181] (0xc000c054a0) Data frame received for 5\nI0113 00:25:17.451674 3535 log.go:181] (0xc000eba0a0) (5) Data frame handling\nI0113 00:25:17.451705 3535 log.go:181] (0xc000c054a0) Data frame received for 3\nI0113 00:25:17.451717 3535 log.go:181] (0xc000bae140) (3) Data frame handling\nI0113 00:25:17.451728 3535 log.go:181] (0xc000bae140) (3) Data frame sent\nI0113 00:25:17.451739 3535 log.go:181] (0xc000c054a0) Data frame received for 3\nI0113 00:25:17.451748 3535 log.go:181] (0xc000bae140) (3) Data frame handling\nI0113 00:25:17.453071 3535 log.go:181] (0xc000c054a0) Data frame received for 1\nI0113 00:25:17.453097 3535 log.go:181] (0xc000578960) (1) Data frame handling\nI0113 00:25:17.453105 3535 log.go:181] (0xc000578960) (1) Data frame sent\nI0113 00:25:17.453113 3535 log.go:181] (0xc000c054a0) (0xc000578960) Stream removed, broadcasting: 1\nI0113 00:25:17.453124 3535 log.go:181] (0xc000c054a0) Go away received\nI0113 00:25:17.453380 3535 log.go:181] (0xc000c054a0) (0xc000578960) Stream removed, broadcasting: 1\nI0113 00:25:17.453399 3535 log.go:181] (0xc000c054a0) (0xc000bae140) Stream removed, broadcasting: 3\nI0113 00:25:17.453410 3535 log.go:181] (0xc000c054a0) (0xc000eba0a0) Stream removed, broadcasting: 5\n" Jan 13 00:25:17.458: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 13 00:25:17.458: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 13 00:25:17.458: INFO: Waiting for statefulset status.replicas updated to 0 Jan 13 00:25:17.508: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jan 13 00:25:27.517: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 13 00:25:27.517: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 13 00:25:27.517: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 13 00:25:27.573: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 00:25:27.573: INFO: ss-0 leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:24:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:24:45 +0000 UTC }] Jan 13 00:25:27.574: INFO: ss-1 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:05 +0000 UTC }] Jan 13 00:25:27.574: INFO: ss-2 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 
00:25:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:05 +0000 UTC }] Jan 13 00:25:27.574: INFO: Jan 13 00:25:27.574: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 13 00:25:28.747: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 00:25:28.747: INFO: ss-0 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:24:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:24:45 +0000 UTC }] Jan 13 00:25:28.747: INFO: ss-1 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:05 +0000 UTC }] Jan 13 00:25:28.747: INFO: ss-2 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:05 +0000 UTC }] Jan 13 00:25:28.747: INFO: Jan 13 00:25:28.747: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 13 00:25:29.893: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 00:25:29.893: INFO: ss-0 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:24:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:24:45 +0000 UTC }] Jan 13 00:25:29.893: INFO: ss-1 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:05 +0000 UTC }] Jan 13 00:25:29.893: INFO: ss-2 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 
+0000 UTC 2021-01-13 00:25:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:05 +0000 UTC }] Jan 13 00:25:29.893: INFO: Jan 13 00:25:29.893: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 13 00:25:30.898: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 00:25:30.898: INFO: ss-0 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:24:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:24:45 +0000 UTC }] Jan 13 00:25:30.898: INFO: ss-1 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:05 +0000 UTC }] Jan 13 00:25:30.898: INFO: ss-2 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:05 +0000 UTC }] Jan 13 00:25:30.898: INFO: Jan 13 00:25:30.898: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 13 00:25:31.904: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 00:25:31.904: INFO: ss-0 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:24:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:24:45 +0000 UTC }] Jan 13 00:25:31.904: INFO: ss-1 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:05 +0000 UTC }] Jan 13 00:25:31.904: INFO: ss-2 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:05 +0000 UTC }] Jan 13 
00:25:31.904: INFO: Jan 13 00:25:31.904: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 13 00:25:32.910: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 00:25:32.910: INFO: ss-0 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:24:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:24:45 +0000 UTC }] Jan 13 00:25:32.910: INFO: ss-1 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:05 +0000 UTC }] Jan 13 00:25:32.910: INFO: ss-2 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:05 +0000 UTC }] Jan 13 00:25:32.910: INFO: Jan 13 00:25:32.910: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 13 00:25:33.915: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 00:25:33.915: INFO: ss-0 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:24:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:24:45 +0000 UTC }] Jan 13 00:25:33.915: INFO: ss-1 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:05 +0000 UTC }] Jan 13 00:25:33.915: INFO: ss-2 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:05 +0000 UTC }] Jan 13 00:25:33.915: INFO: Jan 13 00:25:33.916: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 13 00:25:34.920: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 00:25:34.920: INFO: ss-0 leguer-worker Running 
30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:24:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:24:45 +0000 UTC }] Jan 13 00:25:34.920: INFO: ss-1 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:05 +0000 UTC }] Jan 13 00:25:34.920: INFO: ss-2 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:05 +0000 UTC }] Jan 13 00:25:34.920: INFO: Jan 13 00:25:34.920: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 13 00:25:35.936: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 00:25:35.936: INFO: ss-0 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:24:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:24:45 +0000 UTC }] Jan 13 00:25:35.936: INFO: ss-1 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:05 +0000 UTC }] Jan 13 00:25:35.936: INFO: ss-2 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:05 +0000 UTC }] Jan 13 00:25:35.936: INFO: Jan 13 00:25:35.936: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 13 00:25:36.941: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 00:25:36.942: INFO: ss-0 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:24:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:17 +0000 UTC ContainersNotReady containers with unready 
status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:24:45 +0000 UTC }] Jan 13 00:25:36.942: INFO: ss-1 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:05 +0000 UTC }] Jan 13 00:25:36.942: INFO: ss-2 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 00:25:05 +0000 UTC }] Jan 13 00:25:36.942: INFO: Jan 13 00:25:36.942: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-8348 Jan 13 00:25:37.947: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 00:25:38.091: INFO: rc: 1 Jan 13 00:25:38.091: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Jan 13 00:25:48.091: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 00:25:48.229: INFO: rc: 1 Jan 13 00:25:48.229: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Jan 13 00:25:58.229: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 00:25:58.367: INFO: rc: 1 Jan 13 00:25:58.367: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Jan 13 00:26:08.367: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 00:26:08.509: INFO: rc: 1 Jan 13 00:26:08.510: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Jan 13 00:26:18.510: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 00:26:18.617: INFO: rc: 1 Jan 13 00:26:18.617: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 00:26:28.617: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 00:26:28.717: INFO: rc: 1 Jan 13 00:26:28.717: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 00:26:38.717: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 00:26:38.817: INFO: rc: 1 Jan 13 00:26:38.817: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 00:26:48.818: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 00:26:48.919: INFO: rc: 1 Jan 13 00:26:48.919: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 00:26:58.919: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 00:26:59.024: INFO: rc: 1 Jan 13 
00:26:59.024: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 00:27:09.024: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 00:27:09.148: INFO: rc: 1 Jan 13 00:27:09.148: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 00:27:19.149: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 00:27:19.241: INFO: rc: 1 Jan 13 00:27:19.241: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 00:27:29.241: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 00:27:29.340: INFO: rc: 1 Jan 13 00:27:29.340: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 00:27:39.340: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 00:27:39.447: INFO: rc: 1 Jan 13 00:27:39.447: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 00:27:49.447: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 00:27:49.562: INFO: rc: 1 Jan 13 00:27:49.562: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ 
|| true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 00:27:59.562: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 00:27:59.657: INFO: rc: 1 Jan 13 00:27:59.657: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 00:28:09.657: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 00:28:09.763: INFO: rc: 1 Jan 13 00:28:09.763: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 00:28:19.763: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 00:28:19.863: INFO: rc: 1 Jan 13 00:28:19.864: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 00:28:29.864: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 00:28:29.965: INFO: rc: 1 Jan 13 00:28:29.965: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 00:28:39.966: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 00:28:40.158: INFO: rc: 1 Jan 13 00:28:40.158: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 00:28:50.159: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec 
ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 00:28:50.257: INFO: rc: 1 Jan 13 00:28:50.257: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 00:29:00.257: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 00:29:00.362: INFO: rc: 1 Jan 13 00:29:00.362: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 00:29:10.362: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 00:29:10.469: INFO: rc: 1 Jan 13 00:29:10.469: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 00:29:20.469: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 00:29:20.577: INFO: rc: 1 Jan 13 00:29:20.577: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 00:29:30.577: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 00:29:30.681: INFO: rc: 1 Jan 13 00:29:30.681: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 00:29:40.681: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 00:29:40.771: INFO: rc: 1 Jan 13 00:29:40.771: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 
--kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 00:29:50.771: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 00:29:50.882: INFO: rc: 1 Jan 13 00:29:50.883: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 00:30:00.883: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 00:30:00.983: INFO: rc: 1 Jan 13 00:30:00.983: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 00:30:10.983: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 00:30:11.092: INFO: rc: 1 Jan 13 00:30:11.092: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 00:30:21.092: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 00:30:21.196: INFO: rc: 1 Jan 13 00:30:21.196: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 00:30:31.198: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 00:30:31.306: INFO: rc: 1 Jan 13 00:30:31.306: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 00:30:41.306: INFO: 
Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8348 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 00:30:41.420: INFO: rc: 1 Jan 13 00:30:41.420: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: Jan 13 00:30:41.421: INFO: Scaling statefulset ss to 0 Jan 13 00:30:41.430: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jan 13 00:30:41.433: INFO: Deleting all statefulset in ns statefulset-8348 Jan 13 00:30:41.435: INFO: Scaling statefulset ss to 0 Jan 13 00:30:41.444: INFO: Waiting for statefulset status.replicas updated to 0 Jan 13 00:30:41.446: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:30:41.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8348" for this suite. • [SLOW TEST:356.327 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":309,"completed":258,"skipped":4452,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:30:41.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Jan 13 00:30:41.569: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 13 00:30:41.583: INFO: Waiting for terminating namespaces to be deleted... 
Jan 13 00:30:41.587: INFO: Logging pods the apiserver thinks is on node leguer-worker before test Jan 13 00:30:41.593: INFO: rally-a8f48c6d-3kmika18-pdtzv from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 00:30:41.593: INFO: Container rally-a8f48c6d-3kmika18 ready: true, restart count 0 Jan 13 00:30:41.593: INFO: rally-a8f48c6d-3kmika18-pllzg from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 00:30:41.593: INFO: Container rally-a8f48c6d-3kmika18 ready: true, restart count 0 Jan 13 00:30:41.593: INFO: rally-a8f48c6d-4cyi45kq-j5tzz from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 00:30:41.593: INFO: Container rally-a8f48c6d-4cyi45kq ready: true, restart count 0 Jan 13 00:30:41.593: INFO: rally-a8f48c6d-f3hls6a3-57dwc from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 13 00:30:41.593: INFO: Container rally-a8f48c6d-f3hls6a3 ready: true, restart count 0 Jan 13 00:30:41.593: INFO: rally-a8f48c6d-1y3amfc0-lp8st from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 13 00:30:41.593: INFO: Container rally-a8f48c6d-1y3amfc0 ready: true, restart count 0 Jan 13 00:30:41.593: INFO: rally-a8f48c6d-9pqmjehi-9zwjj from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 00:30:41.593: INFO: Container rally-a8f48c6d-9pqmjehi ready: true, restart count 0 Jan 13 00:30:41.593: INFO: chaos-controller-manager-69c479c674-s796v from default started at 2021-01-10 20:58:24 +0000 UTC (1 container statuses recorded) Jan 13 00:30:41.593: INFO: Container chaos-mesh ready: true, restart count 0 Jan 13 00:30:41.593: INFO: chaos-daemon-lv692 from default started at 2021-01-10 20:58:25 +0000 UTC (1 container statuses recorded) Jan 13 00:30:41.593: INFO: Container chaos-daemon ready: true, restart count 0 Jan 13 00:30:41.593: INFO: kindnet-psm25 from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Jan 13 00:30:41.593: INFO: Container kindnet-cni ready: true, restart count 0 Jan 13 00:30:41.593: INFO: kube-proxy-bmbcs from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Jan 13 00:30:41.593: INFO: Container kube-proxy ready: true, restart count 0 Jan 13 00:30:41.594: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test Jan 13 00:30:41.599: INFO: rally-a8f48c6d-4cyi45kq-knr4r from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 00:30:41.599: INFO: Container rally-a8f48c6d-4cyi45kq ready: true, restart count 0 Jan 13 00:30:41.599: INFO: rally-a8f48c6d-f3hls6a3-dwt8n from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 13 00:30:41.599: INFO: Container rally-a8f48c6d-f3hls6a3 ready: true, restart count 0 Jan 13 00:30:41.599: INFO: rally-a8f48c6d-1y3amfc0-hh9qk from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 13 00:30:41.599: INFO: Container rally-a8f48c6d-1y3amfc0 ready: true, restart count 0 Jan 13 00:30:41.599: INFO: rally-a8f48c6d-9pqmjehi-85slb from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 00:30:41.599: INFO: Container rally-a8f48c6d-9pqmjehi ready: true, 
restart count 0 Jan 13 00:30:41.599: INFO: rally-a8f48c6d-vnukxqu0-llj24 from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 00:30:41.599: INFO: Container rally-a8f48c6d-vnukxqu0 ready: true, restart count 0 Jan 13 00:30:41.599: INFO: rally-a8f48c6d-vnukxqu0-v85kr from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 00:30:41.599: INFO: Container rally-a8f48c6d-vnukxqu0 ready: true, restart count 0 Jan 13 00:30:41.599: INFO: chaos-daemon-ffkg7 from default started at 2021-01-10 20:58:25 +0000 UTC (1 container statuses recorded) Jan 13 00:30:41.599: INFO: Container chaos-daemon ready: true, restart count 0 Jan 13 00:30:41.599: INFO: kindnet-8wggd from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Jan 13 00:30:41.599: INFO: Container kindnet-cni ready: true, restart count 0 Jan 13 00:30:41.599: INFO: kube-proxy-29gxg from kube-system started at 2021-01-10 17:38:09 +0000 UTC (1 container statuses recorded) Jan 13 00:30:41.599: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-fe1abc32-2c52-492d-b8c1-ebe10ddc71f5 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 172.18.0.12 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 172.18.0.12 but use UDP protocol on the node which pod2 resides STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 Jan 13 00:31:01.867: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.12 http://127.0.0.1:54321/hostname] Namespace:sched-pred-8421 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 00:31:01.867: INFO: >>> kubeConfig: /root/.kube/config I0113 00:31:01.909121 7 log.go:181] (0xc000731340) (0xc0005dce60) Create stream I0113 00:31:01.909148 7 log.go:181] (0xc000731340) (0xc0005dce60) Stream added, broadcasting: 1 I0113 00:31:01.911234 7 log.go:181] (0xc000731340) Reply frame received for 1 I0113 00:31:01.911277 7 log.go:181] (0xc000731340) (0xc004311540) Create stream I0113 00:31:01.911293 7 log.go:181] (0xc000731340) (0xc004311540) Stream added, broadcasting: 3 I0113 00:31:01.912170 7 log.go:181] (0xc000731340) Reply frame received for 3 I0113 00:31:01.912202 7 log.go:181] (0xc000731340) (0xc0005ab680) Create stream I0113 00:31:01.912217 7 log.go:181] (0xc000731340) (0xc0005ab680) Stream added, broadcasting: 5 I0113 00:31:01.913287 7 log.go:181] (0xc000731340) Reply frame received for 5 I0113 00:31:02.005382 7 log.go:181] (0xc000731340) Data frame received for 5 I0113 00:31:02.005469 7 log.go:181] (0xc0005ab680) (5) Data frame handling I0113 00:31:02.005539 7 log.go:181] (0xc0005ab680) (5) Data frame sent I0113 00:31:02.005556 7 log.go:181] (0xc000731340) Data frame 
received for 5 I0113 00:31:02.005573 7 log.go:181] (0xc0005ab680) (5) Data frame handling I0113 00:31:02.005616 7 log.go:181] (0xc0005ab680) (5) Data frame sent I0113 00:31:02.005665 7 log.go:181] (0xc000731340) Data frame received for 5 I0113 00:31:02.005681 7 log.go:181] (0xc0005ab680) (5) Data frame handling I0113 00:31:02.005712 7 log.go:181] (0xc0005ab680) (5) Data frame sent I0113 00:31:02.005736 7 log.go:181] (0xc000731340) Data frame received for 5 I0113 00:31:02.005747 7 log.go:181] (0xc0005ab680) (5) Data frame handling I0113 00:31:02.005815 7 log.go:181] (0xc0005ab680) (5) Data frame sent I0113 00:31:02.005837 7 log.go:181] (0xc000731340) Data frame received for 5 I0113 00:31:02.005849 7 log.go:181] (0xc0005ab680) (5) Data frame handling I0113 00:31:02.005862 7 log.go:181] (0xc0005ab680) (5) Data frame sent I0113 00:31:02.005873 7 log.go:181] (0xc000731340) Data frame received for 5 I0113 00:31:02.005930 7 log.go:181] (0xc0005ab680) (5) Data frame handling I0113 00:31:02.005981 7 log.go:181] (0xc0005ab680) (5) Data frame sent I0113 00:31:02.006108 7 log.go:181] (0xc000731340) Data frame received for 5 I0113 00:31:02.006126 7 log.go:181] (0xc0005ab680) (5) Data frame handling I0113 00:31:02.006148 7 log.go:181] (0xc0005ab680) (5) Data frame sent I0113 00:31:02.006268 7 log.go:181] (0xc000731340) Data frame received for 3 I0113 00:31:02.006289 7 log.go:181] (0xc004311540) (3) Data frame handling I0113 00:31:02.006308 7 log.go:181] (0xc004311540) (3) Data frame sent I0113 00:31:02.006976 7 log.go:181] (0xc000731340) Data frame received for 5 I0113 00:31:02.007001 7 log.go:181] (0xc0005ab680) (5) Data frame handling I0113 00:31:02.007319 7 log.go:181] (0xc000731340) Data frame received for 3 I0113 00:31:02.007339 7 log.go:181] (0xc004311540) (3) Data frame handling I0113 00:31:02.009023 7 log.go:181] (0xc000731340) Data frame received for 1 I0113 00:31:02.009052 7 log.go:181] (0xc0005dce60) (1) Data frame handling I0113 00:31:02.009088 7 log.go:181] (0xc0005dce60) (1) Data frame sent I0113 00:31:02.009214 7 log.go:181] (0xc000731340) (0xc0005dce60) Stream removed, broadcasting: 1 I0113 00:31:02.009314 7 log.go:181] (0xc000731340) (0xc0005dce60) Stream removed, broadcasting: 1 I0113 00:31:02.009327 7 log.go:181] (0xc000731340) (0xc004311540) Stream removed, broadcasting: 3 I0113 00:31:02.009412 7 log.go:181] (0xc000731340) Go away received I0113 00:31:02.009532 7 log.go:181] (0xc000731340) (0xc0005ab680) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.12, port: 54321 Jan 13 00:31:02.009: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.12:54321/hostname] Namespace:sched-pred-8421 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 00:31:02.009: INFO: >>> kubeConfig: /root/.kube/config I0113 00:31:02.035584 7 log.go:181] (0xc002dfe4d0) (0xc001912140) Create stream I0113 00:31:02.035618 7 log.go:181] (0xc002dfe4d0) (0xc001912140) Stream added, broadcasting: 1 I0113 00:31:02.038237 7 log.go:181] (0xc002dfe4d0) Reply frame received for 1 I0113 00:31:02.038278 7 log.go:181] (0xc002dfe4d0) (0xc001912280) Create stream I0113 00:31:02.038288 7 log.go:181] (0xc002dfe4d0) (0xc001912280) Stream added, broadcasting: 3 I0113 00:31:02.039254 7 log.go:181] (0xc002dfe4d0) Reply frame received for 3 I0113 00:31:02.039305 7 log.go:181] (0xc002dfe4d0) (0xc001912320) Create stream I0113 00:31:02.039319 7 log.go:181] 
(0xc002dfe4d0) (0xc001912320) Stream added, broadcasting: 5 I0113 00:31:02.040156 7 log.go:181] (0xc002dfe4d0) Reply frame received for 5 I0113 00:31:02.113895 7 log.go:181] (0xc002dfe4d0) Data frame received for 5 I0113 00:31:02.113965 7 log.go:181] (0xc001912320) (5) Data frame handling I0113 00:31:02.113987 7 log.go:181] (0xc001912320) (5) Data frame sent I0113 00:31:02.114002 7 log.go:181] (0xc002dfe4d0) Data frame received for 5 I0113 00:31:02.114010 7 log.go:181] (0xc001912320) (5) Data frame handling I0113 00:31:02.114027 7 log.go:181] (0xc001912320) (5) Data frame sent I0113 00:31:02.114034 7 log.go:181] (0xc002dfe4d0) Data frame received for 5 I0113 00:31:02.114042 7 log.go:181] (0xc001912320) (5) Data frame handling I0113 00:31:02.114054 7 log.go:181] (0xc001912320) (5) Data frame sent I0113 00:31:02.114066 7 log.go:181] (0xc002dfe4d0) Data frame received for 5 I0113 00:31:02.114075 7 log.go:181] (0xc001912320) (5) Data frame handling I0113 00:31:02.114091 7 log.go:181] (0xc001912320) (5) Data frame sent I0113 00:31:02.114099 7 log.go:181] (0xc002dfe4d0) Data frame received for 5 I0113 00:31:02.114107 7 log.go:181] (0xc001912320) (5) Data frame handling I0113 00:31:02.114118 7 log.go:181] (0xc001912320) (5) Data frame sent I0113 00:31:02.114776 7 log.go:181] (0xc002dfe4d0) Data frame received for 5 I0113 00:31:02.114815 7 log.go:181] (0xc001912320) (5) Data frame handling I0113 00:31:02.114829 7 log.go:181] (0xc001912320) (5) Data frame sent I0113 00:31:02.114847 7 log.go:181] (0xc002dfe4d0) Data frame received for 3 I0113 00:31:02.114864 7 log.go:181] (0xc001912280) (3) Data frame handling I0113 00:31:02.114881 7 log.go:181] (0xc001912280) (3) Data frame sent I0113 00:31:02.115536 7 log.go:181] (0xc002dfe4d0) Data frame received for 5 I0113 00:31:02.115587 7 log.go:181] (0xc001912320) (5) Data frame handling I0113 00:31:02.115635 7 log.go:181] (0xc002dfe4d0) Data frame received for 3 I0113 00:31:02.115655 7 log.go:181] (0xc001912280) (3) Data frame handling I0113 00:31:02.117484 7 log.go:181] (0xc002dfe4d0) Data frame received for 1 I0113 00:31:02.117522 7 log.go:181] (0xc001912140) (1) Data frame handling I0113 00:31:02.117551 7 log.go:181] (0xc001912140) (1) Data frame sent I0113 00:31:02.117580 7 log.go:181] (0xc002dfe4d0) (0xc001912140) Stream removed, broadcasting: 1 I0113 00:31:02.117614 7 log.go:181] (0xc002dfe4d0) Go away received I0113 00:31:02.117732 7 log.go:181] (0xc002dfe4d0) (0xc001912140) Stream removed, broadcasting: 1 I0113 00:31:02.117758 7 log.go:181] (0xc002dfe4d0) (0xc001912280) Stream removed, broadcasting: 3 I0113 00:31:02.117769 7 log.go:181] (0xc002dfe4d0) (0xc001912320) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.12, port: 54321 UDP Jan 13 00:31:02.117: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.12 54321] Namespace:sched-pred-8421 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 00:31:02.117: INFO: >>> kubeConfig: /root/.kube/config I0113 00:31:02.143361 7 log.go:181] (0xc000731600) (0xc0005dd0e0) Create stream I0113 00:31:02.143389 7 log.go:181] (0xc000731600) (0xc0005dd0e0) Stream added, broadcasting: 1 I0113 00:31:02.145372 7 log.go:181] (0xc000731600) Reply frame received for 1 I0113 00:31:02.145415 7 log.go:181] (0xc000731600) (0xc0005dd180) Create stream I0113 00:31:02.145431 7 log.go:181] (0xc000731600) (0xc0005dd180) Stream added, broadcasting: 3 I0113 00:31:02.146343 7 
log.go:181] (0xc000731600) Reply frame received for 3 I0113 00:31:02.146373 7 log.go:181] (0xc000731600) (0xc0019123c0) Create stream I0113 00:31:02.146385 7 log.go:181] (0xc000731600) (0xc0019123c0) Stream added, broadcasting: 5 I0113 00:31:02.147315 7 log.go:181] (0xc000731600) Reply frame received for 5 I0113 00:31:07.212476 7 log.go:181] (0xc000731600) Data frame received for 3 I0113 00:31:07.212520 7 log.go:181] (0xc0005dd180) (3) Data frame handling I0113 00:31:07.212551 7 log.go:181] (0xc000731600) Data frame received for 5 I0113 00:31:07.212568 7 log.go:181] (0xc0019123c0) (5) Data frame handling I0113 00:31:07.212584 7 log.go:181] (0xc0019123c0) (5) Data frame sent I0113 00:31:07.212599 7 log.go:181] (0xc000731600) Data frame received for 5 I0113 00:31:07.212608 7 log.go:181] (0xc0019123c0) (5) Data frame handling I0113 00:31:07.214529 7 log.go:181] (0xc000731600) Data frame received for 1 I0113 00:31:07.214579 7 log.go:181] (0xc0005dd0e0) (1) Data frame handling I0113 00:31:07.214612 7 log.go:181] (0xc0005dd0e0) (1) Data frame sent I0113 00:31:07.214639 7 log.go:181] (0xc000731600) (0xc0005dd0e0) Stream removed, broadcasting: 1 I0113 00:31:07.214665 7 log.go:181] (0xc000731600) Go away received I0113 00:31:07.214841 7 log.go:181] (0xc000731600) (0xc0005dd0e0) Stream removed, broadcasting: 1 I0113 00:31:07.214880 7 log.go:181] (0xc000731600) (0xc0005dd180) Stream removed, broadcasting: 3 I0113 00:31:07.214899 7 log.go:181] (0xc000731600) (0xc0019123c0) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 Jan 13 00:31:07.214: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.12 http://127.0.0.1:54321/hostname] Namespace:sched-pred-8421 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 00:31:07.214: INFO: >>> kubeConfig: /root/.kube/config I0113 00:31:07.248155 7 log.go:181] (0xc002dfec60) (0xc001912640) Create stream I0113 00:31:07.248243 7 log.go:181] (0xc002dfec60) (0xc001912640) Stream added, broadcasting: 1 I0113 00:31:07.251437 7 log.go:181] (0xc002dfec60) Reply frame received for 1 I0113 00:31:07.251480 7 log.go:181] (0xc002dfec60) (0xc0019126e0) Create stream I0113 00:31:07.251527 7 log.go:181] (0xc002dfec60) (0xc0019126e0) Stream added, broadcasting: 3 I0113 00:31:07.252735 7 log.go:181] (0xc002dfec60) Reply frame received for 3 I0113 00:31:07.252769 7 log.go:181] (0xc002dfec60) (0xc0005ab7c0) Create stream I0113 00:31:07.252784 7 log.go:181] (0xc002dfec60) (0xc0005ab7c0) Stream added, broadcasting: 5 I0113 00:31:07.254029 7 log.go:181] (0xc002dfec60) Reply frame received for 5 I0113 00:31:07.357043 7 log.go:181] (0xc002dfec60) Data frame received for 5 I0113 00:31:07.357083 7 log.go:181] (0xc0005ab7c0) (5) Data frame handling I0113 00:31:07.357107 7 log.go:181] (0xc0005ab7c0) (5) Data frame sent I0113 00:31:07.357119 7 log.go:181] (0xc002dfec60) Data frame received for 5 I0113 00:31:07.357128 7 log.go:181] (0xc0005ab7c0) (5) Data frame handling I0113 00:31:07.357142 7 log.go:181] (0xc0005ab7c0) (5) Data frame sent I0113 00:31:07.357155 7 log.go:181] (0xc002dfec60) Data frame received for 5 I0113 00:31:07.357166 7 log.go:181] (0xc0005ab7c0) (5) Data frame handling I0113 00:31:07.357176 7 log.go:181] (0xc0005ab7c0) (5) Data frame sent I0113 00:31:07.357185 7 log.go:181] (0xc002dfec60) Data frame received for 5 I0113 00:31:07.357191 7 log.go:181] (0xc0005ab7c0) (5) Data frame 
handling I0113 00:31:07.357199 7 log.go:181] (0xc0005ab7c0) (5) Data frame sent I0113 00:31:07.357208 7 log.go:181] (0xc002dfec60) Data frame received for 5 I0113 00:31:07.357213 7 log.go:181] (0xc0005ab7c0) (5) Data frame handling I0113 00:31:07.357227 7 log.go:181] (0xc0005ab7c0) (5) Data frame sent I0113 00:31:07.357245 7 log.go:181] (0xc002dfec60) Data frame received for 5 I0113 00:31:07.357268 7 log.go:181] (0xc0005ab7c0) (5) Data frame handling I0113 00:31:07.357284 7 log.go:181] (0xc0005ab7c0) (5) Data frame sent I0113 00:31:07.357300 7 log.go:181] (0xc002dfec60) Data frame received for 5 I0113 00:31:07.357319 7 log.go:181] (0xc0005ab7c0) (5) Data frame handling I0113 00:31:07.357338 7 log.go:181] (0xc0005ab7c0) (5) Data frame sent I0113 00:31:07.357351 7 log.go:181] (0xc002dfec60) Data frame received for 5 I0113 00:31:07.357357 7 log.go:181] (0xc0005ab7c0) (5) Data frame handling I0113 00:31:07.357366 7 log.go:181] (0xc0005ab7c0) (5) Data frame sent I0113 00:31:07.357371 7 log.go:181] (0xc002dfec60) Data frame received for 5 I0113 00:31:07.357376 7 log.go:181] (0xc0005ab7c0) (5) Data frame handling I0113 00:31:07.357382 7 log.go:181] (0xc0005ab7c0) (5) Data frame sent I0113 00:31:07.357399 7 log.go:181] (0xc002dfec60) Data frame received for 5 I0113 00:31:07.357415 7 log.go:181] (0xc0005ab7c0) (5) Data frame handling I0113 00:31:07.357427 7 log.go:181] (0xc0005ab7c0) (5) Data frame sent I0113 00:31:07.357437 7 log.go:181] (0xc002dfec60) Data frame received for 5 I0113 00:31:07.357447 7 log.go:181] (0xc0005ab7c0) (5) Data frame handling I0113 00:31:07.357461 7 log.go:181] (0xc0005ab7c0) (5) Data frame sent I0113 00:31:07.357475 7 log.go:181] (0xc002dfec60) Data frame received for 5 I0113 00:31:07.357488 7 log.go:181] (0xc0005ab7c0) (5) Data frame handling I0113 00:31:07.357503 7 log.go:181] (0xc0005ab7c0) (5) Data frame sent I0113 00:31:07.357967 7 log.go:181] (0xc002dfec60) Data frame received for 5 I0113 00:31:07.357997 7 log.go:181] (0xc0005ab7c0) (5) Data frame handling I0113 00:31:07.358014 7 log.go:181] (0xc0005ab7c0) (5) Data frame sent I0113 00:31:07.358028 7 log.go:181] (0xc002dfec60) Data frame received for 5 I0113 00:31:07.358039 7 log.go:181] (0xc0005ab7c0) (5) Data frame handling I0113 00:31:07.358051 7 log.go:181] (0xc0005ab7c0) (5) Data frame sent I0113 00:31:07.358071 7 log.go:181] (0xc002dfec60) Data frame received for 3 I0113 00:31:07.358083 7 log.go:181] (0xc0019126e0) (3) Data frame handling I0113 00:31:07.358100 7 log.go:181] (0xc0019126e0) (3) Data frame sent I0113 00:31:07.358744 7 log.go:181] (0xc002dfec60) Data frame received for 3 I0113 00:31:07.358770 7 log.go:181] (0xc0019126e0) (3) Data frame handling I0113 00:31:07.358845 7 log.go:181] (0xc002dfec60) Data frame received for 5 I0113 00:31:07.358861 7 log.go:181] (0xc0005ab7c0) (5) Data frame handling I0113 00:31:07.360590 7 log.go:181] (0xc002dfec60) Data frame received for 1 I0113 00:31:07.360607 7 log.go:181] (0xc001912640) (1) Data frame handling I0113 00:31:07.360623 7 log.go:181] (0xc001912640) (1) Data frame sent I0113 00:31:07.360634 7 log.go:181] (0xc002dfec60) (0xc001912640) Stream removed, broadcasting: 1 I0113 00:31:07.360673 7 log.go:181] (0xc002dfec60) Go away received I0113 00:31:07.360700 7 log.go:181] (0xc002dfec60) (0xc001912640) Stream removed, broadcasting: 1 I0113 00:31:07.360717 7 log.go:181] (0xc002dfec60) (0xc0019126e0) Stream removed, broadcasting: 3 I0113 00:31:07.360731 7 log.go:181] (0xc002dfec60) (0xc0005ab7c0) Stream removed, broadcasting: 5 STEP: checking connectivity from 
pod e2e-host-exec to serverIP: 172.18.0.12, port: 54321 Jan 13 00:31:07.360: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.12:54321/hostname] Namespace:sched-pred-8421 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 00:31:07.360: INFO: >>> kubeConfig: /root/.kube/config I0113 00:31:07.389767 7 log.go:181] (0xc002dff340) (0xc001912960) Create stream I0113 00:31:07.389793 7 log.go:181] (0xc002dff340) (0xc001912960) Stream added, broadcasting: 1 I0113 00:31:07.391494 7 log.go:181] (0xc002dff340) Reply frame received for 1 I0113 00:31:07.391517 7 log.go:181] (0xc002dff340) (0xc0043115e0) Create stream I0113 00:31:07.391526 7 log.go:181] (0xc002dff340) (0xc0043115e0) Stream added, broadcasting: 3 I0113 00:31:07.392367 7 log.go:181] (0xc002dff340) Reply frame received for 3 I0113 00:31:07.392427 7 log.go:181] (0xc002dff340) (0xc0013c4780) Create stream I0113 00:31:07.392444 7 log.go:181] (0xc002dff340) (0xc0013c4780) Stream added, broadcasting: 5 I0113 00:31:07.393373 7 log.go:181] (0xc002dff340) Reply frame received for 5 I0113 00:31:07.468061 7 log.go:181] (0xc002dff340) Data frame received for 5 I0113 00:31:07.468094 7 log.go:181] (0xc0013c4780) (5) Data frame handling I0113 00:31:07.468105 7 log.go:181] (0xc0013c4780) (5) Data frame sent I0113 00:31:07.468119 7 log.go:181] (0xc002dff340) Data frame received for 5 I0113 00:31:07.468130 7 log.go:181] (0xc0013c4780) (5) Data frame handling I0113 00:31:07.468213 7 log.go:181] (0xc0013c4780) (5) Data frame sent I0113 00:31:07.468233 7 log.go:181] (0xc002dff340) Data frame received for 5 I0113 00:31:07.468248 7 log.go:181] (0xc0013c4780) (5) Data frame handling I0113 00:31:07.468271 7 log.go:181] (0xc0013c4780) (5) Data frame sent I0113 00:31:07.468286 7 log.go:181] (0xc002dff340) Data frame received for 5 I0113 00:31:07.468296 7 log.go:181] (0xc0013c4780) (5) Data frame handling I0113 00:31:07.468318 7 log.go:181] (0xc0013c4780) (5) Data frame sent I0113 00:31:07.468332 7 log.go:181] (0xc002dff340) Data frame received for 5 I0113 00:31:07.468341 7 log.go:181] (0xc0013c4780) (5) Data frame handling I0113 00:31:07.468367 7 log.go:181] (0xc0013c4780) (5) Data frame sent I0113 00:31:07.468389 7 log.go:181] (0xc002dff340) Data frame received for 5 I0113 00:31:07.468399 7 log.go:181] (0xc0013c4780) (5) Data frame handling I0113 00:31:07.468414 7 log.go:181] (0xc0013c4780) (5) Data frame sent I0113 00:31:07.468426 7 log.go:181] (0xc002dff340) Data frame received for 5 I0113 00:31:07.468440 7 log.go:181] (0xc0013c4780) (5) Data frame handling I0113 00:31:07.468461 7 log.go:181] (0xc0013c4780) (5) Data frame sent I0113 00:31:07.468474 7 log.go:181] (0xc002dff340) Data frame received for 5 I0113 00:31:07.468484 7 log.go:181] (0xc0013c4780) (5) Data frame handling I0113 00:31:07.468496 7 log.go:181] (0xc0013c4780) (5) Data frame sent I0113 00:31:07.468615 7 log.go:181] (0xc002dff340) Data frame received for 5 I0113 00:31:07.468635 7 log.go:181] (0xc0013c4780) (5) Data frame handling I0113 00:31:07.468658 7 log.go:181] (0xc0013c4780) (5) Data frame sent I0113 00:31:07.468686 7 log.go:181] (0xc002dff340) Data frame received for 3 I0113 00:31:07.468715 7 log.go:181] (0xc0043115e0) (3) Data frame handling I0113 00:31:07.468725 7 log.go:181] (0xc0043115e0) (3) Data frame sent I0113 00:31:07.468740 7 log.go:181] (0xc002dff340) Data frame received for 5 I0113 00:31:07.468747 7 log.go:181] (0xc0013c4780) (5) Data frame handling I0113 
00:31:07.468754 7 log.go:181] (0xc0013c4780) (5) Data frame sent I0113 00:31:07.469564 7 log.go:181] (0xc002dff340) Data frame received for 5 I0113 00:31:07.469586 7 log.go:181] (0xc0013c4780) (5) Data frame handling I0113 00:31:07.469609 7 log.go:181] (0xc002dff340) Data frame received for 3 I0113 00:31:07.469694 7 log.go:181] (0xc0043115e0) (3) Data frame handling I0113 00:31:07.471114 7 log.go:181] (0xc002dff340) Data frame received for 1 I0113 00:31:07.471127 7 log.go:181] (0xc001912960) (1) Data frame handling I0113 00:31:07.471135 7 log.go:181] (0xc001912960) (1) Data frame sent I0113 00:31:07.471143 7 log.go:181] (0xc002dff340) (0xc001912960) Stream removed, broadcasting: 1 I0113 00:31:07.471159 7 log.go:181] (0xc002dff340) Go away received I0113 00:31:07.471312 7 log.go:181] (0xc002dff340) (0xc001912960) Stream removed, broadcasting: 1 I0113 00:31:07.471342 7 log.go:181] (0xc002dff340) (0xc0043115e0) Stream removed, broadcasting: 3 I0113 00:31:07.471363 7 log.go:181] (0xc002dff340) (0xc0013c4780) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.12, port: 54321 UDP Jan 13 00:31:07.471: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.12 54321] Namespace:sched-pred-8421 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 00:31:07.471: INFO: >>> kubeConfig: /root/.kube/config I0113 00:31:07.500464 7 log.go:181] (0xc000141ef0) (0xc004311900) Create stream I0113 00:31:07.500494 7 log.go:181] (0xc000141ef0) (0xc004311900) Stream added, broadcasting: 1 I0113 00:31:07.502696 7 log.go:181] (0xc000141ef0) Reply frame received for 1 I0113 00:31:07.502780 7 log.go:181] (0xc000141ef0) (0xc0043119a0) Create stream I0113 00:31:07.502791 7 log.go:181] (0xc000141ef0) (0xc0043119a0) Stream added, broadcasting: 3 I0113 00:31:07.503558 7 log.go:181] (0xc000141ef0) Reply frame received for 3 I0113 00:31:07.503586 7 log.go:181] (0xc000141ef0) (0xc004311a40) Create stream I0113 00:31:07.503599 7 log.go:181] (0xc000141ef0) (0xc004311a40) Stream added, broadcasting: 5 I0113 00:31:07.504436 7 log.go:181] (0xc000141ef0) Reply frame received for 5 I0113 00:31:12.576546 7 log.go:181] (0xc000141ef0) Data frame received for 5 I0113 00:31:12.576599 7 log.go:181] (0xc004311a40) (5) Data frame handling I0113 00:31:12.576640 7 log.go:181] (0xc004311a40) (5) Data frame sent I0113 00:31:12.576946 7 log.go:181] (0xc000141ef0) Data frame received for 5 I0113 00:31:12.576990 7 log.go:181] (0xc000141ef0) Data frame received for 3 I0113 00:31:12.577035 7 log.go:181] (0xc0043119a0) (3) Data frame handling I0113 00:31:12.577066 7 log.go:181] (0xc004311a40) (5) Data frame handling I0113 00:31:12.578944 7 log.go:181] (0xc000141ef0) Data frame received for 1 I0113 00:31:12.578958 7 log.go:181] (0xc004311900) (1) Data frame handling I0113 00:31:12.578965 7 log.go:181] (0xc004311900) (1) Data frame sent I0113 00:31:12.578974 7 log.go:181] (0xc000141ef0) (0xc004311900) Stream removed, broadcasting: 1 I0113 00:31:12.578984 7 log.go:181] (0xc000141ef0) Go away received I0113 00:31:12.579083 7 log.go:181] (0xc000141ef0) (0xc004311900) Stream removed, broadcasting: 1 I0113 00:31:12.579107 7 log.go:181] (0xc000141ef0) (0xc0043119a0) Stream removed, broadcasting: 3 I0113 00:31:12.579118 7 log.go:181] (0xc000141ef0) (0xc004311a40) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 Jan 13 00:31:12.579: INFO: 
ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.12 http://127.0.0.1:54321/hostname] Namespace:sched-pred-8421 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 00:31:12.579: INFO: >>> kubeConfig: /root/.kube/config I0113 00:31:12.612702 7 log.go:181] (0xc001f4c8f0) (0xc0013c5040) Create stream I0113 00:31:12.612729 7 log.go:181] (0xc001f4c8f0) (0xc0013c5040) Stream added, broadcasting: 1 I0113 00:31:12.614975 7 log.go:181] (0xc001f4c8f0) Reply frame received for 1 I0113 00:31:12.615020 7 log.go:181] (0xc001f4c8f0) (0xc0005dd220) Create stream I0113 00:31:12.615037 7 log.go:181] (0xc001f4c8f0) (0xc0005dd220) Stream added, broadcasting: 3 I0113 00:31:12.616221 7 log.go:181] (0xc001f4c8f0) Reply frame received for 3 I0113 00:31:12.616256 7 log.go:181] (0xc001f4c8f0) (0xc004311ae0) Create stream I0113 00:31:12.616267 7 log.go:181] (0xc001f4c8f0) (0xc004311ae0) Stream added, broadcasting: 5 I0113 00:31:12.617446 7 log.go:181] (0xc001f4c8f0) Reply frame received for 5 I0113 00:31:12.680164 7 log.go:181] (0xc001f4c8f0) Data frame received for 5 I0113 00:31:12.680211 7 log.go:181] (0xc004311ae0) (5) Data frame handling I0113 00:31:12.680234 7 log.go:181] (0xc004311ae0) (5) Data frame sent I0113 00:31:12.680250 7 log.go:181] (0xc001f4c8f0) Data frame received for 5 I0113 00:31:12.680258 7 log.go:181] (0xc004311ae0) (5) Data frame handling I0113 00:31:12.680283 7 log.go:181] (0xc004311ae0) (5) Data frame sent I0113 00:31:12.680312 7 log.go:181] (0xc001f4c8f0) Data frame received for 5 I0113 00:31:12.680329 7 log.go:181] (0xc004311ae0) (5) Data frame handling I0113 00:31:12.680344 7 log.go:181] (0xc004311ae0) (5) Data frame sent I0113 00:31:12.680376 7 log.go:181] (0xc001f4c8f0) Data frame received for 5 I0113 00:31:12.680397 7 log.go:181] (0xc004311ae0) (5) Data frame handling I0113 00:31:12.680419 7 log.go:181] (0xc004311ae0) (5) Data frame sent I0113 00:31:12.680435 7 log.go:181] (0xc001f4c8f0) Data frame received for 5 I0113 00:31:12.680452 7 log.go:181] (0xc004311ae0) (5) Data frame handling I0113 00:31:12.680490 7 log.go:181] (0xc001f4c8f0) Data frame received for 3 I0113 00:31:12.680529 7 log.go:181] (0xc0005dd220) (3) Data frame handling I0113 00:31:12.680540 7 log.go:181] (0xc0005dd220) (3) Data frame sent I0113 00:31:12.680563 7 log.go:181] (0xc004311ae0) (5) Data frame sent I0113 00:31:12.680575 7 log.go:181] (0xc001f4c8f0) Data frame received for 5 I0113 00:31:12.680602 7 log.go:181] (0xc004311ae0) (5) Data frame handling I0113 00:31:12.680612 7 log.go:181] (0xc004311ae0) (5) Data frame sent I0113 00:31:12.681397 7 log.go:181] (0xc001f4c8f0) Data frame received for 5 I0113 00:31:12.681476 7 log.go:181] (0xc004311ae0) (5) Data frame handling I0113 00:31:12.681520 7 log.go:181] (0xc001f4c8f0) Data frame received for 3 I0113 00:31:12.681543 7 log.go:181] (0xc0005dd220) (3) Data frame handling I0113 00:31:12.682692 7 log.go:181] (0xc001f4c8f0) Data frame received for 1 I0113 00:31:12.682712 7 log.go:181] (0xc0013c5040) (1) Data frame handling I0113 00:31:12.682721 7 log.go:181] (0xc0013c5040) (1) Data frame sent I0113 00:31:12.682730 7 log.go:181] (0xc001f4c8f0) (0xc0013c5040) Stream removed, broadcasting: 1 I0113 00:31:12.682742 7 log.go:181] (0xc001f4c8f0) Go away received I0113 00:31:12.682823 7 log.go:181] (0xc001f4c8f0) (0xc0013c5040) Stream removed, broadcasting: 1 I0113 00:31:12.682840 7 log.go:181] (0xc001f4c8f0) (0xc0005dd220) Stream removed, broadcasting: 3 
I0113 00:31:12.682850 7 log.go:181] (0xc001f4c8f0) (0xc004311ae0) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.12, port: 54321 Jan 13 00:31:12.682: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.12:54321/hostname] Namespace:sched-pred-8421 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 00:31:12.682: INFO: >>> kubeConfig: /root/.kube/config I0113 00:31:12.714637 7 log.go:181] (0xc001f4cfd0) (0xc0013c5f40) Create stream I0113 00:31:12.714663 7 log.go:181] (0xc001f4cfd0) (0xc0013c5f40) Stream added, broadcasting: 1 I0113 00:31:12.716657 7 log.go:181] (0xc001f4cfd0) Reply frame received for 1 I0113 00:31:12.716699 7 log.go:181] (0xc001f4cfd0) (0xc0005dd540) Create stream I0113 00:31:12.716716 7 log.go:181] (0xc001f4cfd0) (0xc0005dd540) Stream added, broadcasting: 3 I0113 00:31:12.717737 7 log.go:181] (0xc001f4cfd0) Reply frame received for 3 I0113 00:31:12.717797 7 log.go:181] (0xc001f4cfd0) (0xc004311b80) Create stream I0113 00:31:12.717819 7 log.go:181] (0xc001f4cfd0) (0xc004311b80) Stream added, broadcasting: 5 I0113 00:31:12.718803 7 log.go:181] (0xc001f4cfd0) Reply frame received for 5 I0113 00:31:12.792619 7 log.go:181] (0xc001f4cfd0) Data frame received for 5 I0113 00:31:12.792652 7 log.go:181] (0xc004311b80) (5) Data frame handling I0113 00:31:12.792666 7 log.go:181] (0xc004311b80) (5) Data frame sent I0113 00:31:12.792675 7 log.go:181] (0xc001f4cfd0) Data frame received for 5 I0113 00:31:12.792681 7 log.go:181] (0xc004311b80) (5) Data frame handling I0113 00:31:12.792698 7 log.go:181] (0xc004311b80) (5) Data frame sent I0113 00:31:12.792707 7 log.go:181] (0xc001f4cfd0) Data frame received for 5 I0113 00:31:12.792713 7 log.go:181] (0xc004311b80) (5) Data frame handling I0113 00:31:12.792721 7 log.go:181] (0xc004311b80) (5) Data frame sent I0113 00:31:12.792744 7 log.go:181] (0xc001f4cfd0) Data frame received for 5 I0113 00:31:12.792754 7 log.go:181] (0xc004311b80) (5) Data frame handling I0113 00:31:12.792761 7 log.go:181] (0xc004311b80) (5) Data frame sent I0113 00:31:12.792765 7 log.go:181] (0xc001f4cfd0) Data frame received for 5 I0113 00:31:12.792769 7 log.go:181] (0xc004311b80) (5) Data frame handling I0113 00:31:12.792775 7 log.go:181] (0xc004311b80) (5) Data frame sent I0113 00:31:12.792829 7 log.go:181] (0xc001f4cfd0) Data frame received for 5 I0113 00:31:12.792991 7 log.go:181] (0xc004311b80) (5) Data frame handling I0113 00:31:12.793023 7 log.go:181] (0xc004311b80) (5) Data frame sent I0113 00:31:12.793163 7 log.go:181] (0xc001f4cfd0) Data frame received for 5 I0113 00:31:12.793177 7 log.go:181] (0xc004311b80) (5) Data frame handling I0113 00:31:12.793182 7 log.go:181] (0xc004311b80) (5) Data frame sent I0113 00:31:12.793186 7 log.go:181] (0xc001f4cfd0) Data frame received for 5 I0113 00:31:12.793190 7 log.go:181] (0xc004311b80) (5) Data frame handling I0113 00:31:12.793197 7 log.go:181] (0xc004311b80) (5) Data frame sent I0113 00:31:12.793235 7 log.go:181] (0xc001f4cfd0) Data frame received for 5 I0113 00:31:12.793257 7 log.go:181] (0xc004311b80) (5) Data frame handling I0113 00:31:12.793276 7 log.go:181] (0xc004311b80) (5) Data frame sent I0113 00:31:12.793284 7 log.go:181] (0xc001f4cfd0) Data frame received for 5 I0113 00:31:12.793290 7 log.go:181] (0xc004311b80) (5) Data frame handling I0113 00:31:12.793303 7 log.go:181] (0xc004311b80) (5) Data frame sent I0113 00:31:12.793698 7 
log.go:181] (0xc001f4cfd0) Data frame received for 5 I0113 00:31:12.793729 7 log.go:181] (0xc004311b80) (5) Data frame handling I0113 00:31:12.793760 7 log.go:181] (0xc004311b80) (5) Data frame sent I0113 00:31:12.793792 7 log.go:181] (0xc001f4cfd0) Data frame received for 3 I0113 00:31:12.793824 7 log.go:181] (0xc0005dd540) (3) Data frame handling I0113 00:31:12.793845 7 log.go:181] (0xc0005dd540) (3) Data frame sent I0113 00:31:12.794066 7 log.go:181] (0xc001f4cfd0) Data frame received for 5 I0113 00:31:12.794087 7 log.go:181] (0xc004311b80) (5) Data frame handling I0113 00:31:12.794434 7 log.go:181] (0xc001f4cfd0) Data frame received for 3 I0113 00:31:12.794461 7 log.go:181] (0xc0005dd540) (3) Data frame handling I0113 00:31:12.795704 7 log.go:181] (0xc001f4cfd0) Data frame received for 1 I0113 00:31:12.795722 7 log.go:181] (0xc0013c5f40) (1) Data frame handling I0113 00:31:12.795736 7 log.go:181] (0xc0013c5f40) (1) Data frame sent I0113 00:31:12.795749 7 log.go:181] (0xc001f4cfd0) (0xc0013c5f40) Stream removed, broadcasting: 1 I0113 00:31:12.795818 7 log.go:181] (0xc001f4cfd0) (0xc0013c5f40) Stream removed, broadcasting: 1 I0113 00:31:12.795837 7 log.go:181] (0xc001f4cfd0) (0xc0005dd540) Stream removed, broadcasting: 3 I0113 00:31:12.795848 7 log.go:181] (0xc001f4cfd0) (0xc004311b80) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.12, port: 54321 UDP Jan 13 00:31:12.795: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.12 54321] Namespace:sched-pred-8421 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 00:31:12.795: INFO: >>> kubeConfig: /root/.kube/config I0113 00:31:12.796478 7 log.go:181] (0xc001f4cfd0) Go away received I0113 00:31:12.825357 7 log.go:181] (0xc000731ce0) (0xc0005dd7c0) Create stream I0113 00:31:12.825390 7 log.go:181] (0xc000731ce0) (0xc0005dd7c0) Stream added, broadcasting: 1 I0113 00:31:12.827513 7 log.go:181] (0xc000731ce0) Reply frame received for 1 I0113 00:31:12.827543 7 log.go:181] (0xc000731ce0) (0xc0005dd900) Create stream I0113 00:31:12.827556 7 log.go:181] (0xc000731ce0) (0xc0005dd900) Stream added, broadcasting: 3 I0113 00:31:12.828447 7 log.go:181] (0xc000731ce0) Reply frame received for 3 I0113 00:31:12.828502 7 log.go:181] (0xc000731ce0) (0xc001912aa0) Create stream I0113 00:31:12.828529 7 log.go:181] (0xc000731ce0) (0xc001912aa0) Stream added, broadcasting: 5 I0113 00:31:12.829527 7 log.go:181] (0xc000731ce0) Reply frame received for 5 I0113 00:31:17.875628 7 log.go:181] (0xc000731ce0) Data frame received for 5 I0113 00:31:17.875668 7 log.go:181] (0xc001912aa0) (5) Data frame handling I0113 00:31:17.875696 7 log.go:181] (0xc001912aa0) (5) Data frame sent I0113 00:31:17.875853 7 log.go:181] (0xc000731ce0) Data frame received for 5 I0113 00:31:17.875888 7 log.go:181] (0xc001912aa0) (5) Data frame handling I0113 00:31:17.876156 7 log.go:181] (0xc000731ce0) Data frame received for 3 I0113 00:31:17.876182 7 log.go:181] (0xc0005dd900) (3) Data frame handling I0113 00:31:17.878041 7 log.go:181] (0xc000731ce0) Data frame received for 1 I0113 00:31:17.878073 7 log.go:181] (0xc0005dd7c0) (1) Data frame handling I0113 00:31:17.878095 7 log.go:181] (0xc0005dd7c0) (1) Data frame sent I0113 00:31:17.878111 7 log.go:181] (0xc000731ce0) (0xc0005dd7c0) Stream removed, broadcasting: 1 I0113 00:31:17.878141 7 log.go:181] (0xc000731ce0) Go away received I0113 00:31:17.878202 7 log.go:181] (0xc000731ce0) 
(0xc0005dd7c0) Stream removed, broadcasting: 1 I0113 00:31:17.878225 7 log.go:181] (0xc000731ce0) (0xc0005dd900) Stream removed, broadcasting: 3 I0113 00:31:17.878233 7 log.go:181] (0xc000731ce0) (0xc001912aa0) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 Jan 13 00:31:17.878: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.12 http://127.0.0.1:54321/hostname] Namespace:sched-pred-8421 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 00:31:17.878: INFO: >>> kubeConfig: /root/.kube/config I0113 00:31:17.923753 7 log.go:181] (0xc002a824d0) (0xc004311e00) Create stream I0113 00:31:17.923784 7 log.go:181] (0xc002a824d0) (0xc004311e00) Stream added, broadcasting: 1 I0113 00:31:17.926371 7 log.go:181] (0xc002a824d0) Reply frame received for 1 I0113 00:31:17.926438 7 log.go:181] (0xc002a824d0) (0xc004311ea0) Create stream I0113 00:31:17.926458 7 log.go:181] (0xc002a824d0) (0xc004311ea0) Stream added, broadcasting: 3 I0113 00:31:17.927480 7 log.go:181] (0xc002a824d0) Reply frame received for 3 I0113 00:31:17.927524 7 log.go:181] (0xc002a824d0) (0xc002d84000) Create stream I0113 00:31:17.927540 7 log.go:181] (0xc002a824d0) (0xc002d84000) Stream added, broadcasting: 5 I0113 00:31:17.928614 7 log.go:181] (0xc002a824d0) Reply frame received for 5 I0113 00:31:17.985810 7 log.go:181] (0xc002a824d0) Data frame received for 5 I0113 00:31:17.985840 7 log.go:181] (0xc002d84000) (5) Data frame handling I0113 00:31:17.985862 7 log.go:181] (0xc002d84000) (5) Data frame sent I0113 00:31:17.985877 7 log.go:181] (0xc002a824d0) Data frame received for 5 I0113 00:31:17.985889 7 log.go:181] (0xc002d84000) (5) Data frame handling I0113 00:31:17.985962 7 log.go:181] (0xc002d84000) (5) Data frame sent I0113 00:31:17.986057 7 log.go:181] (0xc002a824d0) Data frame received for 5 I0113 00:31:17.986070 7 log.go:181] (0xc002d84000) (5) Data frame handling I0113 00:31:17.986080 7 log.go:181] (0xc002d84000) (5) Data frame sent I0113 00:31:17.986149 7 log.go:181] (0xc002a824d0) Data frame received for 3 I0113 00:31:17.986174 7 log.go:181] (0xc004311ea0) (3) Data frame handling I0113 00:31:17.986202 7 log.go:181] (0xc002a824d0) Data frame received for 5 I0113 00:31:17.986234 7 log.go:181] (0xc002d84000) (5) Data frame handling I0113 00:31:17.986256 7 log.go:181] (0xc002d84000) (5) Data frame sent I0113 00:31:17.986277 7 log.go:181] (0xc002a824d0) Data frame received for 5 I0113 00:31:17.986296 7 log.go:181] (0xc002d84000) (5) Data frame handling I0113 00:31:17.986316 7 log.go:181] (0xc002d84000) (5) Data frame sent I0113 00:31:17.986336 7 log.go:181] (0xc004311ea0) (3) Data frame sent I0113 00:31:17.987046 7 log.go:181] (0xc002a824d0) Data frame received for 5 I0113 00:31:17.987093 7 log.go:181] (0xc002d84000) (5) Data frame handling I0113 00:31:17.987135 7 log.go:181] (0xc002a824d0) Data frame received for 3 I0113 00:31:17.987161 7 log.go:181] (0xc004311ea0) (3) Data frame handling I0113 00:31:17.988819 7 log.go:181] (0xc002a824d0) Data frame received for 1 I0113 00:31:17.988903 7 log.go:181] (0xc004311e00) (1) Data frame handling I0113 00:31:17.988920 7 log.go:181] (0xc004311e00) (1) Data frame sent I0113 00:31:17.988935 7 log.go:181] (0xc002a824d0) (0xc004311e00) Stream removed, broadcasting: 1 I0113 00:31:17.988989 7 log.go:181] (0xc002a824d0) (0xc004311e00) Stream removed, broadcasting: 1 I0113 00:31:17.989010 7 
log.go:181] (0xc002a824d0) Go away received I0113 00:31:17.989075 7 log.go:181] (0xc002a824d0) (0xc004311ea0) Stream removed, broadcasting: 3 I0113 00:31:17.989128 7 log.go:181] (0xc002a824d0) (0xc002d84000) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.12, port: 54321 Jan 13 00:31:17.989: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.12:54321/hostname] Namespace:sched-pred-8421 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 00:31:17.989: INFO: >>> kubeConfig: /root/.kube/config I0113 00:31:18.020391 7 log.go:181] (0xc000947810) (0xc0005abb80) Create stream I0113 00:31:18.020446 7 log.go:181] (0xc000947810) (0xc0005abb80) Stream added, broadcasting: 1 I0113 00:31:18.023067 7 log.go:181] (0xc000947810) Reply frame received for 1 I0113 00:31:18.023115 7 log.go:181] (0xc000947810) (0xc004311f40) Create stream I0113 00:31:18.023135 7 log.go:181] (0xc000947810) (0xc004311f40) Stream added, broadcasting: 3 I0113 00:31:18.024121 7 log.go:181] (0xc000947810) Reply frame received for 3 I0113 00:31:18.024155 7 log.go:181] (0xc000947810) (0xc001912b40) Create stream I0113 00:31:18.024171 7 log.go:181] (0xc000947810) (0xc001912b40) Stream added, broadcasting: 5 I0113 00:31:18.025299 7 log.go:181] (0xc000947810) Reply frame received for 5 I0113 00:31:18.085281 7 log.go:181] (0xc000947810) Data frame received for 5 I0113 00:31:18.085314 7 log.go:181] (0xc001912b40) (5) Data frame handling I0113 00:31:18.085326 7 log.go:181] (0xc001912b40) (5) Data frame sent I0113 00:31:18.085333 7 log.go:181] (0xc000947810) Data frame received for 5 I0113 00:31:18.085343 7 log.go:181] (0xc001912b40) (5) Data frame handling I0113 00:31:18.085362 7 log.go:181] (0xc001912b40) (5) Data frame sent I0113 00:31:18.085373 7 log.go:181] (0xc000947810) Data frame received for 5 I0113 00:31:18.085384 7 log.go:181] (0xc001912b40) (5) Data frame handling I0113 00:31:18.085397 7 log.go:181] (0xc001912b40) (5) Data frame sent I0113 00:31:18.085405 7 log.go:181] (0xc000947810) Data frame received for 5 I0113 00:31:18.085412 7 log.go:181] (0xc001912b40) (5) Data frame handling I0113 00:31:18.085424 7 log.go:181] (0xc001912b40) (5) Data frame sent I0113 00:31:18.085435 7 log.go:181] (0xc000947810) Data frame received for 5 I0113 00:31:18.085444 7 log.go:181] (0xc001912b40) (5) Data frame handling I0113 00:31:18.085455 7 log.go:181] (0xc001912b40) (5) Data frame sent I0113 00:31:18.085468 7 log.go:181] (0xc000947810) Data frame received for 5 I0113 00:31:18.085478 7 log.go:181] (0xc001912b40) (5) Data frame handling I0113 00:31:18.085488 7 log.go:181] (0xc001912b40) (5) Data frame sent I0113 00:31:18.085499 7 log.go:181] (0xc000947810) Data frame received for 5 I0113 00:31:18.085508 7 log.go:181] (0xc001912b40) (5) Data frame handling I0113 00:31:18.085530 7 log.go:181] (0xc001912b40) (5) Data frame sent I0113 00:31:18.085542 7 log.go:181] (0xc000947810) Data frame received for 5 I0113 00:31:18.085560 7 log.go:181] (0xc001912b40) (5) Data frame handling I0113 00:31:18.085584 7 log.go:181] (0xc001912b40) (5) Data frame sent I0113 00:31:18.085595 7 log.go:181] (0xc000947810) Data frame received for 5 I0113 00:31:18.085605 7 log.go:181] (0xc001912b40) (5) Data frame handling I0113 00:31:18.085643 7 log.go:181] (0xc001912b40) (5) Data frame sent I0113 00:31:18.085667 7 log.go:181] (0xc000947810) Data frame received for 5 I0113 00:31:18.085679 7 
log.go:181] (0xc001912b40) (5) Data frame handling I0113 00:31:18.085710 7 log.go:181] (0xc001912b40) (5) Data frame sent I0113 00:31:18.085723 7 log.go:181] (0xc000947810) Data frame received for 5 I0113 00:31:18.085732 7 log.go:181] (0xc001912b40) (5) Data frame handling I0113 00:31:18.085745 7 log.go:181] (0xc001912b40) (5) Data frame sent I0113 00:31:18.085761 7 log.go:181] (0xc000947810) Data frame received for 5 I0113 00:31:18.085798 7 log.go:181] (0xc001912b40) (5) Data frame handling I0113 00:31:18.085852 7 log.go:181] (0xc001912b40) (5) Data frame sent I0113 00:31:18.085884 7 log.go:181] (0xc000947810) Data frame received for 5 I0113 00:31:18.085937 7 log.go:181] (0xc001912b40) (5) Data frame handling I0113 00:31:18.085976 7 log.go:181] (0xc001912b40) (5) Data frame sent I0113 00:31:18.086007 7 log.go:181] (0xc000947810) Data frame received for 5 I0113 00:31:18.086020 7 log.go:181] (0xc001912b40) (5) Data frame handling I0113 00:31:18.086032 7 log.go:181] (0xc001912b40) (5) Data frame sent I0113 00:31:18.086042 7 log.go:181] (0xc000947810) Data frame received for 5 I0113 00:31:18.086052 7 log.go:181] (0xc001912b40) (5) Data frame handling I0113 00:31:18.086066 7 log.go:181] (0xc001912b40) (5) Data frame sent I0113 00:31:18.086519 7 log.go:181] (0xc000947810) Data frame received for 3 I0113 00:31:18.086549 7 log.go:181] (0xc004311f40) (3) Data frame handling I0113 00:31:18.086566 7 log.go:181] (0xc004311f40) (3) Data frame sent I0113 00:31:18.086581 7 log.go:181] (0xc000947810) Data frame received for 5 I0113 00:31:18.086624 7 log.go:181] (0xc001912b40) (5) Data frame handling I0113 00:31:18.086656 7 log.go:181] (0xc001912b40) (5) Data frame sent I0113 00:31:18.086926 7 log.go:181] (0xc000947810) Data frame received for 5 I0113 00:31:18.086942 7 log.go:181] (0xc001912b40) (5) Data frame handling I0113 00:31:18.087207 7 log.go:181] (0xc000947810) Data frame received for 3 I0113 00:31:18.087226 7 log.go:181] (0xc004311f40) (3) Data frame handling I0113 00:31:18.088619 7 log.go:181] (0xc000947810) Data frame received for 1 I0113 00:31:18.088636 7 log.go:181] (0xc0005abb80) (1) Data frame handling I0113 00:31:18.088652 7 log.go:181] (0xc0005abb80) (1) Data frame sent I0113 00:31:18.088662 7 log.go:181] (0xc000947810) (0xc0005abb80) Stream removed, broadcasting: 1 I0113 00:31:18.088730 7 log.go:181] (0xc000947810) (0xc0005abb80) Stream removed, broadcasting: 1 I0113 00:31:18.088744 7 log.go:181] (0xc000947810) (0xc004311f40) Stream removed, broadcasting: 3 I0113 00:31:18.088922 7 log.go:181] (0xc000947810) Go away received I0113 00:31:18.088958 7 log.go:181] (0xc000947810) (0xc001912b40) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.12, port: 54321 UDP Jan 13 00:31:18.089: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.12 54321] Namespace:sched-pred-8421 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 00:31:18.089: INFO: >>> kubeConfig: /root/.kube/config I0113 00:31:18.116157 7 log.go:181] (0xc002a82bb0) (0xc002b94460) Create stream I0113 00:31:18.116189 7 log.go:181] (0xc002a82bb0) (0xc002b94460) Stream added, broadcasting: 1 I0113 00:31:18.121500 7 log.go:181] (0xc002a82bb0) Reply frame received for 1 I0113 00:31:18.121581 7 log.go:181] (0xc002a82bb0) (0xc002d84140) Create stream I0113 00:31:18.121616 7 log.go:181] (0xc002a82bb0) (0xc002d84140) Stream added, broadcasting: 3 I0113 00:31:18.122667 7 log.go:181] 
(0xc002a82bb0) Reply frame received for 3 I0113 00:31:18.122710 7 log.go:181] (0xc002a82bb0) (0xc0005abcc0) Create stream I0113 00:31:18.122725 7 log.go:181] (0xc002a82bb0) (0xc0005abcc0) Stream added, broadcasting: 5 I0113 00:31:18.123575 7 log.go:181] (0xc002a82bb0) Reply frame received for 5 I0113 00:31:23.187871 7 log.go:181] (0xc002a82bb0) Data frame received for 5 I0113 00:31:23.187917 7 log.go:181] (0xc0005abcc0) (5) Data frame handling I0113 00:31:23.187951 7 log.go:181] (0xc0005abcc0) (5) Data frame sent I0113 00:31:23.188239 7 log.go:181] (0xc002a82bb0) Data frame received for 5 I0113 00:31:23.188276 7 log.go:181] (0xc0005abcc0) (5) Data frame handling I0113 00:31:23.188303 7 log.go:181] (0xc002a82bb0) Data frame received for 3 I0113 00:31:23.188320 7 log.go:181] (0xc002d84140) (3) Data frame handling I0113 00:31:23.190212 7 log.go:181] (0xc002a82bb0) Data frame received for 1 I0113 00:31:23.190236 7 log.go:181] (0xc002b94460) (1) Data frame handling I0113 00:31:23.190256 7 log.go:181] (0xc002b94460) (1) Data frame sent I0113 00:31:23.190369 7 log.go:181] (0xc002a82bb0) (0xc002b94460) Stream removed, broadcasting: 1 I0113 00:31:23.190463 7 log.go:181] (0xc002a82bb0) (0xc002b94460) Stream removed, broadcasting: 1 I0113 00:31:23.190508 7 log.go:181] (0xc002a82bb0) (0xc002d84140) Stream removed, broadcasting: 3 I0113 00:31:23.190530 7 log.go:181] (0xc002a82bb0) (0xc0005abcc0) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 I0113 00:31:23.190578 7 log.go:181] (0xc002a82bb0) Go away received Jan 13 00:31:23.190: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.12 http://127.0.0.1:54321/hostname] Namespace:sched-pred-8421 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 00:31:23.190: INFO: >>> kubeConfig: /root/.kube/config I0113 00:31:23.231219 7 log.go:181] (0xc0063a2000) (0xc0020de000) Create stream I0113 00:31:23.231262 7 log.go:181] (0xc0063a2000) (0xc0020de000) Stream added, broadcasting: 1 I0113 00:31:23.233977 7 log.go:181] (0xc0063a2000) Reply frame received for 1 I0113 00:31:23.234020 7 log.go:181] (0xc0063a2000) (0xc002b94500) Create stream I0113 00:31:23.234032 7 log.go:181] (0xc0063a2000) (0xc002b94500) Stream added, broadcasting: 3 I0113 00:31:23.235013 7 log.go:181] (0xc0063a2000) Reply frame received for 3 I0113 00:31:23.235040 7 log.go:181] (0xc0063a2000) (0xc0005dda40) Create stream I0113 00:31:23.235049 7 log.go:181] (0xc0063a2000) (0xc0005dda40) Stream added, broadcasting: 5 I0113 00:31:23.235752 7 log.go:181] (0xc0063a2000) Reply frame received for 5 I0113 00:31:23.326016 7 log.go:181] (0xc0063a2000) Data frame received for 5 I0113 00:31:23.326053 7 log.go:181] (0xc0005dda40) (5) Data frame handling I0113 00:31:23.326079 7 log.go:181] (0xc0005dda40) (5) Data frame sent I0113 00:31:23.326094 7 log.go:181] (0xc0063a2000) Data frame received for 5 I0113 00:31:23.326107 7 log.go:181] (0xc0005dda40) (5) Data frame handling I0113 00:31:23.326129 7 log.go:181] (0xc0005dda40) (5) Data frame sent I0113 00:31:23.326142 7 log.go:181] (0xc0063a2000) Data frame received for 5 I0113 00:31:23.326155 7 log.go:181] (0xc0005dda40) (5) Data frame handling I0113 00:31:23.326172 7 log.go:181] (0xc0005dda40) (5) Data frame sent I0113 00:31:23.326185 7 log.go:181] (0xc0063a2000) Data frame received for 5 I0113 00:31:23.326198 7 log.go:181] (0xc0005dda40) (5) Data frame handling I0113 
00:31:23.326223 7 log.go:181] (0xc0005dda40) (5) Data frame sent I0113 00:31:23.326237 7 log.go:181] (0xc0063a2000) Data frame received for 5 I0113 00:31:23.326250 7 log.go:181] (0xc0005dda40) (5) Data frame handling I0113 00:31:23.326266 7 log.go:181] (0xc0005dda40) (5) Data frame sent I0113 00:31:23.326426 7 log.go:181] (0xc0063a2000) Data frame received for 5 I0113 00:31:23.326456 7 log.go:181] (0xc0005dda40) (5) Data frame handling I0113 00:31:23.326475 7 log.go:181] (0xc0005dda40) (5) Data frame sent I0113 00:31:23.326490 7 log.go:181] (0xc0063a2000) Data frame received for 5 I0113 00:31:23.326503 7 log.go:181] (0xc0005dda40) (5) Data frame handling I0113 00:31:23.326518 7 log.go:181] (0xc0063a2000) Data frame received for 3 I0113 00:31:23.326531 7 log.go:181] (0xc002b94500) (3) Data frame handling I0113 00:31:23.326558 7 log.go:181] (0xc002b94500) (3) Data frame sent I0113 00:31:23.326580 7 log.go:181] (0xc0005dda40) (5) Data frame sent I0113 00:31:23.327205 7 log.go:181] (0xc0063a2000) Data frame received for 5 I0113 00:31:23.327226 7 log.go:181] (0xc0005dda40) (5) Data frame handling I0113 00:31:23.327319 7 log.go:181] (0xc0063a2000) Data frame received for 3 I0113 00:31:23.327365 7 log.go:181] (0xc002b94500) (3) Data frame handling I0113 00:31:23.329043 7 log.go:181] (0xc0063a2000) Data frame received for 1 I0113 00:31:23.329088 7 log.go:181] (0xc0020de000) (1) Data frame handling I0113 00:31:23.329121 7 log.go:181] (0xc0020de000) (1) Data frame sent I0113 00:31:23.329152 7 log.go:181] (0xc0063a2000) (0xc0020de000) Stream removed, broadcasting: 1 I0113 00:31:23.329210 7 log.go:181] (0xc0063a2000) Go away received I0113 00:31:23.329245 7 log.go:181] (0xc0063a2000) (0xc0020de000) Stream removed, broadcasting: 1 I0113 00:31:23.329259 7 log.go:181] (0xc0063a2000) (0xc002b94500) Stream removed, broadcasting: 3 I0113 00:31:23.329265 7 log.go:181] (0xc0063a2000) (0xc0005dda40) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.12, port: 54321 Jan 13 00:31:23.329: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.12:54321/hostname] Namespace:sched-pred-8421 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 00:31:23.329: INFO: >>> kubeConfig: /root/.kube/config I0113 00:31:23.363100 7 log.go:181] (0xc002dff6b0) (0xc001912c80) Create stream I0113 00:31:23.363126 7 log.go:181] (0xc002dff6b0) (0xc001912c80) Stream added, broadcasting: 1 I0113 00:31:23.365315 7 log.go:181] (0xc002dff6b0) Reply frame received for 1 I0113 00:31:23.365362 7 log.go:181] (0xc002dff6b0) (0xc0020de0a0) Create stream I0113 00:31:23.365378 7 log.go:181] (0xc002dff6b0) (0xc0020de0a0) Stream added, broadcasting: 3 I0113 00:31:23.366417 7 log.go:181] (0xc002dff6b0) Reply frame received for 3 I0113 00:31:23.366456 7 log.go:181] (0xc002dff6b0) (0xc002b94780) Create stream I0113 00:31:23.366477 7 log.go:181] (0xc002dff6b0) (0xc002b94780) Stream added, broadcasting: 5 I0113 00:31:23.367448 7 log.go:181] (0xc002dff6b0) Reply frame received for 5 I0113 00:31:23.445757 7 log.go:181] (0xc002dff6b0) Data frame received for 5 I0113 00:31:23.445794 7 log.go:181] (0xc002b94780) (5) Data frame handling I0113 00:31:23.445808 7 log.go:181] (0xc002b94780) (5) Data frame sent I0113 00:31:23.445835 7 log.go:181] (0xc002dff6b0) Data frame received for 5 I0113 00:31:23.445889 7 log.go:181] (0xc002b94780) (5) Data frame handling I0113 00:31:23.445911 7 log.go:181] 
(0xc002b94780) (5) Data frame sent I0113 00:31:23.445932 7 log.go:181] (0xc002dff6b0) Data frame received for 5 I0113 00:31:23.445946 7 log.go:181] (0xc002b94780) (5) Data frame handling I0113 00:31:23.445987 7 log.go:181] (0xc002b94780) (5) Data frame sent I0113 00:31:23.446017 7 log.go:181] (0xc002dff6b0) Data frame received for 5 I0113 00:31:23.446049 7 log.go:181] (0xc002b94780) (5) Data frame handling I0113 00:31:23.446084 7 log.go:181] (0xc002b94780) (5) Data frame sent I0113 00:31:23.446107 7 log.go:181] (0xc002dff6b0) Data frame received for 5 I0113 00:31:23.446128 7 log.go:181] (0xc002b94780) (5) Data frame handling I0113 00:31:23.446161 7 log.go:181] (0xc002b94780) (5) Data frame sent I0113 00:31:23.446180 7 log.go:181] (0xc002dff6b0) Data frame received for 5 I0113 00:31:23.446208 7 log.go:181] (0xc002b94780) (5) Data frame handling I0113 00:31:23.446239 7 log.go:181] (0xc002b94780) (5) Data frame sent I0113 00:31:23.446263 7 log.go:181] (0xc002dff6b0) Data frame received for 5 I0113 00:31:23.446285 7 log.go:181] (0xc002b94780) (5) Data frame handling I0113 00:31:23.446305 7 log.go:181] (0xc002b94780) (5) Data frame sent I0113 00:31:23.446318 7 log.go:181] (0xc002dff6b0) Data frame received for 5 I0113 00:31:23.446334 7 log.go:181] (0xc002b94780) (5) Data frame handling I0113 00:31:23.446347 7 log.go:181] (0xc002b94780) (5) Data frame sent I0113 00:31:23.446361 7 log.go:181] (0xc002dff6b0) Data frame received for 5 I0113 00:31:23.446375 7 log.go:181] (0xc002b94780) (5) Data frame handling I0113 00:31:23.446400 7 log.go:181] (0xc002b94780) (5) Data frame sent I0113 00:31:23.446452 7 log.go:181] (0xc002dff6b0) Data frame received for 5 I0113 00:31:23.446486 7 log.go:181] (0xc002b94780) (5) Data frame handling I0113 00:31:23.446506 7 log.go:181] (0xc002b94780) (5) Data frame sent I0113 00:31:23.446522 7 log.go:181] (0xc002dff6b0) Data frame received for 5 I0113 00:31:23.446537 7 log.go:181] (0xc002b94780) (5) Data frame handling I0113 00:31:23.446551 7 log.go:181] (0xc002b94780) (5) Data frame sent I0113 00:31:23.446565 7 log.go:181] (0xc002dff6b0) Data frame received for 5 I0113 00:31:23.446578 7 log.go:181] (0xc002b94780) (5) Data frame handling I0113 00:31:23.446600 7 log.go:181] (0xc002b94780) (5) Data frame sent I0113 00:31:23.446613 7 log.go:181] (0xc002dff6b0) Data frame received for 5 I0113 00:31:23.446626 7 log.go:181] (0xc002b94780) (5) Data frame handling I0113 00:31:23.446643 7 log.go:181] (0xc002b94780) (5) Data frame sent I0113 00:31:23.446655 7 log.go:181] (0xc002dff6b0) Data frame received for 5 I0113 00:31:23.446671 7 log.go:181] (0xc002b94780) (5) Data frame handling I0113 00:31:23.446688 7 log.go:181] (0xc002dff6b0) Data frame received for 3 I0113 00:31:23.446702 7 log.go:181] (0xc0020de0a0) (3) Data frame handling I0113 00:31:23.446716 7 log.go:181] (0xc0020de0a0) (3) Data frame sent I0113 00:31:23.446752 7 log.go:181] (0xc002b94780) (5) Data frame sent I0113 00:31:23.447844 7 log.go:181] (0xc002dff6b0) Data frame received for 5 I0113 00:31:23.447898 7 log.go:181] (0xc002b94780) (5) Data frame handling I0113 00:31:23.447932 7 log.go:181] (0xc002dff6b0) Data frame received for 3 I0113 00:31:23.447947 7 log.go:181] (0xc0020de0a0) (3) Data frame handling I0113 00:31:23.449502 7 log.go:181] (0xc002dff6b0) Data frame received for 1 I0113 00:31:23.449529 7 log.go:181] (0xc001912c80) (1) Data frame handling I0113 00:31:23.449560 7 log.go:181] (0xc001912c80) (1) Data frame sent I0113 00:31:23.449580 7 log.go:181] (0xc002dff6b0) (0xc001912c80) Stream removed, 
broadcasting: 1 I0113 00:31:23.449595 7 log.go:181] (0xc002dff6b0) Go away received I0113 00:31:23.449709 7 log.go:181] (0xc002dff6b0) (0xc001912c80) Stream removed, broadcasting: 1 I0113 00:31:23.449730 7 log.go:181] (0xc002dff6b0) (0xc0020de0a0) Stream removed, broadcasting: 3 I0113 00:31:23.449742 7 log.go:181] (0xc002dff6b0) (0xc002b94780) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.12, port: 54321 UDP Jan 13 00:31:23.449: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.12 54321] Namespace:sched-pred-8421 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 00:31:23.449: INFO: >>> kubeConfig: /root/.kube/config I0113 00:31:23.477178 7 log.go:181] (0xc001f4db80) (0xc002d843c0) Create stream I0113 00:31:23.477200 7 log.go:181] (0xc001f4db80) (0xc002d843c0) Stream added, broadcasting: 1 I0113 00:31:23.479146 7 log.go:181] (0xc001f4db80) Reply frame received for 1 I0113 00:31:23.479195 7 log.go:181] (0xc001f4db80) (0xc0020de140) Create stream I0113 00:31:23.479210 7 log.go:181] (0xc001f4db80) (0xc0020de140) Stream added, broadcasting: 3 I0113 00:31:23.480121 7 log.go:181] (0xc001f4db80) Reply frame received for 3 I0113 00:31:23.480168 7 log.go:181] (0xc001f4db80) (0xc002b94820) Create stream I0113 00:31:23.480185 7 log.go:181] (0xc001f4db80) (0xc002b94820) Stream added, broadcasting: 5 I0113 00:31:23.481115 7 log.go:181] (0xc001f4db80) Reply frame received for 5 I0113 00:31:28.548372 7 log.go:181] (0xc001f4db80) Data frame received for 5 I0113 00:31:28.548480 7 log.go:181] (0xc002b94820) (5) Data frame handling I0113 00:31:28.548517 7 log.go:181] (0xc002b94820) (5) Data frame sent I0113 00:31:28.548536 7 log.go:181] (0xc001f4db80) Data frame received for 5 I0113 00:31:28.548555 7 log.go:181] (0xc002b94820) (5) Data frame handling I0113 00:31:28.548935 7 log.go:181] (0xc001f4db80) Data frame received for 3 I0113 00:31:28.548964 7 log.go:181] (0xc0020de140) (3) Data frame handling I0113 00:31:28.550992 7 log.go:181] (0xc001f4db80) Data frame received for 1 I0113 00:31:28.551044 7 log.go:181] (0xc002d843c0) (1) Data frame handling I0113 00:31:28.551090 7 log.go:181] (0xc002d843c0) (1) Data frame sent I0113 00:31:28.551144 7 log.go:181] (0xc001f4db80) (0xc002d843c0) Stream removed, broadcasting: 1 I0113 00:31:28.551184 7 log.go:181] (0xc001f4db80) Go away received I0113 00:31:28.551275 7 log.go:181] (0xc001f4db80) (0xc002d843c0) Stream removed, broadcasting: 1 I0113 00:31:28.551302 7 log.go:181] (0xc001f4db80) (0xc0020de140) Stream removed, broadcasting: 3 I0113 00:31:28.551317 7 log.go:181] (0xc001f4db80) (0xc002b94820) Stream removed, broadcasting: 5 STEP: removing the label kubernetes.io/e2e-fe1abc32-2c52-492d-b8c1-ebe10ddc71f5 off the node leguer-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-fe1abc32-2c52-492d-b8c1-ebe10ddc71f5 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:31:28.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8421" for this suite. 
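Note: the hostPort connectivity checks logged above are plain curl/nc probes run inside the e2e-host-exec pod via ExecWithOptions. A rough manual equivalent, assuming the sched-pred-8421 namespace and the e2e-host-exec pod still exist (the suite tears them down), is:

    # TCP probe against the hostPort-backed server (same command the test runs)
    kubectl exec -n sched-pred-8421 e2e-host-exec -- \
      /bin/sh -c 'curl -g --connect-timeout 5 http://172.18.0.12:54321/hostname'

    # UDP probe with a 5 second timeout
    kubectl exec -n sched-pred-8421 e2e-host-exec -- \
      /bin/sh -c 'nc -vuz -w 5 172.18.0.12 54321'
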
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 • [SLOW TEST:47.138 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":309,"completed":259,"skipped":4461,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:31:28.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0777 on node default medium Jan 13 00:31:28.769: INFO: Waiting up to 5m0s for pod "pod-8693749f-4c1a-42dd-8ed7-be8421cc9d1c" in namespace "emptydir-4512" to be "Succeeded or Failed" Jan 13 00:31:28.808: INFO: Pod "pod-8693749f-4c1a-42dd-8ed7-be8421cc9d1c": Phase="Pending", Reason="", readiness=false. Elapsed: 38.419703ms Jan 13 00:31:30.812: INFO: Pod "pod-8693749f-4c1a-42dd-8ed7-be8421cc9d1c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043090683s Jan 13 00:31:32.838: INFO: Pod "pod-8693749f-4c1a-42dd-8ed7-be8421cc9d1c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068910698s STEP: Saw pod success Jan 13 00:31:32.838: INFO: Pod "pod-8693749f-4c1a-42dd-8ed7-be8421cc9d1c" satisfied condition "Succeeded or Failed" Jan 13 00:31:32.842: INFO: Trying to get logs from node leguer-worker pod pod-8693749f-4c1a-42dd-8ed7-be8421cc9d1c container test-container: STEP: delete the pod Jan 13 00:31:32.888: INFO: Waiting for pod pod-8693749f-4c1a-42dd-8ed7-be8421cc9d1c to disappear Jan 13 00:31:32.903: INFO: Pod pod-8693749f-4c1a-42dd-8ed7-be8421cc9d1c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:31:32.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4512" for this suite. 
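Note: the emptyDir (non-root,0777,default) case above boils down to a pod of the following shape. This is a minimal sketch, not the manifest the framework generates; the pod name, image and in-container command are illustrative:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-0777-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000        # non-root, as in the test name
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "ls -ld /test-volume && touch /test-volume/f && ls -l /test-volume/f"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir: {}           # default medium; the conformance test asserts the permission bits from inside the pod
    EOF
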
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":260,"skipped":4470,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:31:32.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0113 00:32:14.045623 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Jan 13 00:33:16.067: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. Jan 13 00:33:16.067: INFO: Deleting pod "simpletest.rc-2bxnn" in namespace "gc-8447" Jan 13 00:33:16.126: INFO: Deleting pod "simpletest.rc-2z65d" in namespace "gc-8447" Jan 13 00:33:16.148: INFO: Deleting pod "simpletest.rc-bhb2m" in namespace "gc-8447" Jan 13 00:33:16.191: INFO: Deleting pod "simpletest.rc-d6s5b" in namespace "gc-8447" Jan 13 00:33:16.265: INFO: Deleting pod "simpletest.rc-hkw28" in namespace "gc-8447" Jan 13 00:33:19.210: INFO: Deleting pod "simpletest.rc-n7684" in namespace "gc-8447" Jan 13 00:33:19.565: INFO: Deleting pod "simpletest.rc-pprx9" in namespace "gc-8447" Jan 13 00:33:19.758: INFO: Deleting pod "simpletest.rc-qqp4g" in namespace "gc-8447" Jan 13 00:33:19.792: INFO: Deleting pod "simpletest.rc-tqdql" in namespace "gc-8447" Jan 13 00:33:20.112: INFO: Deleting pod "simpletest.rc-vjsdf" in namespace "gc-8447" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:33:20.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8447" for this suite. 
• [SLOW TEST:107.677 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":309,"completed":261,"skipped":4475,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:33:20.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4649.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-4649.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4649.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4649.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-4649.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4649.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 13 00:33:31.086: INFO: DNS probes using dns-4649/dns-test-4d930c6f-0514-41bb-bb27-c8da12b4a8b2 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:33:31.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4649" for this suite. 
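Note: the /etc/hosts DNS probe above is just the wheezy/jessie loops shown in the STEP lines. The essential checks can be run by hand from a shell inside a pod in the dns-4649 namespace (names as in the log; the $$ escaping in the logged commands is only there because they are embedded in a pod spec):

    # hosts-file entries injected for the headless service and the pod's own hostname
    getent hosts dns-querier-1.dns-test-service.dns-4649.svc.cluster.local
    getent hosts dns-querier-1

    # A-record lookups for the pod's generated <ip-with-dashes>.<namespace>.pod.cluster.local name
    podARec=$(hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-4649.pod.cluster.local"}')
    dig +notcp +noall +answer +search "$podARec" A
    dig +tcp  +noall +answer +search "$podARec" A
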
• [SLOW TEST:10.600 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":309,"completed":262,"skipped":4488,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:33:31.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name secret-test-9f87d87e-3c8f-468a-9f4f-db9ba388ab0e STEP: Creating a pod to test consume secrets Jan 13 00:33:31.698: INFO: Waiting up to 5m0s for pod "pod-secrets-15b5d568-0568-4027-9929-105a19b980f9" in namespace "secrets-916" to be "Succeeded or Failed" Jan 13 00:33:31.702: INFO: Pod "pod-secrets-15b5d568-0568-4027-9929-105a19b980f9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.236665ms Jan 13 00:33:33.707: INFO: Pod "pod-secrets-15b5d568-0568-4027-9929-105a19b980f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008967554s Jan 13 00:33:35.713: INFO: Pod "pod-secrets-15b5d568-0568-4027-9929-105a19b980f9": Phase="Running", Reason="", readiness=true. Elapsed: 4.015051438s Jan 13 00:33:37.718: INFO: Pod "pod-secrets-15b5d568-0568-4027-9929-105a19b980f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.020102505s STEP: Saw pod success Jan 13 00:33:37.718: INFO: Pod "pod-secrets-15b5d568-0568-4027-9929-105a19b980f9" satisfied condition "Succeeded or Failed" Jan 13 00:33:37.722: INFO: Trying to get logs from node leguer-worker pod pod-secrets-15b5d568-0568-4027-9929-105a19b980f9 container secret-volume-test: STEP: delete the pod Jan 13 00:33:37.767: INFO: Waiting for pod pod-secrets-15b5d568-0568-4027-9929-105a19b980f9 to disappear Jan 13 00:33:37.783: INFO: Pod pod-secrets-15b5d568-0568-4027-9929-105a19b980f9 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:33:37.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-916" for this suite. 
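Note: the defaultMode secret case above corresponds to a secret volume along these lines. A sketch only; the secret name, mount path and mode value are illustrative, and the test asserts the resulting file mode from inside the pod:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-defaultmode-demo
    spec:
      restartPolicy: Never
      containers:
      - name: secret-volume-test
        image: busybox
        command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
          readOnly: true
      volumes:
      - name: secret-volume
        secret:
          secretName: secret-test-demo
          defaultMode: 0400    # the knob this conformance test exercises
    EOF
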
• [SLOW TEST:6.601 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":263,"skipped":4526,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:33:37.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 13 00:33:37.893: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e07660c7-d5f1-4aff-87f9-d96f4cc2ae1f" in namespace "downward-api-5504" to be "Succeeded or Failed" Jan 13 00:33:37.936: INFO: Pod "downwardapi-volume-e07660c7-d5f1-4aff-87f9-d96f4cc2ae1f": Phase="Pending", Reason="", readiness=false. Elapsed: 42.608957ms Jan 13 00:33:39.946: INFO: Pod "downwardapi-volume-e07660c7-d5f1-4aff-87f9-d96f4cc2ae1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052683446s Jan 13 00:33:41.950: INFO: Pod "downwardapi-volume-e07660c7-d5f1-4aff-87f9-d96f4cc2ae1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05702899s STEP: Saw pod success Jan 13 00:33:41.951: INFO: Pod "downwardapi-volume-e07660c7-d5f1-4aff-87f9-d96f4cc2ae1f" satisfied condition "Succeeded or Failed" Jan 13 00:33:41.959: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-e07660c7-d5f1-4aff-87f9-d96f4cc2ae1f container client-container: STEP: delete the pod Jan 13 00:33:42.021: INFO: Waiting for pod downwardapi-volume-e07660c7-d5f1-4aff-87f9-d96f4cc2ae1f to disappear Jan 13 00:33:42.041: INFO: Pod downwardapi-volume-e07660c7-d5f1-4aff-87f9-d96f4cc2ae1f no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:33:42.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5504" for this suite. 
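Note: the cpu-request downward API volume above is populated from a resourceFieldRef. A minimal sketch, with placeholder names, file path and request value:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-cpu-request-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
        resources:
          requests:
            cpu: 250m
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu    # the value the test reads back from the mounted file
    EOF
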
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":309,"completed":264,"skipped":4534,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SS ------------------------------ [k8s.io] Pods should delete a collection of pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:33:42.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should delete a collection of pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Create set of pods Jan 13 00:33:42.193: INFO: created test-pod-1 Jan 13 00:33:42.209: INFO: created test-pod-2 Jan 13 00:33:42.277: INFO: created test-pod-3 STEP: waiting for all 3 pods to be located STEP: waiting for all pods to be deleted [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:33:42.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6316" for this suite. •{"msg":"PASSED [k8s.io] Pods should delete a collection of pods [Conformance]","total":309,"completed":265,"skipped":4536,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:33:42.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test override command Jan 13 00:33:42.576: INFO: Waiting up to 5m0s for pod "client-containers-776702cb-129d-439a-8144-2e3e17e1dcc7" in namespace "containers-3834" to be "Succeeded or Failed" Jan 13 00:33:42.595: INFO: Pod "client-containers-776702cb-129d-439a-8144-2e3e17e1dcc7": Phase="Pending", Reason="", readiness=false. Elapsed: 19.176498ms Jan 13 00:33:44.656: INFO: Pod "client-containers-776702cb-129d-439a-8144-2e3e17e1dcc7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.080271579s Jan 13 00:33:46.661: INFO: Pod "client-containers-776702cb-129d-439a-8144-2e3e17e1dcc7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.08500313s STEP: Saw pod success Jan 13 00:33:46.661: INFO: Pod "client-containers-776702cb-129d-439a-8144-2e3e17e1dcc7" satisfied condition "Succeeded or Failed" Jan 13 00:33:46.664: INFO: Trying to get logs from node leguer-worker pod client-containers-776702cb-129d-439a-8144-2e3e17e1dcc7 container agnhost-container: STEP: delete the pod Jan 13 00:33:46.738: INFO: Waiting for pod client-containers-776702cb-129d-439a-8144-2e3e17e1dcc7 to disappear Jan 13 00:33:46.751: INFO: Pod client-containers-776702cb-129d-439a-8144-2e3e17e1dcc7 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:33:46.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3834" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":309,"completed":266,"skipped":4597,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:33:46.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2820.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2820.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 13 00:33:54.940: INFO: DNS probes using dns-2820/dns-test-5cab84a1-c4e6-47fe-b958-b478b20b5619 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:33:54.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2820" for this suite. • [SLOW TEST:8.301 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":309,"completed":267,"skipped":4608,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:33:55.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Jan 13 00:33:55.537: INFO: Waiting up to 1m0s for all nodes to be ready Jan 13 00:34:55.564: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:34:55.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:679 [It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 00:34:55.648: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden: may not be changed in an update. 
Jan 13 00:34:55.652: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: Value: Forbidden: may not be changed in an update. [AfterEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:34:55.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-7513" for this suite. [AfterEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:693 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:34:55.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-395" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:60.728 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:673 verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":309,"completed":268,"skipped":4621,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:34:55.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:34:55.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3049" for this suite. 
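Note: the ServiceAccount lifecycle steps above (create, patch, find by label selector, delete) have direct kubectl equivalents; the ServiceAccount name and label are illustrative, the namespace is the one from the log:

    kubectl create serviceaccount e2e-sa -n svcaccounts-3049
    kubectl patch serviceaccount e2e-sa -n svcaccounts-3049 \
      -p '{"metadata":{"labels":{"e2e":"patched"}}}'
    kubectl get serviceaccounts -n svcaccounts-3049 -l e2e=patched
    kubectl delete serviceaccount e2e-sa -n svcaccounts-3049
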
•{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":309,"completed":269,"skipped":4697,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:34:56.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod pod-subpath-test-downwardapi-2w67 STEP: Creating a pod to test atomic-volume-subpath Jan 13 00:34:56.153: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-2w67" in namespace "subpath-3003" to be "Succeeded or Failed" Jan 13 00:34:56.167: INFO: Pod "pod-subpath-test-downwardapi-2w67": Phase="Pending", Reason="", readiness=false. Elapsed: 14.36566ms Jan 13 00:34:58.172: INFO: Pod "pod-subpath-test-downwardapi-2w67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019086202s Jan 13 00:35:00.176: INFO: Pod "pod-subpath-test-downwardapi-2w67": Phase="Running", Reason="", readiness=true. Elapsed: 4.023015037s Jan 13 00:35:02.181: INFO: Pod "pod-subpath-test-downwardapi-2w67": Phase="Running", Reason="", readiness=true. Elapsed: 6.027871585s Jan 13 00:35:04.186: INFO: Pod "pod-subpath-test-downwardapi-2w67": Phase="Running", Reason="", readiness=true. Elapsed: 8.032839387s Jan 13 00:35:06.191: INFO: Pod "pod-subpath-test-downwardapi-2w67": Phase="Running", Reason="", readiness=true. Elapsed: 10.038025147s Jan 13 00:35:08.197: INFO: Pod "pod-subpath-test-downwardapi-2w67": Phase="Running", Reason="", readiness=true. Elapsed: 12.043660138s Jan 13 00:35:10.202: INFO: Pod "pod-subpath-test-downwardapi-2w67": Phase="Running", Reason="", readiness=true. Elapsed: 14.048706015s Jan 13 00:35:12.207: INFO: Pod "pod-subpath-test-downwardapi-2w67": Phase="Running", Reason="", readiness=true. Elapsed: 16.053551488s Jan 13 00:35:14.211: INFO: Pod "pod-subpath-test-downwardapi-2w67": Phase="Running", Reason="", readiness=true. Elapsed: 18.05780118s Jan 13 00:35:16.217: INFO: Pod "pod-subpath-test-downwardapi-2w67": Phase="Running", Reason="", readiness=true. Elapsed: 20.063460446s Jan 13 00:35:18.220: INFO: Pod "pod-subpath-test-downwardapi-2w67": Phase="Running", Reason="", readiness=true. Elapsed: 22.067337626s Jan 13 00:35:20.225: INFO: Pod "pod-subpath-test-downwardapi-2w67": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.071768109s STEP: Saw pod success Jan 13 00:35:20.225: INFO: Pod "pod-subpath-test-downwardapi-2w67" satisfied condition "Succeeded or Failed" Jan 13 00:35:20.228: INFO: Trying to get logs from node leguer-worker pod pod-subpath-test-downwardapi-2w67 container test-container-subpath-downwardapi-2w67: STEP: delete the pod Jan 13 00:35:20.278: INFO: Waiting for pod pod-subpath-test-downwardapi-2w67 to disappear Jan 13 00:35:20.293: INFO: Pod pod-subpath-test-downwardapi-2w67 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-2w67 Jan 13 00:35:20.293: INFO: Deleting pod "pod-subpath-test-downwardapi-2w67" in namespace "subpath-3003" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:35:20.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3003" for this suite. • [SLOW TEST:24.283 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":309,"completed":270,"skipped":4711,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:35:20.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 13 00:35:20.423: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0b5ba6e7-1553-4365-b8af-52e224469666" in namespace "downward-api-1143" to be "Succeeded or Failed" Jan 13 00:35:20.426: INFO: Pod "downwardapi-volume-0b5ba6e7-1553-4365-b8af-52e224469666": Phase="Pending", Reason="", readiness=false. Elapsed: 3.35997ms Jan 13 00:35:22.431: INFO: Pod "downwardapi-volume-0b5ba6e7-1553-4365-b8af-52e224469666": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008486767s Jan 13 00:35:24.436: INFO: Pod "downwardapi-volume-0b5ba6e7-1553-4365-b8af-52e224469666": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012862616s STEP: Saw pod success Jan 13 00:35:24.436: INFO: Pod "downwardapi-volume-0b5ba6e7-1553-4365-b8af-52e224469666" satisfied condition "Succeeded or Failed" Jan 13 00:35:24.438: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-0b5ba6e7-1553-4365-b8af-52e224469666 container client-container: STEP: delete the pod Jan 13 00:35:24.646: INFO: Waiting for pod downwardapi-volume-0b5ba6e7-1553-4365-b8af-52e224469666 to disappear Jan 13 00:35:24.714: INFO: Pod downwardapi-volume-0b5ba6e7-1553-4365-b8af-52e224469666 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:35:24.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1143" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":309,"completed":271,"skipped":4711,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:35:24.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Jan 13 00:35:24.859: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Jan 13 00:35:24.863: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Jan 13 00:35:24.863: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Jan 13 00:35:24.882: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Jan 13 00:35:24.882: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Jan 13 00:35:24.937: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Jan 13 00:35:24.937: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Jan 13 00:35:32.936: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:35:32.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-2535" for this suite. 
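Note: the defaults asserted in the LimitRange case above (requests cpu=100m, memory=209715200, ephemeral-storage=214748364800; limits cpu=500m, memory=500Mi, ephemeral-storage=500Gi) correspond to a spec along these lines. A sketch, not the generated object; the object name is illustrative and the byte quantities are shown in Mi/Gi form:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: LimitRange
    metadata:
      name: limitrange-demo
    spec:
      limits:
      - type: Container
        defaultRequest:           # applied to containers that omit requests
          cpu: 100m
          memory: 200Mi
          ephemeral-storage: 200Gi
        default:                  # applied to containers that omit limits
          cpu: 500m
          memory: 500Mi
          ephemeral-storage: 500Gi
        # the test also sets min/max and verifies that out-of-range pods are rejected
    EOF
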
• [SLOW TEST:8.265 seconds] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":309,"completed":272,"skipped":4721,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:35:32.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:35:44.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2842" for this suite. • [SLOW TEST:11.572 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":309,"completed":273,"skipped":4730,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:35:44.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 13 00:35:45.127: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 13 00:35:47.139: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746094945, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746094945, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746094945, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746094945, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 13 00:35:50.174: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 00:35:50.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4134-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:35:51.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6128" for this suite. STEP: Destroying namespace "webhook-6128-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:7.010 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":309,"completed":274,"skipped":4742,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:35:51.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 13 00:35:51.639: INFO: Waiting up to 5m0s for pod "downwardapi-volume-00c11a0e-c3e9-4c8a-9218-85875eca7587" in namespace "downward-api-2947" to be "Succeeded or Failed" Jan 13 00:35:51.655: INFO: Pod "downwardapi-volume-00c11a0e-c3e9-4c8a-9218-85875eca7587": Phase="Pending", Reason="", readiness=false. Elapsed: 15.918231ms Jan 13 00:35:53.661: INFO: Pod "downwardapi-volume-00c11a0e-c3e9-4c8a-9218-85875eca7587": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022108143s Jan 13 00:35:55.697: INFO: Pod "downwardapi-volume-00c11a0e-c3e9-4c8a-9218-85875eca7587": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057776227s STEP: Saw pod success Jan 13 00:35:55.697: INFO: Pod "downwardapi-volume-00c11a0e-c3e9-4c8a-9218-85875eca7587" satisfied condition "Succeeded or Failed" Jan 13 00:35:55.700: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-00c11a0e-c3e9-4c8a-9218-85875eca7587 container client-container: STEP: delete the pod Jan 13 00:35:55.761: INFO: Waiting for pod downwardapi-volume-00c11a0e-c3e9-4c8a-9218-85875eca7587 to disappear Jan 13 00:35:55.775: INFO: Pod downwardapi-volume-00c11a0e-c3e9-4c8a-9218-85875eca7587 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:35:55.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2947" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":309,"completed":275,"skipped":4749,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:35:55.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 13 00:35:56.672: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Jan 13 00:35:58.683: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746094956, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746094956, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746094956, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746094956, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 13 00:36:01.721: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:36:02.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2526" for this suite. STEP: Destroying namespace "webhook-2526-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:6.791 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":309,"completed":276,"skipped":4755,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:36:02.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-test-volume-map-18c1db2c-d5f4-4aec-99d5-a04832e046c6 STEP: Creating a pod to test consume configMaps Jan 13 00:36:02.880: INFO: Waiting up to 5m0s for pod "pod-configmaps-0053de05-be42-4ffb-8b25-564a914e1ecb" in namespace "configmap-4402" to be "Succeeded or Failed" Jan 13 00:36:03.440: INFO: Pod "pod-configmaps-0053de05-be42-4ffb-8b25-564a914e1ecb": Phase="Pending", Reason="", readiness=false. Elapsed: 560.36471ms Jan 13 00:36:05.446: INFO: Pod "pod-configmaps-0053de05-be42-4ffb-8b25-564a914e1ecb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.565565063s Jan 13 00:36:07.448: INFO: Pod "pod-configmaps-0053de05-be42-4ffb-8b25-564a914e1ecb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.568210581s STEP: Saw pod success Jan 13 00:36:07.448: INFO: Pod "pod-configmaps-0053de05-be42-4ffb-8b25-564a914e1ecb" satisfied condition "Succeeded or Failed" Jan 13 00:36:07.451: INFO: Trying to get logs from node leguer-worker2 pod pod-configmaps-0053de05-be42-4ffb-8b25-564a914e1ecb container agnhost-container: STEP: delete the pod Jan 13 00:36:07.484: INFO: Waiting for pod pod-configmaps-0053de05-be42-4ffb-8b25-564a914e1ecb to disappear Jan 13 00:36:07.502: INFO: Pod pod-configmaps-0053de05-be42-4ffb-8b25-564a914e1ecb no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:36:07.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4402" for this suite. 
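Outside the suite, the same ConfigMap key-to-path mapping can be expressed as below; the ConfigMap name, key and paths are illustrative:

  kubectl create configmap demo-config --from-literal=data-2=value-2
  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: configmap-mapping-demo
  spec:
    restartPolicy: Never
    containers:
    - name: agnhost-container
      image: busybox:1.28
      command: ["cat", "/etc/config/path/to/data-2"]   # reads the mapped path, not the key name
      volumeMounts:
      - name: cfg
        mountPath: /etc/config
    volumes:
    - name: cfg
      configMap:
        name: demo-config
        items:
        - key: data-2
          path: path/to/data-2            # the key is projected under a different relative path
  EOF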
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":309,"completed":277,"skipped":4796,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:36:07.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 13 00:36:07.653: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f48aed11-0ffe-4d35-988c-f8b05f60a46a" in namespace "downward-api-4154" to be "Succeeded or Failed" Jan 13 00:36:07.693: INFO: Pod "downwardapi-volume-f48aed11-0ffe-4d35-988c-f8b05f60a46a": Phase="Pending", Reason="", readiness=false. Elapsed: 39.721523ms Jan 13 00:36:09.721: INFO: Pod "downwardapi-volume-f48aed11-0ffe-4d35-988c-f8b05f60a46a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068400382s Jan 13 00:36:11.725: INFO: Pod "downwardapi-volume-f48aed11-0ffe-4d35-988c-f8b05f60a46a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.072417907s STEP: Saw pod success Jan 13 00:36:11.726: INFO: Pod "downwardapi-volume-f48aed11-0ffe-4d35-988c-f8b05f60a46a" satisfied condition "Succeeded or Failed" Jan 13 00:36:11.728: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-f48aed11-0ffe-4d35-988c-f8b05f60a46a container client-container: STEP: delete the pod Jan 13 00:36:11.746: INFO: Waiting for pod downwardapi-volume-f48aed11-0ffe-4d35-988c-f8b05f60a46a to disappear Jan 13 00:36:11.767: INFO: Pod downwardapi-volume-f48aed11-0ffe-4d35-988c-f8b05f60a46a no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:36:11.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4154" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":309,"completed":278,"skipped":4813,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:36:11.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3704.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3704.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3704.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3704.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 13 00:36:18.321: INFO: DNS probes using dns-test-d6e13a56-1b68-4e09-a78d-9dae0790c5e7 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3704.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3704.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3704.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3704.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 13 00:36:24.487: INFO: File wheezy_udp@dns-test-service-3.dns-3704.svc.cluster.local from pod dns-3704/dns-test-f05e15da-2065-4859-bc17-79b6bf64dc25 contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 13 00:36:24.491: INFO: File jessie_udp@dns-test-service-3.dns-3704.svc.cluster.local from pod dns-3704/dns-test-f05e15da-2065-4859-bc17-79b6bf64dc25 contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 13 00:36:24.491: INFO: Lookups using dns-3704/dns-test-f05e15da-2065-4859-bc17-79b6bf64dc25 failed for: [wheezy_udp@dns-test-service-3.dns-3704.svc.cluster.local jessie_udp@dns-test-service-3.dns-3704.svc.cluster.local] Jan 13 00:36:29.497: INFO: File wheezy_udp@dns-test-service-3.dns-3704.svc.cluster.local from pod dns-3704/dns-test-f05e15da-2065-4859-bc17-79b6bf64dc25 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Jan 13 00:36:29.501: INFO: File jessie_udp@dns-test-service-3.dns-3704.svc.cluster.local from pod dns-3704/dns-test-f05e15da-2065-4859-bc17-79b6bf64dc25 contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 13 00:36:29.501: INFO: Lookups using dns-3704/dns-test-f05e15da-2065-4859-bc17-79b6bf64dc25 failed for: [wheezy_udp@dns-test-service-3.dns-3704.svc.cluster.local jessie_udp@dns-test-service-3.dns-3704.svc.cluster.local] Jan 13 00:36:34.497: INFO: File wheezy_udp@dns-test-service-3.dns-3704.svc.cluster.local from pod dns-3704/dns-test-f05e15da-2065-4859-bc17-79b6bf64dc25 contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 13 00:36:34.501: INFO: File jessie_udp@dns-test-service-3.dns-3704.svc.cluster.local from pod dns-3704/dns-test-f05e15da-2065-4859-bc17-79b6bf64dc25 contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 13 00:36:34.501: INFO: Lookups using dns-3704/dns-test-f05e15da-2065-4859-bc17-79b6bf64dc25 failed for: [wheezy_udp@dns-test-service-3.dns-3704.svc.cluster.local jessie_udp@dns-test-service-3.dns-3704.svc.cluster.local] Jan 13 00:36:39.496: INFO: File wheezy_udp@dns-test-service-3.dns-3704.svc.cluster.local from pod dns-3704/dns-test-f05e15da-2065-4859-bc17-79b6bf64dc25 contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 13 00:36:39.501: INFO: File jessie_udp@dns-test-service-3.dns-3704.svc.cluster.local from pod dns-3704/dns-test-f05e15da-2065-4859-bc17-79b6bf64dc25 contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 13 00:36:39.501: INFO: Lookups using dns-3704/dns-test-f05e15da-2065-4859-bc17-79b6bf64dc25 failed for: [wheezy_udp@dns-test-service-3.dns-3704.svc.cluster.local jessie_udp@dns-test-service-3.dns-3704.svc.cluster.local] Jan 13 00:36:44.497: INFO: File wheezy_udp@dns-test-service-3.dns-3704.svc.cluster.local from pod dns-3704/dns-test-f05e15da-2065-4859-bc17-79b6bf64dc25 contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 13 00:36:44.501: INFO: File jessie_udp@dns-test-service-3.dns-3704.svc.cluster.local from pod dns-3704/dns-test-f05e15da-2065-4859-bc17-79b6bf64dc25 contains 'foo.example.com. ' instead of 'bar.example.com.' 
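The repeated "foo.example.com instead of bar.example.com" lines here are the probe waiting for cluster DNS to pick up a patched ExternalName target; done by hand the sequence is roughly the following, with an illustrative service name and namespace:

  kubectl create service externalname dns-test-service-3 --external-name foo.example.com
  # From inside the cluster the service name now resolves as a CNAME to foo.example.com.
  kubectl patch service dns-test-service-3 -p '{"spec":{"externalName":"bar.example.com"}}'
  # Until the cluster DNS (CoreDNS in most clusters) serves the updated record, lookups keep
  # returning the old CNAME, which is exactly what the retry loop around this point shows.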
Jan 13 00:36:44.501: INFO: Lookups using dns-3704/dns-test-f05e15da-2065-4859-bc17-79b6bf64dc25 failed for: [wheezy_udp@dns-test-service-3.dns-3704.svc.cluster.local jessie_udp@dns-test-service-3.dns-3704.svc.cluster.local] Jan 13 00:36:49.500: INFO: DNS probes using dns-test-f05e15da-2065-4859-bc17-79b6bf64dc25 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3704.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3704.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3704.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3704.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 13 00:36:58.258: INFO: DNS probes using dns-test-f8ce4da1-5514-42cb-9a0d-bce63be06a7d succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:36:58.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3704" for this suite. • [SLOW TEST:46.794 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":309,"completed":279,"skipped":4842,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:36:58.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0777 on tmpfs Jan 13 00:36:58.960: INFO: Waiting up to 5m0s for pod "pod-01c1ae98-3ebd-4050-9377-4343394ecea1" in namespace "emptydir-3177" to be "Succeeded or Failed" Jan 13 00:36:58.987: INFO: Pod "pod-01c1ae98-3ebd-4050-9377-4343394ecea1": Phase="Pending", Reason="", readiness=false. Elapsed: 26.798396ms Jan 13 00:37:00.992: INFO: Pod "pod-01c1ae98-3ebd-4050-9377-4343394ecea1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031078654s Jan 13 00:37:03.054: INFO: Pod "pod-01c1ae98-3ebd-4050-9377-4343394ecea1": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.093575166s Jan 13 00:37:05.082: INFO: Pod "pod-01c1ae98-3ebd-4050-9377-4343394ecea1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.121683589s STEP: Saw pod success Jan 13 00:37:05.082: INFO: Pod "pod-01c1ae98-3ebd-4050-9377-4343394ecea1" satisfied condition "Succeeded or Failed" Jan 13 00:37:05.085: INFO: Trying to get logs from node leguer-worker2 pod pod-01c1ae98-3ebd-4050-9377-4343394ecea1 container test-container: STEP: delete the pod Jan 13 00:37:05.149: INFO: Waiting for pod pod-01c1ae98-3ebd-4050-9377-4343394ecea1 to disappear Jan 13 00:37:05.159: INFO: Pod pod-01c1ae98-3ebd-4050-9377-4343394ecea1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:37:05.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3177" for this suite. • [SLOW TEST:6.598 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":280,"skipped":4843,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:37:05.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:37:11.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4279" for this suite. STEP: Destroying namespace "nsdeletetest-5650" for this suite. Jan 13 00:37:11.514: INFO: Namespace nsdeletetest-5650 was already deleted STEP: Destroying namespace "nsdeletetest-9699" for this suite. 
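The namespace lifecycle exercised above can be walked through manually; the namespace and service names here are illustrative:

  kubectl create namespace nsdelete-demo
  kubectl create service clusterip test-service --tcp=80:80 -n nsdelete-demo
  kubectl delete namespace nsdelete-demo
  kubectl wait --for=delete namespace/nsdelete-demo --timeout=120s
  # Recreating a namespace with the same name gives a fresh, empty namespace:
  kubectl create namespace nsdelete-demo
  kubectl get services -n nsdelete-demo    # "No resources found": the service did not survive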
• [SLOW TEST:6.352 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":309,"completed":281,"skipped":4883,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:37:11.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:37:19.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5198" for this suite. 
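The behaviour under test here is simply that a container whose command exits non-zero ends up with a populated terminated state; for example, with an illustrative pod name and image:

  kubectl run always-fails --image=busybox:1.28 --restart=Never -- /bin/false
  # After the container has run, its terminated state carries a reason (typically "Error"):
  kubectl get pod always-fails \
    -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}{"\n"}'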
• [SLOW TEST:8.178 seconds] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:79 should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":309,"completed":282,"skipped":4888,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:37:19.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 13 00:37:19.829: INFO: Waiting up to 5m0s for pod "downwardapi-volume-296af071-76fc-4e30-876e-413838a751cf" in namespace "downward-api-1218" to be "Succeeded or Failed" Jan 13 00:37:19.832: INFO: Pod "downwardapi-volume-296af071-76fc-4e30-876e-413838a751cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.63098ms Jan 13 00:37:21.837: INFO: Pod "downwardapi-volume-296af071-76fc-4e30-876e-413838a751cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007745566s Jan 13 00:37:23.842: INFO: Pod "downwardapi-volume-296af071-76fc-4e30-876e-413838a751cf": Phase="Running", Reason="", readiness=true. Elapsed: 4.012587016s Jan 13 00:37:25.847: INFO: Pod "downwardapi-volume-296af071-76fc-4e30-876e-413838a751cf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.017382245s STEP: Saw pod success Jan 13 00:37:25.847: INFO: Pod "downwardapi-volume-296af071-76fc-4e30-876e-413838a751cf" satisfied condition "Succeeded or Failed" Jan 13 00:37:25.850: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-296af071-76fc-4e30-876e-413838a751cf container client-container: STEP: delete the pod Jan 13 00:37:25.901: INFO: Waiting for pod downwardapi-volume-296af071-76fc-4e30-876e-413838a751cf to disappear Jan 13 00:37:25.950: INFO: Pod downwardapi-volume-296af071-76fc-4e30-876e-413838a751cf no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:37:25.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1218" for this suite. • [SLOW TEST:6.259 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":309,"completed":283,"skipped":4892,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:37:25.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:37:30.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6392" for this suite. 
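The Docker Containers case above relies on the runtime falling back to the image's own ENTRYPOINT/CMD when the pod spec sets neither command nor args; a minimal illustration, where the pod name and image are assumptions:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: image-defaults-demo
  spec:
    restartPolicy: Never
    containers:
    - name: main
      image: busybox:1.28     # no command/args: the image's default CMD ("sh") runs
  EOF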
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":309,"completed":284,"skipped":4915,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:37:30.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 13 00:37:30.297: INFO: Waiting up to 5m0s for pod "downwardapi-volume-03e9fb08-0816-4a02-a0a2-79a31a03e538" in namespace "projected-8195" to be "Succeeded or Failed" Jan 13 00:37:30.300: INFO: Pod "downwardapi-volume-03e9fb08-0816-4a02-a0a2-79a31a03e538": Phase="Pending", Reason="", readiness=false. Elapsed: 2.804918ms Jan 13 00:37:32.305: INFO: Pod "downwardapi-volume-03e9fb08-0816-4a02-a0a2-79a31a03e538": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007947641s Jan 13 00:37:34.309: INFO: Pod "downwardapi-volume-03e9fb08-0816-4a02-a0a2-79a31a03e538": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011579026s STEP: Saw pod success Jan 13 00:37:34.309: INFO: Pod "downwardapi-volume-03e9fb08-0816-4a02-a0a2-79a31a03e538" satisfied condition "Succeeded or Failed" Jan 13 00:37:34.311: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-03e9fb08-0816-4a02-a0a2-79a31a03e538 container client-container: STEP: delete the pod Jan 13 00:37:34.331: INFO: Waiting for pod downwardapi-volume-03e9fb08-0816-4a02-a0a2-79a31a03e538 to disappear Jan 13 00:37:34.336: INFO: Pod downwardapi-volume-03e9fb08-0816-4a02-a0a2-79a31a03e538 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:37:34.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8195" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":285,"skipped":4949,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:37:34.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:37:42.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-225" for this suite. • [SLOW TEST:8.218 seconds] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 when scheduling a read only busybox container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:188 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":286,"skipped":4954,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:37:42.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: getting the auto-created API token Jan 13 00:37:43.170: INFO: created pod pod-service-account-defaultsa Jan 13 00:37:43.170: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jan 13 00:37:43.186: INFO: created pod 
pod-service-account-mountsa Jan 13 00:37:43.186: INFO: pod pod-service-account-mountsa service account token volume mount: true Jan 13 00:37:43.206: INFO: created pod pod-service-account-nomountsa Jan 13 00:37:43.206: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jan 13 00:37:43.238: INFO: created pod pod-service-account-defaultsa-mountspec Jan 13 00:37:43.238: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jan 13 00:37:43.266: INFO: created pod pod-service-account-mountsa-mountspec Jan 13 00:37:43.266: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jan 13 00:37:43.298: INFO: created pod pod-service-account-nomountsa-mountspec Jan 13 00:37:43.298: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jan 13 00:37:43.326: INFO: created pod pod-service-account-defaultsa-nomountspec Jan 13 00:37:43.326: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jan 13 00:37:43.377: INFO: created pod pod-service-account-mountsa-nomountspec Jan 13 00:37:43.377: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jan 13 00:37:43.398: INFO: created pod pod-service-account-nomountsa-nomountspec Jan 13 00:37:43.398: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:37:43.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1814" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":309,"completed":287,"skipped":4967,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:37:43.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name s-test-opt-del-1ec08a53-1571-4e0e-a5a9-b4849b6ff72d STEP: Creating secret with name s-test-opt-upd-6359bf3c-874e-419c-ae7d-1da5ee9c5a4c STEP: Creating the pod STEP: Deleting secret s-test-opt-del-1ec08a53-1571-4e0e-a5a9-b4849b6ff72d STEP: Updating secret s-test-opt-upd-6359bf3c-874e-419c-ae7d-1da5ee9c5a4c STEP: Creating secret with name s-test-opt-create-94df9881-115f-4ee3-9a19-69012e4a376e STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:39:31.124: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "projected-7005" for this suite. • [SLOW TEST:107.679 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":309,"completed":288,"skipped":4967,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:39:31.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name secret-test-map-3f01f902-ae72-4ad0-8221-90d4cbc21155 STEP: Creating a pod to test consume secrets Jan 13 00:39:31.269: INFO: Waiting up to 5m0s for pod "pod-secrets-00f1d2cc-025e-49ca-9cd5-ee2d4fafe706" in namespace "secrets-3282" to be "Succeeded or Failed" Jan 13 00:39:31.284: INFO: Pod "pod-secrets-00f1d2cc-025e-49ca-9cd5-ee2d4fafe706": Phase="Pending", Reason="", readiness=false. Elapsed: 15.37036ms Jan 13 00:39:33.365: INFO: Pod "pod-secrets-00f1d2cc-025e-49ca-9cd5-ee2d4fafe706": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095661364s Jan 13 00:39:35.369: INFO: Pod "pod-secrets-00f1d2cc-025e-49ca-9cd5-ee2d4fafe706": Phase="Running", Reason="", readiness=true. Elapsed: 4.100144633s Jan 13 00:39:37.374: INFO: Pod "pod-secrets-00f1d2cc-025e-49ca-9cd5-ee2d4fafe706": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.104868978s STEP: Saw pod success Jan 13 00:39:37.374: INFO: Pod "pod-secrets-00f1d2cc-025e-49ca-9cd5-ee2d4fafe706" satisfied condition "Succeeded or Failed" Jan 13 00:39:37.377: INFO: Trying to get logs from node leguer-worker2 pod pod-secrets-00f1d2cc-025e-49ca-9cd5-ee2d4fafe706 container secret-volume-test: STEP: delete the pod Jan 13 00:39:37.511: INFO: Waiting for pod pod-secrets-00f1d2cc-025e-49ca-9cd5-ee2d4fafe706 to disappear Jan 13 00:39:37.531: INFO: Pod pod-secrets-00f1d2cc-025e-49ca-9cd5-ee2d4fafe706 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:39:37.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3282" for this suite. 
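The secret-volume tests in this stretch exercise an items mapping from secret key to file path and, in the "optional updates" case, optional: true so the pod tolerates a referenced secret being absent. A combined sketch, with illustrative names:

  kubectl create secret generic demo-secret --from-literal=data-1=value-1
  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-mapping-demo
  spec:
    restartPolicy: Never
    containers:
    - name: secret-volume-test
      image: busybox:1.28
      command: ["cat", "/etc/secret-volume/new-path-data-1"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
    volumes:
    - name: secret-volume
      secret:
        secretName: demo-secret
        optional: true                    # the pod can start even if the secret is missing
        items:
        - key: data-1
          path: new-path-data-1
  EOF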
• [SLOW TEST:6.406 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":309,"completed":289,"skipped":4972,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:39:37.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Jan 13 00:39:46.953: INFO: 4 pods remaining Jan 13 00:39:46.953: INFO: 0 pods has nil DeletionTimestamp Jan 13 00:39:46.953: INFO: Jan 13 00:39:48.837: INFO: 0 pods remaining Jan 13 00:39:48.837: INFO: 0 pods has nil DeletionTimestamp Jan 13 00:39:48.837: INFO: Jan 13 00:39:50.495: INFO: 0 pods remaining Jan 13 00:39:50.495: INFO: 0 pods has nil DeletionTimestamp Jan 13 00:39:50.495: INFO: STEP: Gathering metrics W0113 00:39:51.606718 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Jan 13 00:40:53.628: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:40:53.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1460" for this suite. 
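The deleteOptions the garbage collector test refers to is propagationPolicy: Foreground, which keeps the owner (here a ReplicationController) around, with a deletionTimestamp set, until its dependent pods are gone. A rough manual equivalent, with illustrative names, using kubectl proxy plus a raw DELETE:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: demo-rc
  spec:
    replicas: 2
    selector:
      app: demo-rc
    template:
      metadata:
        labels:
          app: demo-rc
      spec:
        containers:
        - name: pause
          image: k8s.gcr.io/pause:3.2
  EOF
  kubectl proxy --port=8001 &
  curl -X DELETE 'http://127.0.0.1:8001/api/v1/namespaces/default/replicationcontrollers/demo-rc' \
    -H 'Content-Type: application/json' \
    -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'
  # While its pods are still terminating, the RC is retained with a deletionTimestamp:
  kubectl get rc demo-rc -o jsonpath='{.metadata.deletionTimestamp}{"\n"}'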
• [SLOW TEST:76.098 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":309,"completed":290,"skipped":4994,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:40:53.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Jan 13 00:40:53.752: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 13 00:40:53.761: INFO: Waiting for terminating namespaces to be deleted... 
Jan 13 00:40:53.764: INFO: Logging pods the apiserver thinks is on node leguer-worker before test Jan 13 00:40:53.770: INFO: rally-a8f48c6d-3kmika18-pdtzv from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 00:40:53.770: INFO: Container rally-a8f48c6d-3kmika18 ready: true, restart count 0 Jan 13 00:40:53.770: INFO: rally-a8f48c6d-3kmika18-pllzg from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 00:40:53.770: INFO: Container rally-a8f48c6d-3kmika18 ready: true, restart count 0 Jan 13 00:40:53.770: INFO: rally-a8f48c6d-4cyi45kq-j5tzz from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 00:40:53.770: INFO: Container rally-a8f48c6d-4cyi45kq ready: true, restart count 0 Jan 13 00:40:53.770: INFO: rally-a8f48c6d-f3hls6a3-57dwc from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 13 00:40:53.770: INFO: Container rally-a8f48c6d-f3hls6a3 ready: true, restart count 0 Jan 13 00:40:53.770: INFO: rally-a8f48c6d-1y3amfc0-lp8st from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 13 00:40:53.770: INFO: Container rally-a8f48c6d-1y3amfc0 ready: true, restart count 0 Jan 13 00:40:53.770: INFO: rally-a8f48c6d-9pqmjehi-9zwjj from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 00:40:53.770: INFO: Container rally-a8f48c6d-9pqmjehi ready: true, restart count 0 Jan 13 00:40:53.770: INFO: chaos-controller-manager-69c479c674-s796v from default started at 2021-01-10 20:58:24 +0000 UTC (1 container statuses recorded) Jan 13 00:40:53.770: INFO: Container chaos-mesh ready: true, restart count 0 Jan 13 00:40:53.770: INFO: chaos-daemon-lv692 from default started at 2021-01-10 20:58:25 +0000 UTC (1 container statuses recorded) Jan 13 00:40:53.770: INFO: Container chaos-daemon ready: true, restart count 0 Jan 13 00:40:53.770: INFO: kindnet-psm25 from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Jan 13 00:40:53.770: INFO: Container kindnet-cni ready: true, restart count 0 Jan 13 00:40:53.770: INFO: kube-proxy-bmbcs from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Jan 13 00:40:53.770: INFO: Container kube-proxy ready: true, restart count 0 Jan 13 00:40:53.770: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test Jan 13 00:40:53.776: INFO: rally-a8f48c6d-4cyi45kq-knr4r from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 00:40:53.776: INFO: Container rally-a8f48c6d-4cyi45kq ready: true, restart count 0 Jan 13 00:40:53.776: INFO: rally-a8f48c6d-f3hls6a3-dwt8n from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 13 00:40:53.776: INFO: Container rally-a8f48c6d-f3hls6a3 ready: true, restart count 0 Jan 13 00:40:53.776: INFO: rally-a8f48c6d-1y3amfc0-hh9qk from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 13 00:40:53.776: INFO: Container rally-a8f48c6d-1y3amfc0 ready: true, restart count 0 Jan 13 00:40:53.776: INFO: rally-a8f48c6d-9pqmjehi-85slb from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 00:40:53.776: INFO: Container rally-a8f48c6d-9pqmjehi ready: true, 
restart count 0 Jan 13 00:40:53.776: INFO: rally-a8f48c6d-vnukxqu0-llj24 from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 00:40:53.776: INFO: Container rally-a8f48c6d-vnukxqu0 ready: true, restart count 0 Jan 13 00:40:53.776: INFO: rally-a8f48c6d-vnukxqu0-v85kr from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 00:40:53.776: INFO: Container rally-a8f48c6d-vnukxqu0 ready: true, restart count 0 Jan 13 00:40:53.776: INFO: chaos-daemon-ffkg7 from default started at 2021-01-10 20:58:25 +0000 UTC (1 container statuses recorded) Jan 13 00:40:53.776: INFO: Container chaos-daemon ready: true, restart count 0 Jan 13 00:40:53.776: INFO: kindnet-8wggd from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Jan 13 00:40:53.776: INFO: Container kindnet-cni ready: true, restart count 0 Jan 13 00:40:53.776: INFO: kube-proxy-29gxg from kube-system started at 2021-01-10 17:38:09 +0000 UTC (1 container statuses recorded) Jan 13 00:40:53.776: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.1659a3cf99ba565b], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match Pod's node affinity.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:40:54.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1336" for this suite. 
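The predicate being validated is that a non-empty nodeSelector matching no node keeps the pod Pending with a FailedScheduling event like the one quoted above; a minimal reproduction, where the pod name, label key and image are illustrative:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: restricted-pod-demo
  spec:
    nodeSelector:
      example.com/no-such-label: "42"    # deliberately matches no node
    containers:
    - name: pause
      image: k8s.gcr.io/pause:3.2
  EOF
  # The pod stays Pending; the scheduler records why:
  kubectl get events --field-selector involvedObject.name=restricted-pod-demo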
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":309,"completed":291,"skipped":5050,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:40:54.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service in namespace services-34 STEP: creating service affinity-clusterip-transition in namespace services-34 STEP: creating replication controller affinity-clusterip-transition in namespace services-34 I0113 00:40:54.971359 7 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-34, replica count: 3 I0113 00:40:58.021789 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0113 00:41:01.022008 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 13 00:41:01.028: INFO: Creating new exec pod Jan 13 00:41:06.097: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-34 exec execpod-affinityc249s -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Jan 13 00:41:09.352: INFO: stderr: "I0113 00:41:09.255955 4112 log.go:181] (0xc00003a160) (0xc000f88000) Create stream\nI0113 00:41:09.256037 4112 log.go:181] (0xc00003a160) (0xc000f88000) Stream added, broadcasting: 1\nI0113 00:41:09.258917 4112 log.go:181] (0xc00003a160) Reply frame received for 1\nI0113 00:41:09.258957 4112 log.go:181] (0xc00003a160) (0xc000f880a0) Create stream\nI0113 00:41:09.258966 4112 log.go:181] (0xc00003a160) (0xc000f880a0) Stream added, broadcasting: 3\nI0113 00:41:09.259865 4112 log.go:181] (0xc00003a160) Reply frame received for 3\nI0113 00:41:09.259903 4112 log.go:181] (0xc00003a160) (0xc000f88140) Create stream\nI0113 00:41:09.259914 4112 log.go:181] (0xc00003a160) (0xc000f88140) Stream added, broadcasting: 5\nI0113 00:41:09.260669 4112 log.go:181] (0xc00003a160) Reply frame received for 5\nI0113 00:41:09.346288 4112 log.go:181] (0xc00003a160) Data frame received for 5\nI0113 00:41:09.346312 4112 
log.go:181] (0xc000f88140) (5) Data frame handling\nI0113 00:41:09.346321 4112 log.go:181] (0xc000f88140) (5) Data frame sent\nI0113 00:41:09.346327 4112 log.go:181] (0xc00003a160) Data frame received for 5\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI0113 00:41:09.346333 4112 log.go:181] (0xc000f88140) (5) Data frame handling\nI0113 00:41:09.346367 4112 log.go:181] (0xc00003a160) Data frame received for 3\nI0113 00:41:09.346383 4112 log.go:181] (0xc000f880a0) (3) Data frame handling\nI0113 00:41:09.347564 4112 log.go:181] (0xc00003a160) Data frame received for 1\nI0113 00:41:09.347586 4112 log.go:181] (0xc000f88000) (1) Data frame handling\nI0113 00:41:09.347601 4112 log.go:181] (0xc000f88000) (1) Data frame sent\nI0113 00:41:09.347619 4112 log.go:181] (0xc00003a160) (0xc000f88000) Stream removed, broadcasting: 1\nI0113 00:41:09.347632 4112 log.go:181] (0xc00003a160) Go away received\nI0113 00:41:09.347971 4112 log.go:181] (0xc00003a160) (0xc000f88000) Stream removed, broadcasting: 1\nI0113 00:41:09.347997 4112 log.go:181] (0xc00003a160) (0xc000f880a0) Stream removed, broadcasting: 3\nI0113 00:41:09.348012 4112 log.go:181] (0xc00003a160) (0xc000f88140) Stream removed, broadcasting: 5\n" Jan 13 00:41:09.352: INFO: stdout: "" Jan 13 00:41:09.353: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-34 exec execpod-affinityc249s -- /bin/sh -x -c nc -zv -t -w 2 10.96.34.220 80' Jan 13 00:41:09.574: INFO: stderr: "I0113 00:41:09.489134 4131 log.go:181] (0xc000546000) (0xc000d081e0) Create stream\nI0113 00:41:09.489193 4131 log.go:181] (0xc000546000) (0xc000d081e0) Stream added, broadcasting: 1\nI0113 00:41:09.491168 4131 log.go:181] (0xc000546000) Reply frame received for 1\nI0113 00:41:09.491214 4131 log.go:181] (0xc000546000) (0xc000d08280) Create stream\nI0113 00:41:09.491226 4131 log.go:181] (0xc000546000) (0xc000d08280) Stream added, broadcasting: 3\nI0113 00:41:09.492298 4131 log.go:181] (0xc000546000) Reply frame received for 3\nI0113 00:41:09.492355 4131 log.go:181] (0xc000546000) (0xc000373180) Create stream\nI0113 00:41:09.492376 4131 log.go:181] (0xc000546000) (0xc000373180) Stream added, broadcasting: 5\nI0113 00:41:09.493272 4131 log.go:181] (0xc000546000) Reply frame received for 5\nI0113 00:41:09.567363 4131 log.go:181] (0xc000546000) Data frame received for 5\nI0113 00:41:09.567414 4131 log.go:181] (0xc000373180) (5) Data frame handling\nI0113 00:41:09.567438 4131 log.go:181] (0xc000373180) (5) Data frame sent\nI0113 00:41:09.567452 4131 log.go:181] (0xc000546000) Data frame received for 5\nI0113 00:41:09.567464 4131 log.go:181] (0xc000373180) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.34.220 80\nConnection to 10.96.34.220 80 port [tcp/http] succeeded!\nI0113 00:41:09.567496 4131 log.go:181] (0xc000546000) Data frame received for 3\nI0113 00:41:09.567526 4131 log.go:181] (0xc000d08280) (3) Data frame handling\nI0113 00:41:09.568613 4131 log.go:181] (0xc000546000) Data frame received for 1\nI0113 00:41:09.568634 4131 log.go:181] (0xc000d081e0) (1) Data frame handling\nI0113 00:41:09.568647 4131 log.go:181] (0xc000d081e0) (1) Data frame sent\nI0113 00:41:09.568658 4131 log.go:181] (0xc000546000) (0xc000d081e0) Stream removed, broadcasting: 1\nI0113 00:41:09.568672 4131 log.go:181] (0xc000546000) Go away received\nI0113 00:41:09.569223 4131 log.go:181] (0xc000546000) (0xc000d081e0) Stream removed, broadcasting: 1\nI0113 
00:41:09.569248 4131 log.go:181] (0xc000546000) (0xc000d08280) Stream removed, broadcasting: 3\nI0113 00:41:09.569260 4131 log.go:181] (0xc000546000) (0xc000373180) Stream removed, broadcasting: 5\n" Jan 13 00:41:09.574: INFO: stdout: "" Jan 13 00:41:09.582: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-34 exec execpod-affinityc249s -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.34.220:80/ ; done' Jan 13 00:41:09.901: INFO: stderr: "I0113 00:41:09.735463 4149 log.go:181] (0xc0000e2000) (0xc000ab4000) Create stream\nI0113 00:41:09.735520 4149 log.go:181] (0xc0000e2000) (0xc000ab4000) Stream added, broadcasting: 1\nI0113 00:41:09.737491 4149 log.go:181] (0xc0000e2000) Reply frame received for 1\nI0113 00:41:09.737543 4149 log.go:181] (0xc0000e2000) (0xc0005083c0) Create stream\nI0113 00:41:09.737569 4149 log.go:181] (0xc0000e2000) (0xc0005083c0) Stream added, broadcasting: 3\nI0113 00:41:09.738317 4149 log.go:181] (0xc0000e2000) Reply frame received for 3\nI0113 00:41:09.738342 4149 log.go:181] (0xc0000e2000) (0xc000508b40) Create stream\nI0113 00:41:09.738349 4149 log.go:181] (0xc0000e2000) (0xc000508b40) Stream added, broadcasting: 5\nI0113 00:41:09.739234 4149 log.go:181] (0xc0000e2000) Reply frame received for 5\nI0113 00:41:09.811082 4149 log.go:181] (0xc0000e2000) Data frame received for 3\nI0113 00:41:09.811118 4149 log.go:181] (0xc0005083c0) (3) Data frame handling\nI0113 00:41:09.811128 4149 log.go:181] (0xc0005083c0) (3) Data frame sent\nI0113 00:41:09.811161 4149 log.go:181] (0xc0000e2000) Data frame received for 5\nI0113 00:41:09.811172 4149 log.go:181] (0xc000508b40) (5) Data frame handling\nI0113 00:41:09.811178 4149 log.go:181] (0xc000508b40) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.34.220:80/\nI0113 00:41:09.814947 4149 log.go:181] (0xc0000e2000) Data frame received for 3\nI0113 00:41:09.814974 4149 log.go:181] (0xc0005083c0) (3) Data frame handling\nI0113 00:41:09.814994 4149 log.go:181] (0xc0005083c0) (3) Data frame sent\nI0113 00:41:09.815190 4149 log.go:181] (0xc0000e2000) Data frame received for 3\nI0113 00:41:09.815207 4149 log.go:181] (0xc0005083c0) (3) Data frame handling\nI0113 00:41:09.815221 4149 log.go:181] (0xc0000e2000) Data frame received for 5\nI0113 00:41:09.815242 4149 log.go:181] (0xc000508b40) (5) Data frame handling\nI0113 00:41:09.815257 4149 log.go:181] (0xc000508b40) (5) Data frame sent\nI0113 00:41:09.815271 4149 log.go:181] (0xc0000e2000) Data frame received for 5\nI0113 00:41:09.815282 4149 log.go:181] (0xc000508b40) (5) Data frame handling\nI0113 00:41:09.815293 4149 log.go:181] (0xc0005083c0) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.34.220:80/\nI0113 00:41:09.815311 4149 log.go:181] (0xc000508b40) (5) Data frame sent\nI0113 00:41:09.820239 4149 log.go:181] (0xc0000e2000) Data frame received for 3\nI0113 00:41:09.820256 4149 log.go:181] (0xc0005083c0) (3) Data frame handling\nI0113 00:41:09.820264 4149 log.go:181] (0xc0005083c0) (3) Data frame sent\nI0113 00:41:09.820702 4149 log.go:181] (0xc0000e2000) Data frame received for 3\nI0113 00:41:09.820767 4149 log.go:181] (0xc0005083c0) (3) Data frame handling\nI0113 00:41:09.820790 4149 log.go:181] (0xc0005083c0) (3) Data frame sent\nI0113 00:41:09.820812 4149 log.go:181] (0xc0000e2000) Data frame received for 5\nI0113 00:41:09.820994 4149 log.go:181] (0xc000508b40) (5) Data frame handling\nI0113 
00:41:09.821027 4149 log.go:181] (0xc000508b40) (5) Data frame sent\nI0113 00:41:09.821044 4149 log.go:181] (0xc0000e2000) Data frame received for 5\nI0113 00:41:09.821052 4149 log.go:181] (0xc000508b40) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.34.220:80/\nI0113 00:41:09.821084 4149 log.go:181] (0xc000508b40) (5) Data frame sent\nI0113 00:41:09.824973 4149 log.go:181] (0xc0000e2000) Data frame received for 3\nI0113 00:41:09.824988 4149 log.go:181] (0xc0005083c0) (3) Data frame handling\nI0113 00:41:09.825000 4149 log.go:181] (0xc0005083c0) (3) Data frame sent\nI0113 00:41:09.825673 4149 log.go:181] (0xc0000e2000) Data frame received for 3\nI0113 00:41:09.825695 4149 log.go:181] (0xc0005083c0) (3) Data frame handling\nI0113 00:41:09.825705 4149 log.go:181] (0xc0005083c0) (3) Data frame sent\nI0113 00:41:09.825734 4149 log.go:181] (0xc0000e2000) Data frame received for 5\nI0113 00:41:09.825756 4149 log.go:181] (0xc000508b40) (5) Data frame handling\nI0113 00:41:09.825772 4149 log.go:181] (0xc000508b40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.34.220:80/\nI0113 00:41:09.829098 4149 log.go:181] (0xc0000e2000) Data frame received for 3\nI0113 00:41:09.829127 4149 log.go:181] (0xc0005083c0) (3) Data frame handling\nI0113 00:41:09.829151 4149 log.go:181] (0xc0005083c0) (3) Data frame sent\nI0113 00:41:09.829425 4149 log.go:181] (0xc0000e2000) Data frame received for 5\nI0113 00:41:09.829486 4149 log.go:181] (0xc000508b40) (5) Data frame handling\nI0113 00:41:09.829494 4149 log.go:181] (0xc000508b40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.34.220:80/\nI0113 00:41:09.829520 4149 log.go:181] (0xc0000e2000) Data frame received for 3\nI0113 00:41:09.829543 4149 log.go:181] (0xc0005083c0) (3) Data frame handling\nI0113 00:41:09.829561 4149 log.go:181] (0xc0005083c0) (3) Data frame sent\nI0113 00:41:09.832958 4149 log.go:181] (0xc0000e2000) Data frame received for 3\nI0113 00:41:09.832986 4149 log.go:181] (0xc0005083c0) (3) Data frame handling\nI0113 00:41:09.833016 4149 log.go:181] (0xc0005083c0) (3) Data frame sent\nI0113 00:41:09.833315 4149 log.go:181] (0xc0000e2000) Data frame received for 5\nI0113 00:41:09.833329 4149 log.go:181] (0xc000508b40) (5) Data frame handling\nI0113 00:41:09.833338 4149 log.go:181] (0xc000508b40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.34.220:80/\nI0113 00:41:09.833363 4149 log.go:181] (0xc0000e2000) Data frame received for 3\nI0113 00:41:09.833385 4149 log.go:181] (0xc0005083c0) (3) Data frame handling\nI0113 00:41:09.833410 4149 log.go:181] (0xc0005083c0) (3) Data frame sent\nI0113 00:41:09.837388 4149 log.go:181] (0xc0000e2000) Data frame received for 3\nI0113 00:41:09.837411 4149 log.go:181] (0xc0005083c0) (3) Data frame handling\nI0113 00:41:09.837421 4149 log.go:181] (0xc0005083c0) (3) Data frame sent\nI0113 00:41:09.837898 4149 log.go:181] (0xc0000e2000) Data frame received for 5\nI0113 00:41:09.837936 4149 log.go:181] (0xc000508b40) (5) Data frame handling\nI0113 00:41:09.837963 4149 log.go:181] (0xc000508b40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.34.220:80/\nI0113 00:41:09.838195 4149 log.go:181] (0xc0000e2000) Data frame received for 3\nI0113 00:41:09.838213 4149 log.go:181] (0xc0005083c0) (3) Data frame handling\nI0113 00:41:09.838229 4149 log.go:181] (0xc0005083c0) (3) Data frame sent\nI0113 00:41:09.842855 4149 log.go:181] (0xc0000e2000) Data frame received for 3\nI0113 
00:41:09.842885 4149 log.go:181] (0xc0005083c0) (3) Data frame handling\nI0113 00:41:09.842905 4149 log.go:181] (0xc0005083c0) (3) Data frame sent\nI0113 00:41:09.843728 4149 log.go:181] (0xc0000e2000) Data frame received for 3\nI0113 00:41:09.843755 4149 log.go:181] (0xc0005083c0) (3) Data frame handling\nI0113 00:41:09.843774 4149 log.go:181] (0xc0000e2000) Data frame received for 5\nI0113 00:41:09.843819 4149 log.go:181] (0xc000508b40) (5) Data frame handling\nI0113 00:41:09.843839 4149 log.go:181] (0xc000508b40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.34.220:80/\nI0113 00:41:09.843857 4149 log.go:181] (0xc0005083c0) (3) Data frame sent\nI0113 00:41:09.848178 4149 log.go:181] (0xc0000e2000) Data frame received for 3\nI0113 00:41:09.848213 4149 log.go:181] (0xc0005083c0) (3) Data frame handling\nI0113 00:41:09.848234 4149 log.go:181] (0xc0005083c0) (3) Data frame sent\nI0113 00:41:09.849504 4149 log.go:181] (0xc0000e2000) Data frame received for 3\nI0113 00:41:09.849531 4149 log.go:181] (0xc0005083c0) (3) Data frame handling\nI0113 00:41:09.849544 4149 log.go:181] (0xc0005083c0) (3) Data frame sent\nI0113 00:41:09.849582 4149 log.go:181] (0xc0000e2000) Data frame received for 5\nI0113 00:41:09.849606 4149 log.go:181] (0xc000508b40) (5) Data frame handling\nI0113 00:41:09.849625 4149 log.go:181] (0xc000508b40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.34.220:80/\nI0113 00:41:09.854065 4149 log.go:181] (0xc0000e2000) Data frame received for 3\nI0113 00:41:09.854097 4149 log.go:181] (0xc0005083c0) (3) Data frame handling\nI0113 00:41:09.854117 4149 log.go:181] (0xc0005083c0) (3) Data frame sent\nI0113 00:41:09.854424 4149 log.go:181] (0xc0000e2000) Data frame received for 3\nI0113 00:41:09.854446 4149 log.go:181] (0xc0005083c0) (3) Data frame handling\nI0113 00:41:09.854459 4149 log.go:181] (0xc0005083c0) (3) Data frame sent\nI0113 00:41:09.854483 4149 log.go:181] (0xc0000e2000) Data frame received for 5\nI0113 00:41:09.854502 4149 log.go:181] (0xc000508b40) (5) Data frame handling\nI0113 00:41:09.854526 4149 log.go:181] (0xc000508b40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.34.220:80/\nI0113 00:41:09.861596 4149 log.go:181] (0xc0000e2000) Data frame received for 3\nI0113 00:41:09.861621 4149 log.go:181] (0xc0005083c0) (3) Data frame handling\nI0113 00:41:09.861647 4149 log.go:181] (0xc0005083c0) (3) Data frame sent\nI0113 00:41:09.862299 4149 log.go:181] (0xc0000e2000) Data frame received for 5\nI0113 00:41:09.862342 4149 log.go:181] (0xc000508b40) (5) Data frame handling\nI0113 00:41:09.862356 4149 log.go:181] (0xc000508b40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.34.220:80/\nI0113 00:41:09.862375 4149 log.go:181] (0xc0000e2000) Data frame received for 3\nI0113 00:41:09.862390 4149 log.go:181] (0xc0005083c0) (3) Data frame handling\nI0113 00:41:09.862404 4149 log.go:181] (0xc0005083c0) (3) Data frame sent\nI0113 00:41:09.865540 4149 log.go:181] (0xc0000e2000) Data frame received for 3\nI0113 00:41:09.865554 4149 log.go:181] (0xc0005083c0) (3) Data frame handling\nI0113 00:41:09.865560 4149 log.go:181] (0xc0005083c0) (3) Data frame sent\nI0113 00:41:09.866591 4149 log.go:181] (0xc0000e2000) Data frame received for 5\nI0113 00:41:09.866645 4149 log.go:181] (0xc000508b40) (5) Data frame handling\nI0113 00:41:09.866674 4149 log.go:181] (0xc000508b40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.34.220:80/\nI0113 00:41:09.866705 
4149 log.go:181] (0xc0000e2000) Data frame received for 3\nI0113 00:41:09.866737 4149 log.go:181] (0xc0005083c0) (3) Data frame handling\nI0113 00:41:09.866765 4149 log.go:181] (0xc0005083c0) (3) Data frame sent\nI0113 00:41:09.872737 4149 log.go:181] (0xc0000e2000) Data frame received for 3\nI0113 00:41:09.872753 4149 log.go:181] (0xc0005083c0) (3) Data frame handling\nI0113 00:41:09.872759 4149 log.go:181] (0xc0005083c0) (3) Data frame sent\nI0113 00:41:09.873213 4149 log.go:181] (0xc0000e2000) Data frame received for 5\nI0113 00:41:09.873230 4149 log.go:181] (0xc000508b40) (5) Data frame handling\nI0113 00:41:09.873239 4149 log.go:181] (0xc000508b40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.34.220:80/\nI0113 00:41:09.873269 4149 log.go:181] (0xc0000e2000) Data frame received for 3\nI0113 00:41:09.873303 4149 log.go:181] (0xc0005083c0) (3) Data frame handling\nI0113 00:41:09.873336 4149 log.go:181] (0xc0005083c0) (3) Data frame sent\nI0113 00:41:09.879841 4149 log.go:181] (0xc0000e2000) Data frame received for 3\nI0113 00:41:09.879866 4149 log.go:181] (0xc0005083c0) (3) Data frame handling\nI0113 00:41:09.879893 4149 log.go:181] (0xc0005083c0) (3) Data frame sent\nI0113 00:41:09.880338 4149 log.go:181] (0xc0000e2000) Data frame received for 3\nI0113 00:41:09.880366 4149 log.go:181] (0xc0005083c0) (3) Data frame handling\nI0113 00:41:09.880382 4149 log.go:181] (0xc0005083c0) (3) Data frame sent\nI0113 00:41:09.880410 4149 log.go:181] (0xc0000e2000) Data frame received for 5\nI0113 00:41:09.880424 4149 log.go:181] (0xc000508b40) (5) Data frame handling\nI0113 00:41:09.880443 4149 log.go:181] (0xc000508b40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.34.220:80/\nI0113 00:41:09.884037 4149 log.go:181] (0xc0000e2000) Data frame received for 3\nI0113 00:41:09.884055 4149 log.go:181] (0xc0005083c0) (3) Data frame handling\nI0113 00:41:09.884064 4149 log.go:181] (0xc0005083c0) (3) Data frame sent\nI0113 00:41:09.884427 4149 log.go:181] (0xc0000e2000) Data frame received for 3\nI0113 00:41:09.884443 4149 log.go:181] (0xc0005083c0) (3) Data frame handling\nI0113 00:41:09.884451 4149 log.go:181] (0xc0005083c0) (3) Data frame sent\nI0113 00:41:09.884536 4149 log.go:181] (0xc0000e2000) Data frame received for 5\nI0113 00:41:09.884554 4149 log.go:181] (0xc000508b40) (5) Data frame handling\nI0113 00:41:09.884573 4149 log.go:181] (0xc000508b40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.34.220:80/\nI0113 00:41:09.887873 4149 log.go:181] (0xc0000e2000) Data frame received for 3\nI0113 00:41:09.887893 4149 log.go:181] (0xc0005083c0) (3) Data frame handling\nI0113 00:41:09.887913 4149 log.go:181] (0xc0005083c0) (3) Data frame sent\nI0113 00:41:09.888498 4149 log.go:181] (0xc0000e2000) Data frame received for 5\nI0113 00:41:09.888514 4149 log.go:181] (0xc000508b40) (5) Data frame handling\nI0113 00:41:09.888522 4149 log.go:181] (0xc000508b40) (5) Data frame sent\nI0113 00:41:09.888528 4149 log.go:181] (0xc0000e2000) Data frame received for 5\nI0113 00:41:09.888536 4149 log.go:181] (0xc000508b40) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.34.220:80/\nI0113 00:41:09.888555 4149 log.go:181] (0xc0000e2000) Data frame received for 3\nI0113 00:41:09.888580 4149 log.go:181] (0xc0005083c0) (3) Data frame handling\nI0113 00:41:09.888596 4149 log.go:181] (0xc0005083c0) (3) Data frame sent\nI0113 00:41:09.888614 4149 log.go:181] (0xc000508b40) (5) Data frame sent\nI0113 00:41:09.892576 
4149 log.go:181] (0xc0000e2000) Data frame received for 3\nI0113 00:41:09.892606 4149 log.go:181] (0xc0005083c0) (3) Data frame handling\nI0113 00:41:09.892620 4149 log.go:181] (0xc0005083c0) (3) Data frame sent\nI0113 00:41:09.893469 4149 log.go:181] (0xc0000e2000) Data frame received for 3\nI0113 00:41:09.893504 4149 log.go:181] (0xc0000e2000) Data frame received for 5\nI0113 00:41:09.893550 4149 log.go:181] (0xc000508b40) (5) Data frame handling\nI0113 00:41:09.893595 4149 log.go:181] (0xc0005083c0) (3) Data frame handling\nI0113 00:41:09.895463 4149 log.go:181] (0xc0000e2000) Data frame received for 1\nI0113 00:41:09.895503 4149 log.go:181] (0xc000ab4000) (1) Data frame handling\nI0113 00:41:09.895538 4149 log.go:181] (0xc000ab4000) (1) Data frame sent\nI0113 00:41:09.895566 4149 log.go:181] (0xc0000e2000) (0xc000ab4000) Stream removed, broadcasting: 1\nI0113 00:41:09.895594 4149 log.go:181] (0xc0000e2000) Go away received\nI0113 00:41:09.896081 4149 log.go:181] (0xc0000e2000) (0xc000ab4000) Stream removed, broadcasting: 1\nI0113 00:41:09.896122 4149 log.go:181] (0xc0000e2000) (0xc0005083c0) Stream removed, broadcasting: 3\nI0113 00:41:09.896149 4149 log.go:181] (0xc0000e2000) (0xc000508b40) Stream removed, broadcasting: 5\n" Jan 13 00:41:09.902: INFO: stdout: "\naffinity-clusterip-transition-vvhkx\naffinity-clusterip-transition-qw6bz\naffinity-clusterip-transition-qw6bz\naffinity-clusterip-transition-qw6bz\naffinity-clusterip-transition-vvhkx\naffinity-clusterip-transition-vvhkx\naffinity-clusterip-transition-vrwmb\naffinity-clusterip-transition-vrwmb\naffinity-clusterip-transition-vrwmb\naffinity-clusterip-transition-vvhkx\naffinity-clusterip-transition-qw6bz\naffinity-clusterip-transition-qw6bz\naffinity-clusterip-transition-qw6bz\naffinity-clusterip-transition-qw6bz\naffinity-clusterip-transition-vvhkx\naffinity-clusterip-transition-vvhkx" Jan 13 00:41:09.902: INFO: Received response from host: affinity-clusterip-transition-vvhkx Jan 13 00:41:09.902: INFO: Received response from host: affinity-clusterip-transition-qw6bz Jan 13 00:41:09.902: INFO: Received response from host: affinity-clusterip-transition-qw6bz Jan 13 00:41:09.902: INFO: Received response from host: affinity-clusterip-transition-qw6bz Jan 13 00:41:09.902: INFO: Received response from host: affinity-clusterip-transition-vvhkx Jan 13 00:41:09.902: INFO: Received response from host: affinity-clusterip-transition-vvhkx Jan 13 00:41:09.902: INFO: Received response from host: affinity-clusterip-transition-vrwmb Jan 13 00:41:09.902: INFO: Received response from host: affinity-clusterip-transition-vrwmb Jan 13 00:41:09.902: INFO: Received response from host: affinity-clusterip-transition-vrwmb Jan 13 00:41:09.902: INFO: Received response from host: affinity-clusterip-transition-vvhkx Jan 13 00:41:09.902: INFO: Received response from host: affinity-clusterip-transition-qw6bz Jan 13 00:41:09.902: INFO: Received response from host: affinity-clusterip-transition-qw6bz Jan 13 00:41:09.902: INFO: Received response from host: affinity-clusterip-transition-qw6bz Jan 13 00:41:09.902: INFO: Received response from host: affinity-clusterip-transition-qw6bz Jan 13 00:41:09.902: INFO: Received response from host: affinity-clusterip-transition-vvhkx Jan 13 00:41:09.902: INFO: Received response from host: affinity-clusterip-transition-vvhkx Jan 13 00:41:09.911: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-34 exec execpod-affinityc249s -- /bin/sh -x -c for i 
in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.34.220:80/ ; done' Jan 13 00:41:10.247: INFO: stderr: "I0113 00:41:10.073773 4168 log.go:181] (0xc000012000) (0xc000b8e000) Create stream\nI0113 00:41:10.073863 4168 log.go:181] (0xc000012000) (0xc000b8e000) Stream added, broadcasting: 1\nI0113 00:41:10.076280 4168 log.go:181] (0xc000012000) Reply frame received for 1\nI0113 00:41:10.076316 4168 log.go:181] (0xc000012000) (0xc000b8e0a0) Create stream\nI0113 00:41:10.076327 4168 log.go:181] (0xc000012000) (0xc000b8e0a0) Stream added, broadcasting: 3\nI0113 00:41:10.078362 4168 log.go:181] (0xc000012000) Reply frame received for 3\nI0113 00:41:10.078421 4168 log.go:181] (0xc000012000) (0xc0007b2000) Create stream\nI0113 00:41:10.078509 4168 log.go:181] (0xc000012000) (0xc0007b2000) Stream added, broadcasting: 5\nI0113 00:41:10.079466 4168 log.go:181] (0xc000012000) Reply frame received for 5\nI0113 00:41:10.139595 4168 log.go:181] (0xc000012000) Data frame received for 3\nI0113 00:41:10.139628 4168 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI0113 00:41:10.139640 4168 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI0113 00:41:10.139647 4168 log.go:181] (0xc000012000) Data frame received for 5\nI0113 00:41:10.139652 4168 log.go:181] (0xc0007b2000) (5) Data frame handling\nI0113 00:41:10.139660 4168 log.go:181] (0xc0007b2000) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.34.220:80/\nI0113 00:41:10.145349 4168 log.go:181] (0xc000012000) Data frame received for 3\nI0113 00:41:10.145369 4168 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI0113 00:41:10.145377 4168 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI0113 00:41:10.145903 4168 log.go:181] (0xc000012000) Data frame received for 3\nI0113 00:41:10.145928 4168 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI0113 00:41:10.145940 4168 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI0113 00:41:10.145956 4168 log.go:181] (0xc000012000) Data frame received for 5\nI0113 00:41:10.145965 4168 log.go:181] (0xc0007b2000) (5) Data frame handling\nI0113 00:41:10.145973 4168 log.go:181] (0xc0007b2000) (5) Data frame sent\nI0113 00:41:10.145983 4168 log.go:181] (0xc000012000) Data frame received for 5\nI0113 00:41:10.145998 4168 log.go:181] (0xc0007b2000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.34.220:80/\nI0113 00:41:10.146018 4168 log.go:181] (0xc0007b2000) (5) Data frame sent\nI0113 00:41:10.149765 4168 log.go:181] (0xc000012000) Data frame received for 3\nI0113 00:41:10.149798 4168 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI0113 00:41:10.149832 4168 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI0113 00:41:10.150203 4168 log.go:181] (0xc000012000) Data frame received for 5\nI0113 00:41:10.150245 4168 log.go:181] (0xc0007b2000) (5) Data frame handling\nI0113 00:41:10.150271 4168 log.go:181] (0xc0007b2000) (5) Data frame sent\nI0113 00:41:10.150292 4168 log.go:181] (0xc000012000) Data frame received for 5\nI0113 00:41:10.150311 4168 log.go:181] (0xc0007b2000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.34.220:80/\nI0113 00:41:10.150340 4168 log.go:181] (0xc000012000) Data frame received for 3\nI0113 00:41:10.150378 4168 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI0113 00:41:10.150411 4168 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI0113 00:41:10.150441 4168 log.go:181] (0xc0007b2000) (5) Data frame sent\nI0113 00:41:10.157264 4168 log.go:181] (0xc000012000) Data frame 
received for 3\nI0113 00:41:10.157307 4168 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI0113 00:41:10.157339 4168 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI0113 00:41:10.157536 4168 log.go:181] (0xc000012000) Data frame received for 5\nI0113 00:41:10.157557 4168 log.go:181] (0xc0007b2000) (5) Data frame handling\nI0113 00:41:10.157569 4168 log.go:181] (0xc0007b2000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.34.220:80/\nI0113 00:41:10.157591 4168 log.go:181] (0xc000012000) Data frame received for 3\nI0113 00:41:10.157604 4168 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI0113 00:41:10.157618 4168 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI0113 00:41:10.162872 4168 log.go:181] (0xc000012000) Data frame received for 3\nI0113 00:41:10.162893 4168 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI0113 00:41:10.162905 4168 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI0113 00:41:10.163675 4168 log.go:181] (0xc000012000) Data frame received for 5\nI0113 00:41:10.163709 4168 log.go:181] (0xc000012000) Data frame received for 3\nI0113 00:41:10.163757 4168 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI0113 00:41:10.163784 4168 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI0113 00:41:10.163833 4168 log.go:181] (0xc0007b2000) (5) Data frame handling\nI0113 00:41:10.163884 4168 log.go:181] (0xc0007b2000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.34.220:80/\nI0113 00:41:10.171001 4168 log.go:181] (0xc000012000) Data frame received for 3\nI0113 00:41:10.171033 4168 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI0113 00:41:10.171054 4168 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI0113 00:41:10.171415 4168 log.go:181] (0xc000012000) Data frame received for 3\nI0113 00:41:10.171431 4168 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI0113 00:41:10.171448 4168 log.go:181] (0xc000012000) Data frame received for 5\nI0113 00:41:10.171527 4168 log.go:181] (0xc0007b2000) (5) Data frame handling\nI0113 00:41:10.171550 4168 log.go:181] (0xc0007b2000) (5) Data frame sent\nI0113 00:41:10.171571 4168 log.go:181] (0xc000012000) Data frame received for 5\nI0113 00:41:10.171589 4168 log.go:181] (0xc0007b2000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeoutI0113 00:41:10.171603 4168 log.go:181] (0xc000b8e0a0) (3) Data frame sent\n 2 http://10.96.34.220:80/\nI0113 00:41:10.171623 4168 log.go:181] (0xc0007b2000) (5) Data frame sent\nI0113 00:41:10.179658 4168 log.go:181] (0xc000012000) Data frame received for 3\nI0113 00:41:10.179690 4168 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI0113 00:41:10.179712 4168 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI0113 00:41:10.180237 4168 log.go:181] (0xc000012000) Data frame received for 5\nI0113 00:41:10.180278 4168 log.go:181] (0xc0007b2000) (5) Data frame handling\nI0113 00:41:10.180299 4168 log.go:181] (0xc0007b2000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.34.220:80/\nI0113 00:41:10.180323 4168 log.go:181] (0xc000012000) Data frame received for 3\nI0113 00:41:10.180336 4168 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI0113 00:41:10.180346 4168 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI0113 00:41:10.183755 4168 log.go:181] (0xc000012000) Data frame received for 3\nI0113 00:41:10.183789 4168 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI0113 00:41:10.183819 4168 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI0113 00:41:10.184172 4168 log.go:181] (0xc000012000) Data frame 
received for 3\nI0113 00:41:10.184216 4168 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI0113 00:41:10.184233 4168 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI0113 00:41:10.184252 4168 log.go:181] (0xc000012000) Data frame received for 5\nI0113 00:41:10.184263 4168 log.go:181] (0xc0007b2000) (5) Data frame handling\nI0113 00:41:10.184280 4168 log.go:181] (0xc0007b2000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.34.220:80/\nI0113 00:41:10.191390 4168 log.go:181] (0xc000012000) Data frame received for 3\nI0113 00:41:10.191414 4168 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI0113 00:41:10.191435 4168 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI0113 00:41:10.192117 4168 log.go:181] (0xc000012000) Data frame received for 3\nI0113 00:41:10.192129 4168 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI0113 00:41:10.192135 4168 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI0113 00:41:10.192153 4168 log.go:181] (0xc000012000) Data frame received for 5\nI0113 00:41:10.192181 4168 log.go:181] (0xc0007b2000) (5) Data frame handling\nI0113 00:41:10.192204 4168 log.go:181] (0xc0007b2000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.34.220:80/\nI0113 00:41:10.198297 4168 log.go:181] (0xc000012000) Data frame received for 3\nI0113 00:41:10.198311 4168 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI0113 00:41:10.198318 4168 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI0113 00:41:10.199301 4168 log.go:181] (0xc000012000) Data frame received for 3\nI0113 00:41:10.199326 4168 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI0113 00:41:10.199337 4168 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI0113 00:41:10.199370 4168 log.go:181] (0xc000012000) Data frame received for 5\nI0113 00:41:10.199403 4168 log.go:181] (0xc0007b2000) (5) Data frame handling\nI0113 00:41:10.199423 4168 log.go:181] (0xc0007b2000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.34.220:80/\nI0113 00:41:10.203391 4168 log.go:181] (0xc000012000) Data frame received for 3\nI0113 00:41:10.203441 4168 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI0113 00:41:10.203472 4168 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI0113 00:41:10.203651 4168 log.go:181] (0xc000012000) Data frame received for 5\nI0113 00:41:10.203666 4168 log.go:181] (0xc0007b2000) (5) Data frame handling\nI0113 00:41:10.203673 4168 log.go:181] (0xc0007b2000) (5) Data frame sent\nI0113 00:41:10.203682 4168 log.go:181] (0xc000012000) Data frame received for 3\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.34.220:80/\nI0113 00:41:10.203689 4168 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI0113 00:41:10.203741 4168 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI0113 00:41:10.210919 4168 log.go:181] (0xc000012000) Data frame received for 3\nI0113 00:41:10.210941 4168 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI0113 00:41:10.210962 4168 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI0113 00:41:10.211629 4168 log.go:181] (0xc000012000) Data frame received for 3\nI0113 00:41:10.211662 4168 log.go:181] (0xc000012000) Data frame received for 5\nI0113 00:41:10.211699 4168 log.go:181] (0xc0007b2000) (5) Data frame handling\nI0113 00:41:10.211720 4168 log.go:181] (0xc0007b2000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.34.220:80/\nI0113 00:41:10.211740 4168 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI0113 00:41:10.211754 4168 log.go:181] (0xc000b8e0a0) (3) Data frame 
sent\nI0113 00:41:10.218017 4168 log.go:181] (0xc000012000) Data frame received for 3\nI0113 00:41:10.218038 4168 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI0113 00:41:10.218052 4168 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI0113 00:41:10.218988 4168 log.go:181] (0xc000012000) Data frame received for 3\nI0113 00:41:10.219018 4168 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI0113 00:41:10.219054 4168 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI0113 00:41:10.219080 4168 log.go:181] (0xc000012000) Data frame received for 5\nI0113 00:41:10.219098 4168 log.go:181] (0xc0007b2000) (5) Data frame handling\nI0113 00:41:10.219120 4168 log.go:181] (0xc0007b2000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.34.220:80/\nI0113 00:41:10.223539 4168 log.go:181] (0xc000012000) Data frame received for 3\nI0113 00:41:10.223563 4168 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI0113 00:41:10.223580 4168 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI0113 00:41:10.223948 4168 log.go:181] (0xc000012000) Data frame received for 3\nI0113 00:41:10.223989 4168 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI0113 00:41:10.224018 4168 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI0113 00:41:10.224055 4168 log.go:181] (0xc000012000) Data frame received for 5\nI0113 00:41:10.224070 4168 log.go:181] (0xc0007b2000) (5) Data frame handling\nI0113 00:41:10.224088 4168 log.go:181] (0xc0007b2000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.34.220:80/\nI0113 00:41:10.228082 4168 log.go:181] (0xc000012000) Data frame received for 3\nI0113 00:41:10.228109 4168 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI0113 00:41:10.228130 4168 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI0113 00:41:10.229061 4168 log.go:181] (0xc000012000) Data frame received for 5\nI0113 00:41:10.229088 4168 log.go:181] (0xc0007b2000) (5) Data frame handling\nI0113 00:41:10.229131 4168 log.go:181] (0xc0007b2000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.34.220:80/\nI0113 00:41:10.229161 4168 log.go:181] (0xc000012000) Data frame received for 3\nI0113 00:41:10.229204 4168 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI0113 00:41:10.229234 4168 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI0113 00:41:10.233287 4168 log.go:181] (0xc000012000) Data frame received for 3\nI0113 00:41:10.233320 4168 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI0113 00:41:10.233355 4168 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI0113 00:41:10.233662 4168 log.go:181] (0xc000012000) Data frame received for 3\nI0113 00:41:10.233677 4168 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI0113 00:41:10.233685 4168 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI0113 00:41:10.233704 4168 log.go:181] (0xc000012000) Data frame received for 5\nI0113 00:41:10.233730 4168 log.go:181] (0xc0007b2000) (5) Data frame handling\nI0113 00:41:10.233755 4168 log.go:181] (0xc0007b2000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.34.220:80/\nI0113 00:41:10.239059 4168 log.go:181] (0xc000012000) Data frame received for 3\nI0113 00:41:10.239087 4168 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI0113 00:41:10.239106 4168 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI0113 00:41:10.240101 4168 log.go:181] (0xc000012000) Data frame received for 5\nI0113 00:41:10.240120 4168 log.go:181] (0xc0007b2000) (5) Data frame handling\nI0113 00:41:10.240138 4168 log.go:181] (0xc000012000) Data frame 
received for 3\nI0113 00:41:10.240160 4168 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI0113 00:41:10.242296 4168 log.go:181] (0xc000012000) Data frame received for 1\nI0113 00:41:10.242326 4168 log.go:181] (0xc000b8e000) (1) Data frame handling\nI0113 00:41:10.242350 4168 log.go:181] (0xc000b8e000) (1) Data frame sent\nI0113 00:41:10.242463 4168 log.go:181] (0xc000012000) (0xc000b8e000) Stream removed, broadcasting: 1\nI0113 00:41:10.242605 4168 log.go:181] (0xc000012000) Go away received\nI0113 00:41:10.242879 4168 log.go:181] (0xc000012000) (0xc000b8e000) Stream removed, broadcasting: 1\nI0113 00:41:10.242903 4168 log.go:181] (0xc000012000) (0xc000b8e0a0) Stream removed, broadcasting: 3\nI0113 00:41:10.242913 4168 log.go:181] (0xc000012000) (0xc0007b2000) Stream removed, broadcasting: 5\n" Jan 13 00:41:10.248: INFO: stdout: "\naffinity-clusterip-transition-qw6bz\naffinity-clusterip-transition-qw6bz\naffinity-clusterip-transition-qw6bz\naffinity-clusterip-transition-qw6bz\naffinity-clusterip-transition-qw6bz\naffinity-clusterip-transition-qw6bz\naffinity-clusterip-transition-qw6bz\naffinity-clusterip-transition-qw6bz\naffinity-clusterip-transition-qw6bz\naffinity-clusterip-transition-qw6bz\naffinity-clusterip-transition-qw6bz\naffinity-clusterip-transition-qw6bz\naffinity-clusterip-transition-qw6bz\naffinity-clusterip-transition-qw6bz\naffinity-clusterip-transition-qw6bz\naffinity-clusterip-transition-qw6bz" Jan 13 00:41:10.248: INFO: Received response from host: affinity-clusterip-transition-qw6bz Jan 13 00:41:10.248: INFO: Received response from host: affinity-clusterip-transition-qw6bz Jan 13 00:41:10.248: INFO: Received response from host: affinity-clusterip-transition-qw6bz Jan 13 00:41:10.248: INFO: Received response from host: affinity-clusterip-transition-qw6bz Jan 13 00:41:10.248: INFO: Received response from host: affinity-clusterip-transition-qw6bz Jan 13 00:41:10.248: INFO: Received response from host: affinity-clusterip-transition-qw6bz Jan 13 00:41:10.248: INFO: Received response from host: affinity-clusterip-transition-qw6bz Jan 13 00:41:10.248: INFO: Received response from host: affinity-clusterip-transition-qw6bz Jan 13 00:41:10.248: INFO: Received response from host: affinity-clusterip-transition-qw6bz Jan 13 00:41:10.248: INFO: Received response from host: affinity-clusterip-transition-qw6bz Jan 13 00:41:10.248: INFO: Received response from host: affinity-clusterip-transition-qw6bz Jan 13 00:41:10.248: INFO: Received response from host: affinity-clusterip-transition-qw6bz Jan 13 00:41:10.248: INFO: Received response from host: affinity-clusterip-transition-qw6bz Jan 13 00:41:10.248: INFO: Received response from host: affinity-clusterip-transition-qw6bz Jan 13 00:41:10.248: INFO: Received response from host: affinity-clusterip-transition-qw6bz Jan 13 00:41:10.248: INFO: Received response from host: affinity-clusterip-transition-qw6bz Jan 13 00:41:10.248: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-34, will wait for the garbage collector to delete the pods Jan 13 00:41:10.356: INFO: Deleting ReplicationController affinity-clusterip-transition took: 6.655021ms Jan 13 00:41:10.957: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 600.435686ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:42:10.187: INFO: Waiting up to 3m0s for all (but 0) nodes to 
be ready STEP: Destroying namespace "services-34" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:75.420 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":309,"completed":292,"skipped":5120,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:42:10.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward api env vars Jan 13 00:42:10.309: INFO: Waiting up to 5m0s for pod "downward-api-20999a4c-cc07-48ef-89c7-7e05d21e66e1" in namespace "downward-api-3000" to be "Succeeded or Failed" Jan 13 00:42:10.319: INFO: Pod "downward-api-20999a4c-cc07-48ef-89c7-7e05d21e66e1": Phase="Pending", Reason="", readiness=false. Elapsed: 9.801394ms Jan 13 00:42:12.325: INFO: Pod "downward-api-20999a4c-cc07-48ef-89c7-7e05d21e66e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015688258s Jan 13 00:42:14.329: INFO: Pod "downward-api-20999a4c-cc07-48ef-89c7-7e05d21e66e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020068639s STEP: Saw pod success Jan 13 00:42:14.329: INFO: Pod "downward-api-20999a4c-cc07-48ef-89c7-7e05d21e66e1" satisfied condition "Succeeded or Failed" Jan 13 00:42:14.331: INFO: Trying to get logs from node leguer-worker2 pod downward-api-20999a4c-cc07-48ef-89c7-7e05d21e66e1 container dapi-container: STEP: delete the pod Jan 13 00:42:14.425: INFO: Waiting for pod downward-api-20999a4c-cc07-48ef-89c7-7e05d21e66e1 to disappear Jan 13 00:42:14.433: INFO: Pod downward-api-20999a4c-cc07-48ef-89c7-7e05d21e66e1 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:42:14.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3000" for this suite. 
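The two cases above exercise session-affinity switching on a ClusterIP Service and downward-API environment variables. A minimal sketch of the objects involved follows; all names, selectors, and the container image are illustrative and are not taken from the run above.

# Sketch of a ClusterIP Service with client-IP session affinity. The
# affinity-clusterip-transition case flips spec.sessionAffinity and re-runs
# a curl loop against the cluster IP: with ClientIP every request lands on
# one endpoint (second loop above), with None requests spread across
# endpoints (first loop above).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: affinity-clusterip-demo      # illustrative name
spec:
  selector:
    app: affinity-demo               # illustrative selector
  ports:
  - port: 80
    targetPort: 9376
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
EOF

# Flipping affinity back off (the "transition" being verified):
kubectl patch service affinity-clusterip-demo \
  -p '{"spec":{"sessionAffinity":"None"}}'

# Downward-API pod in the spirit of the dapi-container case above: pod name,
# namespace and IP are injected as environment variables and printed by env.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo            # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                   # illustrative image
    command: ["sh", "-c", "env"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
EOF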
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":309,"completed":293,"skipped":5125,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:42:14.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: set up a multi version CRD Jan 13 00:42:14.549: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:42:34.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9461" for this suite. 
• [SLOW TEST:19.713 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":309,"completed":294,"skipped":5141,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:42:34.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-8qsrv in namespace proxy-1012 I0113 00:42:34.314689 7 runners.go:190] Created replication controller with name: proxy-service-8qsrv, namespace: proxy-1012, replica count: 1 I0113 00:42:35.365057 7 runners.go:190] proxy-service-8qsrv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0113 00:42:36.365271 7 runners.go:190] proxy-service-8qsrv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0113 00:42:37.365454 7 runners.go:190] proxy-service-8qsrv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0113 00:42:38.365626 7 runners.go:190] proxy-service-8qsrv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0113 00:42:39.365808 7 runners.go:190] proxy-service-8qsrv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0113 00:42:40.365987 7 runners.go:190] proxy-service-8qsrv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0113 00:42:41.366172 7 runners.go:190] proxy-service-8qsrv Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 13 00:42:41.369: INFO: setup took 7.117826165s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Jan 13 00:42:41.377: INFO: (0) /api/v1/namespaces/proxy-1012/pods/http:proxy-service-8qsrv-zrbxj:162/proxy/: bar (200; 7.471191ms) Jan 13 00:42:41.377: INFO: (0) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:162/proxy/: bar (200; 7.876605ms) Jan 13 00:42:41.377: INFO: (0) 
/api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj/proxy/: test (200; 8.054029ms) Jan 13 00:42:41.378: INFO: (0) /api/v1/namespaces/proxy-1012/services/proxy-service-8qsrv:portname2/proxy/: bar (200; 7.996416ms) Jan 13 00:42:41.378: INFO: (0) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:160/proxy/: foo (200; 8.049762ms) Jan 13 00:42:41.378: INFO: (0) /api/v1/namespaces/proxy-1012/services/http:proxy-service-8qsrv:portname2/proxy/: bar (200; 7.940472ms) Jan 13 00:42:41.378: INFO: (0) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:1080/proxy/: test<... (200; 8.125934ms) Jan 13 00:42:41.378: INFO: (0) /api/v1/namespaces/proxy-1012/pods/http:proxy-service-8qsrv-zrbxj:160/proxy/: foo (200; 8.145521ms) Jan 13 00:42:41.378: INFO: (0) /api/v1/namespaces/proxy-1012/pods/http:proxy-service-8qsrv-zrbxj:1080/proxy/: ... (200; 8.728798ms) Jan 13 00:42:41.382: INFO: (0) /api/v1/namespaces/proxy-1012/services/proxy-service-8qsrv:portname1/proxy/: foo (200; 11.99108ms) Jan 13 00:42:41.382: INFO: (0) /api/v1/namespaces/proxy-1012/services/http:proxy-service-8qsrv:portname1/proxy/: foo (200; 12.040866ms) Jan 13 00:42:41.386: INFO: (0) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:460/proxy/: tls baz (200; 16.275742ms) Jan 13 00:42:41.386: INFO: (0) /api/v1/namespaces/proxy-1012/services/https:proxy-service-8qsrv:tlsportname1/proxy/: tls baz (200; 16.133035ms) Jan 13 00:42:41.386: INFO: (0) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:462/proxy/: tls qux (200; 16.377935ms) Jan 13 00:42:41.386: INFO: (0) /api/v1/namespaces/proxy-1012/services/https:proxy-service-8qsrv:tlsportname2/proxy/: tls qux (200; 16.142951ms) Jan 13 00:42:41.386: INFO: (0) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:443/proxy/: test<... (200; 5.403879ms) Jan 13 00:42:41.391: INFO: (1) /api/v1/namespaces/proxy-1012/pods/http:proxy-service-8qsrv-zrbxj:1080/proxy/: ... (200; 5.374527ms) Jan 13 00:42:41.391: INFO: (1) /api/v1/namespaces/proxy-1012/services/https:proxy-service-8qsrv:tlsportname1/proxy/: tls baz (200; 5.330658ms) Jan 13 00:42:41.391: INFO: (1) /api/v1/namespaces/proxy-1012/services/http:proxy-service-8qsrv:portname2/proxy/: bar (200; 5.33971ms) Jan 13 00:42:41.391: INFO: (1) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj/proxy/: test (200; 5.379061ms) Jan 13 00:42:41.391: INFO: (1) /api/v1/namespaces/proxy-1012/services/http:proxy-service-8qsrv:portname1/proxy/: foo (200; 5.484469ms) Jan 13 00:42:41.391: INFO: (1) /api/v1/namespaces/proxy-1012/services/proxy-service-8qsrv:portname2/proxy/: bar (200; 5.440782ms) Jan 13 00:42:41.396: INFO: (2) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj/proxy/: test (200; 4.086342ms) Jan 13 00:42:41.396: INFO: (2) /api/v1/namespaces/proxy-1012/services/proxy-service-8qsrv:portname2/proxy/: bar (200; 4.291558ms) Jan 13 00:42:41.396: INFO: (2) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:162/proxy/: bar (200; 4.258437ms) Jan 13 00:42:41.396: INFO: (2) /api/v1/namespaces/proxy-1012/services/https:proxy-service-8qsrv:tlsportname2/proxy/: tls qux (200; 4.352693ms) Jan 13 00:42:41.396: INFO: (2) /api/v1/namespaces/proxy-1012/services/https:proxy-service-8qsrv:tlsportname1/proxy/: tls baz (200; 4.333712ms) Jan 13 00:42:41.396: INFO: (2) /api/v1/namespaces/proxy-1012/pods/http:proxy-service-8qsrv-zrbxj:1080/proxy/: ... 
(200; 4.390227ms) Jan 13 00:42:41.396: INFO: (2) /api/v1/namespaces/proxy-1012/pods/http:proxy-service-8qsrv-zrbxj:160/proxy/: foo (200; 4.595681ms) Jan 13 00:42:41.396: INFO: (2) /api/v1/namespaces/proxy-1012/services/http:proxy-service-8qsrv:portname1/proxy/: foo (200; 4.653985ms) Jan 13 00:42:41.396: INFO: (2) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:1080/proxy/: test<... (200; 4.695267ms) Jan 13 00:42:41.397: INFO: (2) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:160/proxy/: foo (200; 5.067894ms) Jan 13 00:42:41.397: INFO: (2) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:462/proxy/: tls qux (200; 4.987791ms) Jan 13 00:42:41.397: INFO: (2) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:460/proxy/: tls baz (200; 5.136667ms) Jan 13 00:42:41.397: INFO: (2) /api/v1/namespaces/proxy-1012/services/proxy-service-8qsrv:portname1/proxy/: foo (200; 5.269978ms) Jan 13 00:42:41.397: INFO: (2) /api/v1/namespaces/proxy-1012/services/http:proxy-service-8qsrv:portname2/proxy/: bar (200; 5.283121ms) Jan 13 00:42:41.397: INFO: (2) /api/v1/namespaces/proxy-1012/pods/http:proxy-service-8qsrv-zrbxj:162/proxy/: bar (200; 5.34659ms) Jan 13 00:42:41.397: INFO: (2) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:443/proxy/: test (200; 3.829201ms) Jan 13 00:42:41.402: INFO: (3) /api/v1/namespaces/proxy-1012/services/proxy-service-8qsrv:portname1/proxy/: foo (200; 4.705813ms) Jan 13 00:42:41.402: INFO: (3) /api/v1/namespaces/proxy-1012/services/http:proxy-service-8qsrv:portname1/proxy/: foo (200; 4.758403ms) Jan 13 00:42:41.402: INFO: (3) /api/v1/namespaces/proxy-1012/services/proxy-service-8qsrv:portname2/proxy/: bar (200; 4.762373ms) Jan 13 00:42:41.402: INFO: (3) /api/v1/namespaces/proxy-1012/services/http:proxy-service-8qsrv:portname2/proxy/: bar (200; 4.824285ms) Jan 13 00:42:41.402: INFO: (3) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:162/proxy/: bar (200; 4.940668ms) Jan 13 00:42:41.402: INFO: (3) /api/v1/namespaces/proxy-1012/pods/http:proxy-service-8qsrv-zrbxj:1080/proxy/: ... (200; 4.956769ms) Jan 13 00:42:41.402: INFO: (3) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:160/proxy/: foo (200; 5.187485ms) Jan 13 00:42:41.402: INFO: (3) /api/v1/namespaces/proxy-1012/services/https:proxy-service-8qsrv:tlsportname2/proxy/: tls qux (200; 5.182726ms) Jan 13 00:42:41.402: INFO: (3) /api/v1/namespaces/proxy-1012/services/https:proxy-service-8qsrv:tlsportname1/proxy/: tls baz (200; 5.300918ms) Jan 13 00:42:41.402: INFO: (3) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:462/proxy/: tls qux (200; 5.416579ms) Jan 13 00:42:41.402: INFO: (3) /api/v1/namespaces/proxy-1012/pods/http:proxy-service-8qsrv-zrbxj:162/proxy/: bar (200; 5.336517ms) Jan 13 00:42:41.402: INFO: (3) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:443/proxy/: test<... (200; 5.445999ms) Jan 13 00:42:41.406: INFO: (4) /api/v1/namespaces/proxy-1012/pods/http:proxy-service-8qsrv-zrbxj:162/proxy/: bar (200; 3.194679ms) Jan 13 00:42:41.406: INFO: (4) /api/v1/namespaces/proxy-1012/pods/http:proxy-service-8qsrv-zrbxj:160/proxy/: foo (200; 3.450323ms) Jan 13 00:42:41.406: INFO: (4) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:462/proxy/: tls qux (200; 3.724447ms) Jan 13 00:42:41.406: INFO: (4) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:443/proxy/: ... 
(200; 3.744081ms) Jan 13 00:42:41.406: INFO: (4) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:162/proxy/: bar (200; 3.756877ms) Jan 13 00:42:41.406: INFO: (4) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj/proxy/: test (200; 3.721539ms) Jan 13 00:42:41.406: INFO: (4) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:460/proxy/: tls baz (200; 3.756681ms) Jan 13 00:42:41.407: INFO: (4) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:1080/proxy/: test<... (200; 4.148047ms) Jan 13 00:42:41.407: INFO: (4) /api/v1/namespaces/proxy-1012/services/http:proxy-service-8qsrv:portname2/proxy/: bar (200; 4.42642ms) Jan 13 00:42:41.407: INFO: (4) /api/v1/namespaces/proxy-1012/services/proxy-service-8qsrv:portname2/proxy/: bar (200; 4.61759ms) Jan 13 00:42:41.407: INFO: (4) /api/v1/namespaces/proxy-1012/services/https:proxy-service-8qsrv:tlsportname1/proxy/: tls baz (200; 4.712664ms) Jan 13 00:42:41.407: INFO: (4) /api/v1/namespaces/proxy-1012/services/proxy-service-8qsrv:portname1/proxy/: foo (200; 4.702713ms) Jan 13 00:42:41.407: INFO: (4) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:160/proxy/: foo (200; 4.828689ms) Jan 13 00:42:41.407: INFO: (4) /api/v1/namespaces/proxy-1012/services/https:proxy-service-8qsrv:tlsportname2/proxy/: tls qux (200; 4.738646ms) Jan 13 00:42:41.408: INFO: (4) /api/v1/namespaces/proxy-1012/services/http:proxy-service-8qsrv:portname1/proxy/: foo (200; 4.9271ms) Jan 13 00:42:41.411: INFO: (5) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj/proxy/: test (200; 3.312052ms) Jan 13 00:42:41.411: INFO: (5) /api/v1/namespaces/proxy-1012/services/proxy-service-8qsrv:portname2/proxy/: bar (200; 3.531514ms) Jan 13 00:42:41.411: INFO: (5) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:162/proxy/: bar (200; 3.551178ms) Jan 13 00:42:41.411: INFO: (5) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:443/proxy/: test<... (200; 3.686085ms) Jan 13 00:42:41.412: INFO: (5) /api/v1/namespaces/proxy-1012/pods/http:proxy-service-8qsrv-zrbxj:162/proxy/: bar (200; 4.140324ms) Jan 13 00:42:41.412: INFO: (5) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:462/proxy/: tls qux (200; 4.420458ms) Jan 13 00:42:41.412: INFO: (5) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:160/proxy/: foo (200; 4.57471ms) Jan 13 00:42:41.412: INFO: (5) /api/v1/namespaces/proxy-1012/services/proxy-service-8qsrv:portname1/proxy/: foo (200; 4.60118ms) Jan 13 00:42:41.413: INFO: (5) /api/v1/namespaces/proxy-1012/pods/http:proxy-service-8qsrv-zrbxj:1080/proxy/: ... 
(200; 4.893384ms) Jan 13 00:42:41.413: INFO: (5) /api/v1/namespaces/proxy-1012/services/http:proxy-service-8qsrv:portname2/proxy/: bar (200; 4.814984ms) Jan 13 00:42:41.413: INFO: (5) /api/v1/namespaces/proxy-1012/services/https:proxy-service-8qsrv:tlsportname2/proxy/: tls qux (200; 4.945658ms) Jan 13 00:42:41.413: INFO: (5) /api/v1/namespaces/proxy-1012/services/https:proxy-service-8qsrv:tlsportname1/proxy/: tls baz (200; 5.15626ms) Jan 13 00:42:41.413: INFO: (5) /api/v1/namespaces/proxy-1012/pods/http:proxy-service-8qsrv-zrbxj:160/proxy/: foo (200; 5.138795ms) Jan 13 00:42:41.413: INFO: (5) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:460/proxy/: tls baz (200; 5.135748ms) Jan 13 00:42:41.415: INFO: (6) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:460/proxy/: tls baz (200; 2.58772ms) Jan 13 00:42:41.415: INFO: (6) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:443/proxy/: ... (200; 3.062881ms) Jan 13 00:42:41.416: INFO: (6) /api/v1/namespaces/proxy-1012/pods/http:proxy-service-8qsrv-zrbxj:162/proxy/: bar (200; 3.178266ms) Jan 13 00:42:41.416: INFO: (6) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:160/proxy/: foo (200; 3.262524ms) Jan 13 00:42:41.417: INFO: (6) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:1080/proxy/: test<... (200; 4.003122ms) Jan 13 00:42:41.417: INFO: (6) /api/v1/namespaces/proxy-1012/services/https:proxy-service-8qsrv:tlsportname1/proxy/: tls baz (200; 4.04862ms) Jan 13 00:42:41.417: INFO: (6) /api/v1/namespaces/proxy-1012/services/proxy-service-8qsrv:portname1/proxy/: foo (200; 4.361553ms) Jan 13 00:42:41.418: INFO: (6) /api/v1/namespaces/proxy-1012/services/http:proxy-service-8qsrv:portname1/proxy/: foo (200; 4.674114ms) Jan 13 00:42:41.418: INFO: (6) /api/v1/namespaces/proxy-1012/services/http:proxy-service-8qsrv:portname2/proxy/: bar (200; 4.784426ms) Jan 13 00:42:41.418: INFO: (6) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj/proxy/: test (200; 4.756475ms) Jan 13 00:42:41.418: INFO: (6) /api/v1/namespaces/proxy-1012/services/proxy-service-8qsrv:portname2/proxy/: bar (200; 4.842576ms) Jan 13 00:42:41.418: INFO: (6) /api/v1/namespaces/proxy-1012/services/https:proxy-service-8qsrv:tlsportname2/proxy/: tls qux (200; 4.830246ms) Jan 13 00:42:41.422: INFO: (7) /api/v1/namespaces/proxy-1012/services/http:proxy-service-8qsrv:portname2/proxy/: bar (200; 4.617482ms) Jan 13 00:42:41.422: INFO: (7) /api/v1/namespaces/proxy-1012/services/proxy-service-8qsrv:portname2/proxy/: bar (200; 4.47786ms) Jan 13 00:42:41.422: INFO: (7) /api/v1/namespaces/proxy-1012/services/http:proxy-service-8qsrv:portname1/proxy/: foo (200; 4.561006ms) Jan 13 00:42:41.422: INFO: (7) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:460/proxy/: tls baz (200; 4.597741ms) Jan 13 00:42:41.422: INFO: (7) /api/v1/namespaces/proxy-1012/services/https:proxy-service-8qsrv:tlsportname2/proxy/: tls qux (200; 4.627266ms) Jan 13 00:42:41.422: INFO: (7) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj/proxy/: test (200; 4.639993ms) Jan 13 00:42:41.423: INFO: (7) /api/v1/namespaces/proxy-1012/pods/http:proxy-service-8qsrv-zrbxj:162/proxy/: bar (200; 4.628296ms) Jan 13 00:42:41.423: INFO: (7) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:443/proxy/: ... 
(200; 4.586113ms) Jan 13 00:42:41.423: INFO: (7) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:162/proxy/: bar (200; 4.657873ms) Jan 13 00:42:41.423: INFO: (7) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:160/proxy/: foo (200; 4.566143ms) Jan 13 00:42:41.423: INFO: (7) /api/v1/namespaces/proxy-1012/services/proxy-service-8qsrv:portname1/proxy/: foo (200; 4.728724ms) Jan 13 00:42:41.423: INFO: (7) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:462/proxy/: tls qux (200; 4.745294ms) Jan 13 00:42:41.423: INFO: (7) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:1080/proxy/: test<... (200; 4.762289ms) Jan 13 00:42:41.426: INFO: (8) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:160/proxy/: foo (200; 3.436293ms) Jan 13 00:42:41.426: INFO: (8) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:1080/proxy/: test<... (200; 3.48733ms) Jan 13 00:42:41.426: INFO: (8) /api/v1/namespaces/proxy-1012/pods/http:proxy-service-8qsrv-zrbxj:160/proxy/: foo (200; 3.473635ms) Jan 13 00:42:41.426: INFO: (8) /api/v1/namespaces/proxy-1012/services/https:proxy-service-8qsrv:tlsportname2/proxy/: tls qux (200; 3.563268ms) Jan 13 00:42:41.426: INFO: (8) /api/v1/namespaces/proxy-1012/pods/http:proxy-service-8qsrv-zrbxj:162/proxy/: bar (200; 3.519057ms) Jan 13 00:42:41.427: INFO: (8) /api/v1/namespaces/proxy-1012/services/proxy-service-8qsrv:portname2/proxy/: bar (200; 4.153635ms) Jan 13 00:42:41.427: INFO: (8) /api/v1/namespaces/proxy-1012/services/proxy-service-8qsrv:portname1/proxy/: foo (200; 4.14667ms) Jan 13 00:42:41.427: INFO: (8) /api/v1/namespaces/proxy-1012/services/http:proxy-service-8qsrv:portname1/proxy/: foo (200; 4.134652ms) Jan 13 00:42:41.427: INFO: (8) /api/v1/namespaces/proxy-1012/services/http:proxy-service-8qsrv:portname2/proxy/: bar (200; 4.227582ms) Jan 13 00:42:41.427: INFO: (8) /api/v1/namespaces/proxy-1012/services/https:proxy-service-8qsrv:tlsportname1/proxy/: tls baz (200; 4.267626ms) Jan 13 00:42:41.427: INFO: (8) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:462/proxy/: tls qux (200; 4.25544ms) Jan 13 00:42:41.427: INFO: (8) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj/proxy/: test (200; 4.227498ms) Jan 13 00:42:41.427: INFO: (8) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:460/proxy/: tls baz (200; 4.214904ms) Jan 13 00:42:41.427: INFO: (8) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:443/proxy/: ... (200; 6.46959ms) Jan 13 00:42:41.443: INFO: (9) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:462/proxy/: tls qux (200; 13.433319ms) Jan 13 00:42:41.443: INFO: (9) /api/v1/namespaces/proxy-1012/services/proxy-service-8qsrv:portname2/proxy/: bar (200; 13.555573ms) Jan 13 00:42:41.443: INFO: (9) /api/v1/namespaces/proxy-1012/services/https:proxy-service-8qsrv:tlsportname1/proxy/: tls baz (200; 13.600086ms) Jan 13 00:42:41.443: INFO: (9) /api/v1/namespaces/proxy-1012/services/http:proxy-service-8qsrv:portname1/proxy/: foo (200; 13.527305ms) Jan 13 00:42:41.443: INFO: (9) /api/v1/namespaces/proxy-1012/services/proxy-service-8qsrv:portname1/proxy/: foo (200; 13.579814ms) Jan 13 00:42:41.443: INFO: (9) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:1080/proxy/: test<... 
(200; 13.58914ms) Jan 13 00:42:41.443: INFO: (9) /api/v1/namespaces/proxy-1012/pods/http:proxy-service-8qsrv-zrbxj:162/proxy/: bar (200; 14.060549ms) Jan 13 00:42:41.444: INFO: (9) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj/proxy/: test (200; 14.313499ms) Jan 13 00:42:41.444: INFO: (9) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:162/proxy/: bar (200; 14.299113ms) Jan 13 00:42:41.444: INFO: (9) /api/v1/namespaces/proxy-1012/services/http:proxy-service-8qsrv:portname2/proxy/: bar (200; 14.317649ms) Jan 13 00:42:41.444: INFO: (9) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:160/proxy/: foo (200; 14.364398ms) Jan 13 00:42:41.444: INFO: (9) /api/v1/namespaces/proxy-1012/pods/http:proxy-service-8qsrv-zrbxj:160/proxy/: foo (200; 14.334184ms) Jan 13 00:42:41.444: INFO: (9) /api/v1/namespaces/proxy-1012/pods/http:proxy-service-8qsrv-zrbxj:1080/proxy/: ... (200; 14.789551ms) Jan 13 00:42:41.444: INFO: (9) /api/v1/namespaces/proxy-1012/services/https:proxy-service-8qsrv:tlsportname2/proxy/: tls qux (200; 14.854128ms) Jan 13 00:42:41.445: INFO: (9) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:460/proxy/: tls baz (200; 15.341261ms) Jan 13 00:42:41.445: INFO: (9) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:443/proxy/: test (200; 3.494113ms) Jan 13 00:42:41.449: INFO: (10) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:160/proxy/: foo (200; 4.01875ms) Jan 13 00:42:41.449: INFO: (10) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:1080/proxy/: test<... (200; 3.921336ms) Jan 13 00:42:41.449: INFO: (10) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:460/proxy/: tls baz (200; 3.964507ms) Jan 13 00:42:41.449: INFO: (10) /api/v1/namespaces/proxy-1012/pods/http:proxy-service-8qsrv-zrbxj:1080/proxy/: ... (200; 4.010824ms) Jan 13 00:42:41.449: INFO: (10) /api/v1/namespaces/proxy-1012/pods/http:proxy-service-8qsrv-zrbxj:160/proxy/: foo (200; 4.266497ms) Jan 13 00:42:41.449: INFO: (10) /api/v1/namespaces/proxy-1012/pods/http:proxy-service-8qsrv-zrbxj:162/proxy/: bar (200; 4.30774ms) Jan 13 00:42:41.449: INFO: (10) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:443/proxy/: test<... 
(200; 3.456787ms) Jan 13 00:42:41.454: INFO: (11) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:462/proxy/: tls qux (200; 3.717449ms) Jan 13 00:42:41.454: INFO: (11) /api/v1/namespaces/proxy-1012/services/http:proxy-service-8qsrv:portname1/proxy/: foo (200; 3.935354ms) Jan 13 00:42:41.455: INFO: (11) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:162/proxy/: bar (200; 4.361434ms) Jan 13 00:42:41.455: INFO: (11) /api/v1/namespaces/proxy-1012/services/proxy-service-8qsrv:portname1/proxy/: foo (200; 4.472978ms) Jan 13 00:42:41.455: INFO: (11) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:460/proxy/: tls baz (200; 4.567087ms) Jan 13 00:42:41.455: INFO: (11) /api/v1/namespaces/proxy-1012/services/https:proxy-service-8qsrv:tlsportname1/proxy/: tls baz (200; 4.530477ms) Jan 13 00:42:41.455: INFO: (11) /api/v1/namespaces/proxy-1012/services/http:proxy-service-8qsrv:portname2/proxy/: bar (200; 4.570175ms) Jan 13 00:42:41.455: INFO: (11) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:160/proxy/: foo (200; 4.633785ms) Jan 13 00:42:41.455: INFO: (11) /api/v1/namespaces/proxy-1012/pods/http:proxy-service-8qsrv-zrbxj:162/proxy/: bar (200; 4.574237ms) Jan 13 00:42:41.455: INFO: (11) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj/proxy/: test (200; 4.648517ms) Jan 13 00:42:41.455: INFO: (11) /api/v1/namespaces/proxy-1012/services/proxy-service-8qsrv:portname2/proxy/: bar (200; 4.614265ms) Jan 13 00:42:41.455: INFO: (11) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:443/proxy/: ... (200; 5.072871ms) Jan 13 00:42:41.459: INFO: (12) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj/proxy/: test (200; 3.319837ms) Jan 13 00:42:41.459: INFO: (12) /api/v1/namespaces/proxy-1012/services/proxy-service-8qsrv:portname2/proxy/: bar (200; 3.473323ms) Jan 13 00:42:41.459: INFO: (12) /api/v1/namespaces/proxy-1012/services/https:proxy-service-8qsrv:tlsportname1/proxy/: tls baz (200; 3.522893ms) Jan 13 00:42:41.459: INFO: (12) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:1080/proxy/: test<... (200; 3.498143ms) Jan 13 00:42:41.459: INFO: (12) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:162/proxy/: bar (200; 3.54124ms) Jan 13 00:42:41.460: INFO: (12) /api/v1/namespaces/proxy-1012/services/http:proxy-service-8qsrv:portname2/proxy/: bar (200; 3.848449ms) Jan 13 00:42:41.460: INFO: (12) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:160/proxy/: foo (200; 3.978392ms) Jan 13 00:42:41.460: INFO: (12) /api/v1/namespaces/proxy-1012/services/http:proxy-service-8qsrv:portname1/proxy/: foo (200; 4.095574ms) Jan 13 00:42:41.460: INFO: (12) /api/v1/namespaces/proxy-1012/pods/http:proxy-service-8qsrv-zrbxj:162/proxy/: bar (200; 4.265754ms) Jan 13 00:42:41.460: INFO: (12) /api/v1/namespaces/proxy-1012/services/https:proxy-service-8qsrv:tlsportname2/proxy/: tls qux (200; 4.201098ms) Jan 13 00:42:41.460: INFO: (12) /api/v1/namespaces/proxy-1012/services/proxy-service-8qsrv:portname1/proxy/: foo (200; 4.24253ms) Jan 13 00:42:41.460: INFO: (12) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:460/proxy/: tls baz (200; 4.282665ms) Jan 13 00:42:41.460: INFO: (12) /api/v1/namespaces/proxy-1012/pods/http:proxy-service-8qsrv-zrbxj:1080/proxy/: ... 
(200; 4.468261ms) Jan 13 00:42:41.460: INFO: (12) /api/v1/namespaces/proxy-1012/pods/http:proxy-service-8qsrv-zrbxj:160/proxy/: foo (200; 4.571675ms) Jan 13 00:42:41.460: INFO: (12) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:462/proxy/: tls qux (200; 4.54893ms) Jan 13 00:42:41.460: INFO: (12) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:443/proxy/: test (200; 3.520629ms) Jan 13 00:42:41.464: INFO: (13) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:443/proxy/: ... (200; 3.627181ms) Jan 13 00:42:41.466: INFO: (13) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:162/proxy/: bar (200; 5.368785ms) Jan 13 00:42:41.466: INFO: (13) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:460/proxy/: tls baz (200; 5.701839ms) Jan 13 00:42:41.466: INFO: (13) /api/v1/namespaces/proxy-1012/services/proxy-service-8qsrv:portname1/proxy/: foo (200; 5.779148ms) Jan 13 00:42:41.466: INFO: (13) /api/v1/namespaces/proxy-1012/services/https:proxy-service-8qsrv:tlsportname1/proxy/: tls baz (200; 5.713361ms) Jan 13 00:42:41.466: INFO: (13) /api/v1/namespaces/proxy-1012/services/proxy-service-8qsrv:portname2/proxy/: bar (200; 5.74152ms) Jan 13 00:42:41.466: INFO: (13) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:1080/proxy/: test<... (200; 5.784288ms) Jan 13 00:42:41.466: INFO: (13) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:462/proxy/: tls qux (200; 5.783604ms) Jan 13 00:42:41.466: INFO: (13) /api/v1/namespaces/proxy-1012/pods/http:proxy-service-8qsrv-zrbxj:162/proxy/: bar (200; 5.788435ms) Jan 13 00:42:41.466: INFO: (13) /api/v1/namespaces/proxy-1012/services/https:proxy-service-8qsrv:tlsportname2/proxy/: tls qux (200; 5.822946ms) Jan 13 00:42:41.466: INFO: (13) /api/v1/namespaces/proxy-1012/services/http:proxy-service-8qsrv:portname2/proxy/: bar (200; 5.857496ms) Jan 13 00:42:41.466: INFO: (13) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:160/proxy/: foo (200; 5.869486ms) Jan 13 00:42:41.466: INFO: (13) /api/v1/namespaces/proxy-1012/services/http:proxy-service-8qsrv:portname1/proxy/: foo (200; 5.940707ms) Jan 13 00:42:41.466: INFO: (13) /api/v1/namespaces/proxy-1012/pods/http:proxy-service-8qsrv-zrbxj:160/proxy/: foo (200; 5.887233ms) Jan 13 00:42:41.470: INFO: (14) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:162/proxy/: bar (200; 3.783492ms) Jan 13 00:42:41.470: INFO: (14) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:460/proxy/: tls baz (200; 3.80455ms) Jan 13 00:42:41.471: INFO: (14) /api/v1/namespaces/proxy-1012/services/proxy-service-8qsrv:portname1/proxy/: foo (200; 4.201672ms) Jan 13 00:42:41.471: INFO: (14) /api/v1/namespaces/proxy-1012/services/https:proxy-service-8qsrv:tlsportname1/proxy/: tls baz (200; 4.176091ms) Jan 13 00:42:41.471: INFO: (14) /api/v1/namespaces/proxy-1012/services/proxy-service-8qsrv:portname2/proxy/: bar (200; 4.194958ms) Jan 13 00:42:41.471: INFO: (14) /api/v1/namespaces/proxy-1012/services/http:proxy-service-8qsrv:portname2/proxy/: bar (200; 4.198542ms) Jan 13 00:42:41.471: INFO: (14) /api/v1/namespaces/proxy-1012/pods/http:proxy-service-8qsrv-zrbxj:160/proxy/: foo (200; 4.252524ms) Jan 13 00:42:41.471: INFO: (14) /api/v1/namespaces/proxy-1012/services/https:proxy-service-8qsrv:tlsportname2/proxy/: tls qux (200; 4.20115ms) Jan 13 00:42:41.471: INFO: (14) /api/v1/namespaces/proxy-1012/services/http:proxy-service-8qsrv:portname1/proxy/: foo (200; 4.190839ms) Jan 13 00:42:41.471: INFO: (14) 
/api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj/proxy/: test (200; 4.586118ms) Jan 13 00:42:41.471: INFO: (14) /api/v1/namespaces/proxy-1012/pods/http:proxy-service-8qsrv-zrbxj:162/proxy/: bar (200; 4.64937ms) Jan 13 00:42:41.471: INFO: (14) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:1080/proxy/: test<... (200; 4.690186ms) Jan 13 00:42:41.471: INFO: (14) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:160/proxy/: foo (200; 4.659622ms) Jan 13 00:42:41.471: INFO: (14) /api/v1/namespaces/proxy-1012/pods/http:proxy-service-8qsrv-zrbxj:1080/proxy/: ... (200; 4.676848ms) Jan 13 00:42:41.471: INFO: (14) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:443/proxy/: test<... (200; 3.431165ms) Jan 13 00:42:41.475: INFO: (15) /api/v1/namespaces/proxy-1012/pods/http:proxy-service-8qsrv-zrbxj:1080/proxy/: ... (200; 3.420734ms) Jan 13 00:42:41.475: INFO: (15) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:162/proxy/: bar (200; 3.570694ms) Jan 13 00:42:41.475: INFO: (15) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:160/proxy/: foo (200; 3.552081ms) Jan 13 00:42:41.475: INFO: (15) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:462/proxy/: tls qux (200; 3.56973ms) Jan 13 00:42:41.475: INFO: (15) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj/proxy/: test (200; 3.667735ms) Jan 13 00:42:41.475: INFO: (15) /api/v1/namespaces/proxy-1012/pods/http:proxy-service-8qsrv-zrbxj:160/proxy/: foo (200; 3.989473ms) Jan 13 00:42:41.476: INFO: (15) /api/v1/namespaces/proxy-1012/services/http:proxy-service-8qsrv:portname1/proxy/: foo (200; 4.535996ms) Jan 13 00:42:41.476: INFO: (15) /api/v1/namespaces/proxy-1012/services/https:proxy-service-8qsrv:tlsportname1/proxy/: tls baz (200; 4.938735ms) Jan 13 00:42:41.476: INFO: (15) /api/v1/namespaces/proxy-1012/services/proxy-service-8qsrv:portname1/proxy/: foo (200; 4.932328ms) Jan 13 00:42:41.477: INFO: (15) /api/v1/namespaces/proxy-1012/services/http:proxy-service-8qsrv:portname2/proxy/: bar (200; 5.13328ms) Jan 13 00:42:41.477: INFO: (15) /api/v1/namespaces/proxy-1012/services/https:proxy-service-8qsrv:tlsportname2/proxy/: tls qux (200; 5.182064ms) Jan 13 00:42:41.477: INFO: (15) /api/v1/namespaces/proxy-1012/services/proxy-service-8qsrv:portname2/proxy/: bar (200; 5.242202ms) Jan 13 00:42:41.480: INFO: (16) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:1080/proxy/: test<... (200; 3.117827ms) Jan 13 00:42:41.480: INFO: (16) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:462/proxy/: tls qux (200; 3.203079ms) Jan 13 00:42:41.480: INFO: (16) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:443/proxy/: ... 
(200; 3.413524ms) Jan 13 00:42:41.480: INFO: (16) /api/v1/namespaces/proxy-1012/pods/http:proxy-service-8qsrv-zrbxj:160/proxy/: foo (200; 3.402592ms) Jan 13 00:42:41.481: INFO: (16) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:162/proxy/: bar (200; 4.019088ms) Jan 13 00:42:41.481: INFO: (16) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj/proxy/: test (200; 4.286377ms) Jan 13 00:42:41.481: INFO: (16) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:460/proxy/: tls baz (200; 4.182332ms) Jan 13 00:42:41.481: INFO: (16) /api/v1/namespaces/proxy-1012/services/http:proxy-service-8qsrv:portname1/proxy/: foo (200; 4.269631ms) Jan 13 00:42:41.481: INFO: (16) /api/v1/namespaces/proxy-1012/services/http:proxy-service-8qsrv:portname2/proxy/: bar (200; 4.373962ms) Jan 13 00:42:41.481: INFO: (16) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:160/proxy/: foo (200; 4.474417ms) Jan 13 00:42:41.481: INFO: (16) /api/v1/namespaces/proxy-1012/services/https:proxy-service-8qsrv:tlsportname2/proxy/: tls qux (200; 4.402259ms) Jan 13 00:42:41.481: INFO: (16) /api/v1/namespaces/proxy-1012/services/https:proxy-service-8qsrv:tlsportname1/proxy/: tls baz (200; 4.42628ms) Jan 13 00:42:41.481: INFO: (16) /api/v1/namespaces/proxy-1012/services/proxy-service-8qsrv:portname2/proxy/: bar (200; 4.463406ms) Jan 13 00:42:41.481: INFO: (16) /api/v1/namespaces/proxy-1012/services/proxy-service-8qsrv:portname1/proxy/: foo (200; 4.601661ms) Jan 13 00:42:41.485: INFO: (17) /api/v1/namespaces/proxy-1012/services/proxy-service-8qsrv:portname1/proxy/: foo (200; 3.806045ms) Jan 13 00:42:41.485: INFO: (17) /api/v1/namespaces/proxy-1012/services/http:proxy-service-8qsrv:portname2/proxy/: bar (200; 3.982664ms) Jan 13 00:42:41.485: INFO: (17) /api/v1/namespaces/proxy-1012/services/http:proxy-service-8qsrv:portname1/proxy/: foo (200; 4.056741ms) Jan 13 00:42:41.485: INFO: (17) /api/v1/namespaces/proxy-1012/services/https:proxy-service-8qsrv:tlsportname2/proxy/: tls qux (200; 4.033819ms) Jan 13 00:42:41.485: INFO: (17) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj/proxy/: test (200; 4.075744ms) Jan 13 00:42:41.485: INFO: (17) /api/v1/namespaces/proxy-1012/services/proxy-service-8qsrv:portname2/proxy/: bar (200; 4.039532ms) Jan 13 00:42:41.485: INFO: (17) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:460/proxy/: tls baz (200; 4.057466ms) Jan 13 00:42:41.485: INFO: (17) /api/v1/namespaces/proxy-1012/pods/http:proxy-service-8qsrv-zrbxj:1080/proxy/: ... (200; 4.13875ms) Jan 13 00:42:41.486: INFO: (17) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:162/proxy/: bar (200; 4.235565ms) Jan 13 00:42:41.486: INFO: (17) /api/v1/namespaces/proxy-1012/pods/http:proxy-service-8qsrv-zrbxj:160/proxy/: foo (200; 4.307356ms) Jan 13 00:42:41.486: INFO: (17) /api/v1/namespaces/proxy-1012/pods/http:proxy-service-8qsrv-zrbxj:162/proxy/: bar (200; 4.294853ms) Jan 13 00:42:41.486: INFO: (17) /api/v1/namespaces/proxy-1012/services/https:proxy-service-8qsrv:tlsportname1/proxy/: tls baz (200; 4.374699ms) Jan 13 00:42:41.486: INFO: (17) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:160/proxy/: foo (200; 4.405253ms) Jan 13 00:42:41.486: INFO: (17) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:1080/proxy/: test<... 
(200; 4.53811ms) Jan 13 00:42:41.486: INFO: (17) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:462/proxy/: tls qux (200; 4.635413ms) Jan 13 00:42:41.486: INFO: (17) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:443/proxy/: test (200; 3.737129ms) Jan 13 00:42:41.490: INFO: (18) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:1080/proxy/: test<... (200; 3.746381ms) Jan 13 00:42:41.490: INFO: (18) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:162/proxy/: bar (200; 3.741358ms) Jan 13 00:42:41.490: INFO: (18) /api/v1/namespaces/proxy-1012/pods/http:proxy-service-8qsrv-zrbxj:1080/proxy/: ... (200; 3.778516ms) Jan 13 00:42:41.490: INFO: (18) /api/v1/namespaces/proxy-1012/pods/http:proxy-service-8qsrv-zrbxj:162/proxy/: bar (200; 3.759704ms) Jan 13 00:42:41.490: INFO: (18) /api/v1/namespaces/proxy-1012/pods/http:proxy-service-8qsrv-zrbxj:160/proxy/: foo (200; 3.880563ms) Jan 13 00:42:41.490: INFO: (18) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:443/proxy/: test (200; 1.915494ms) Jan 13 00:42:41.493: INFO: (19) /api/v1/namespaces/proxy-1012/pods/http:proxy-service-8qsrv-zrbxj:1080/proxy/: ... (200; 2.875114ms) Jan 13 00:42:41.494: INFO: (19) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:460/proxy/: tls baz (200; 2.990578ms) Jan 13 00:42:41.494: INFO: (19) /api/v1/namespaces/proxy-1012/pods/http:proxy-service-8qsrv-zrbxj:162/proxy/: bar (200; 3.264ms) Jan 13 00:42:41.494: INFO: (19) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:462/proxy/: tls qux (200; 3.361793ms) Jan 13 00:42:41.494: INFO: (19) /api/v1/namespaces/proxy-1012/pods/proxy-service-8qsrv-zrbxj:1080/proxy/: test<... (200; 3.588072ms) Jan 13 00:42:41.494: INFO: (19) /api/v1/namespaces/proxy-1012/services/proxy-service-8qsrv:portname2/proxy/: bar (200; 3.80076ms) Jan 13 00:42:41.495: INFO: (19) /api/v1/namespaces/proxy-1012/pods/https:proxy-service-8qsrv-zrbxj:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Jan 13 00:42:59.957: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Registering the sample API server. 
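For context on what the Aggregator spec is doing at this point: registering the sample API server means creating an APIService object that tells kube-aggregator to proxy an API group/version to a Service in the test namespace. The sketch below builds such an object with the apiregistration.k8s.io/v1 types; the group, version, service reference and priorities are illustrative assumptions, not values taken from this run.

```go
package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	apiregistrationv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
)

func main() {
	port := int32(443)
	// Illustrative APIService: asks the aggregator to proxy
	// /apis/wardle.example.com/v1alpha1 to a Service in the test namespace.
	apiService := &apiregistrationv1.APIService{
		ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.example.com"},
		Spec: apiregistrationv1.APIServiceSpec{
			Group:   "wardle.example.com",
			Version: "v1alpha1",
			Service: &apiregistrationv1.ServiceReference{
				Namespace: "aggregator-1531", // namespace name borrowed from this run, purely for illustration
				Name:      "sample-api",
				Port:      &port,
			},
			InsecureSkipTLSVerify: true, // a production registration would set CABundle instead
			GroupPriorityMinimum:  2000,
			VersionPriority:       200,
		},
	}
	out, _ := json.MarshalIndent(apiService, "", "  ")
	fmt.Println(string(out))
}
```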
Jan 13 00:43:00.547: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Jan 13 00:43:03.063: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746095380, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746095380, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746095380, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746095380, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 13 00:43:05.933: INFO: Waited 833.761967ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:43:06.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-1531" for this suite. • [SLOW TEST:6.861 seconds] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":309,"completed":296,"skipped":5178,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:43:06.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test override arguments Jan 13 00:43:07.134: INFO: Waiting up to 5m0s for pod "client-containers-e1958ee8-aa31-475c-8c7d-5e126a29403b" in namespace "containers-6415" to be "Succeeded or Failed" Jan 13 00:43:07.350: INFO: Pod "client-containers-e1958ee8-aa31-475c-8c7d-5e126a29403b": 
Phase="Pending", Reason="", readiness=false. Elapsed: 215.599123ms Jan 13 00:43:09.354: INFO: Pod "client-containers-e1958ee8-aa31-475c-8c7d-5e126a29403b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219627856s Jan 13 00:43:11.359: INFO: Pod "client-containers-e1958ee8-aa31-475c-8c7d-5e126a29403b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.2243518s STEP: Saw pod success Jan 13 00:43:11.359: INFO: Pod "client-containers-e1958ee8-aa31-475c-8c7d-5e126a29403b" satisfied condition "Succeeded or Failed" Jan 13 00:43:11.362: INFO: Trying to get logs from node leguer-worker2 pod client-containers-e1958ee8-aa31-475c-8c7d-5e126a29403b container agnhost-container: STEP: delete the pod Jan 13 00:43:11.519: INFO: Waiting for pod client-containers-e1958ee8-aa31-475c-8c7d-5e126a29403b to disappear Jan 13 00:43:11.530: INFO: Pod client-containers-e1958ee8-aa31-475c-8c7d-5e126a29403b no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:43:11.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6415" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":309,"completed":297,"skipped":5178,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:43:11.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Jan 13 00:43:11.781: INFO: Pod name pod-release: Found 0 pods out of 1 Jan 13 00:43:16.828: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:43:17.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5499" for this suite. 
• [SLOW TEST:6.332 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":309,"completed":298,"skipped":5218,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:43:17.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod Jan 13 00:43:18.037: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:43:27.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3816" for this suite. 
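The InitContainer spec summarized below checks that, on a RestartNever pod, the init containers run to completion in order before the app container starts. A minimal pod of that shape using the core/v1 types; the images and commands are assumptions for illustration, not the test's own values.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// Init containers run sequentially and must all succeed
			// before the regular containers are started.
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox:1.29", Command: []string{"true"}},
				{Name: "init2", Image: "busybox:1.29", Command: []string{"true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "busybox:1.29", Command: []string{"echo", "done"}},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```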
• [SLOW TEST:9.376 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":309,"completed":299,"skipped":5220,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:43:27.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 13 00:43:27.870: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 13 00:43:29.881: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746095407, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746095407, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746095408, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746095407, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 13 00:43:32.933: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 00:43:32.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2444-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] 
AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:43:34.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6805" for this suite. STEP: Destroying namespace "webhook-6805-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:6.969 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":309,"completed":300,"skipped":5259,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:43:34.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0666 on node default medium Jan 13 00:43:34.281: INFO: Waiting up to 5m0s for pod "pod-b5224fa1-6b3f-498d-b6d5-6d6ed3e1d36f" in namespace "emptydir-2230" to be "Succeeded or Failed" Jan 13 00:43:34.331: INFO: Pod "pod-b5224fa1-6b3f-498d-b6d5-6d6ed3e1d36f": Phase="Pending", Reason="", readiness=false. Elapsed: 49.931589ms Jan 13 00:43:36.613: INFO: Pod "pod-b5224fa1-6b3f-498d-b6d5-6d6ed3e1d36f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.331502413s Jan 13 00:43:38.727: INFO: Pod "pod-b5224fa1-6b3f-498d-b6d5-6d6ed3e1d36f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.446087229s Jan 13 00:43:40.732: INFO: Pod "pod-b5224fa1-6b3f-498d-b6d5-6d6ed3e1d36f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.450757416s Jan 13 00:43:42.737: INFO: Pod "pod-b5224fa1-6b3f-498d-b6d5-6d6ed3e1d36f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.455248324s STEP: Saw pod success Jan 13 00:43:42.737: INFO: Pod "pod-b5224fa1-6b3f-498d-b6d5-6d6ed3e1d36f" satisfied condition "Succeeded or Failed" Jan 13 00:43:42.740: INFO: Trying to get logs from node leguer-worker pod pod-b5224fa1-6b3f-498d-b6d5-6d6ed3e1d36f container test-container: STEP: delete the pod Jan 13 00:43:42.830: INFO: Waiting for pod pod-b5224fa1-6b3f-498d-b6d5-6d6ed3e1d36f to disappear Jan 13 00:43:42.836: INFO: Pod pod-b5224fa1-6b3f-498d-b6d5-6d6ed3e1d36f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:43:42.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2230" for this suite. • [SLOW TEST:8.628 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":301,"skipped":5269,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:43:42.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0666 on node default medium Jan 13 00:43:42.969: INFO: Waiting up to 5m0s for pod "pod-b8b03384-4263-44f9-b02c-53bd699af26a" in namespace "emptydir-5625" to be "Succeeded or Failed" Jan 13 00:43:42.980: INFO: Pod "pod-b8b03384-4263-44f9-b02c-53bd699af26a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.192031ms Jan 13 00:43:45.082: INFO: Pod "pod-b8b03384-4263-44f9-b02c-53bd699af26a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112951717s Jan 13 00:43:47.087: INFO: Pod "pod-b8b03384-4263-44f9-b02c-53bd699af26a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.117953482s STEP: Saw pod success Jan 13 00:43:47.087: INFO: Pod "pod-b8b03384-4263-44f9-b02c-53bd699af26a" satisfied condition "Succeeded or Failed" Jan 13 00:43:47.091: INFO: Trying to get logs from node leguer-worker2 pod pod-b8b03384-4263-44f9-b02c-53bd699af26a container test-container: STEP: delete the pod Jan 13 00:43:47.136: INFO: Waiting for pod pod-b8b03384-4263-44f9-b02c-53bd699af26a to disappear Jan 13 00:43:47.154: INFO: Pod pod-b8b03384-4263-44f9-b02c-53bd699af26a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:43:47.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5625" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":302,"skipped":5270,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:43:47.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-test-volume-map-6defd0d3-ea67-46f6-8b7c-cfc53cd38d68 STEP: Creating a pod to test consume configMaps Jan 13 00:43:47.305: INFO: Waiting up to 5m0s for pod "pod-configmaps-c186d6c0-315b-450f-adc6-8f691fa52857" in namespace "configmap-9583" to be "Succeeded or Failed" Jan 13 00:43:47.309: INFO: Pod "pod-configmaps-c186d6c0-315b-450f-adc6-8f691fa52857": Phase="Pending", Reason="", readiness=false. Elapsed: 3.882028ms Jan 13 00:43:49.315: INFO: Pod "pod-configmaps-c186d6c0-315b-450f-adc6-8f691fa52857": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009456052s Jan 13 00:43:51.319: INFO: Pod "pod-configmaps-c186d6c0-315b-450f-adc6-8f691fa52857": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.014003663s STEP: Saw pod success Jan 13 00:43:51.319: INFO: Pod "pod-configmaps-c186d6c0-315b-450f-adc6-8f691fa52857" satisfied condition "Succeeded or Failed" Jan 13 00:43:51.323: INFO: Trying to get logs from node leguer-worker pod pod-configmaps-c186d6c0-315b-450f-adc6-8f691fa52857 container agnhost-container: STEP: delete the pod Jan 13 00:43:51.538: INFO: Waiting for pod pod-configmaps-c186d6c0-315b-450f-adc6-8f691fa52857 to disappear Jan 13 00:43:51.550: INFO: Pod pod-configmaps-c186d6c0-315b-450f-adc6-8f691fa52857 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:43:51.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9583" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":309,"completed":303,"skipped":5294,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} S ------------------------------ [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:43:51.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace [It] should support RuntimeClasses API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: getting /apis STEP: getting /apis/node.k8s.io STEP: getting /apis/node.k8s.io/v1 STEP: creating STEP: watching Jan 13 00:43:51.690: INFO: starting watch STEP: getting STEP: listing STEP: patching STEP: updating Jan 13 00:43:51.754: INFO: waiting for watch events with expected annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:43:51.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-2732" for this suite. 
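The RuntimeClass steps above walk through the node.k8s.io/v1 API verbs (create, get, list, watch, patch, update, delete, deleteCollection). As a rough illustration of the create/delete pair with client-go; the object name and handler below are assumptions, not the test's values.

```go
package main

import (
	"context"
	"fmt"

	nodev1 "k8s.io/api/node/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	rc := &nodev1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "example-runtimeclass"},
		Handler:    "runc", // handler names are node/runtime specific; illustrative only
	}

	created, err := clientset.NodeV1().RuntimeClasses().Create(context.TODO(), rc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created RuntimeClass:", created.Name)

	// RuntimeClasses are cluster-scoped, so no namespace is involved.
	if err := clientset.NodeV1().RuntimeClasses().Delete(context.TODO(), created.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("deleted RuntimeClass:", created.Name)
}
```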
•{"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":309,"completed":304,"skipped":5295,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:43:51.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod Jan 13 00:43:51.883: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:43:59.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7548" for this suite. • [SLOW TEST:7.561 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":309,"completed":305,"skipped":5306,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:43:59.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 00:43:59.510: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] 
CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:44:00.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8792" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":309,"completed":306,"skipped":5320,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] IngressClass API should support creating IngressClass API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:44:00.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingressclass STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:148 [It] should support creating IngressClass API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Jan 13 00:44:00.288: INFO: starting watch STEP: patching STEP: updating Jan 13 00:44:00.304: INFO: waiting for watch events with expected annotations Jan 13 00:44:00.304: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:44:00.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingressclass-9086" for this suite. 
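The IngressClass steps above exercise the same verb set against networking.k8s.io/v1 IngressClasses. A hedged sketch of the create step follows; the controller string and object name are assumptions, not values from this run.

```go
package main

import (
	"context"
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	ic := &networkingv1.IngressClass{
		ObjectMeta: metav1.ObjectMeta{Name: "example-ingressclass"},
		Spec: networkingv1.IngressClassSpec{
			// The controller value identifies which ingress controller should
			// honor this class; this string is illustrative.
			Controller: "example.com/ingress-controller",
		},
	}

	created, err := clientset.NetworkingV1().IngressClasses().Create(context.TODO(), ic, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created IngressClass:", created.Name)
}
```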
•{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":309,"completed":307,"skipped":5343,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 00:44:00.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod liveness-e67cc222-9d91-46be-ac81-58c74915e7a7 in namespace container-probe-5740 Jan 13 00:44:04.503: INFO: Started pod liveness-e67cc222-9d91-46be-ac81-58c74915e7a7 in namespace container-probe-5740 STEP: checking the pod's current state and verifying that restartCount is present Jan 13 00:44:04.508: INFO: Initial restart count of pod liveness-e67cc222-9d91-46be-ac81-58c74915e7a7 is 0 Jan 13 00:44:28.568: INFO: Restart count of pod container-probe-5740/liveness-e67cc222-9d91-46be-ac81-58c74915e7a7 is now 1 (24.059619394s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 00:44:28.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5740" for this suite. 
• [SLOW TEST:28.263 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":309,"completed":308,"skipped":5348,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} SSSSSSSSSSJan 13 00:44:28.638: INFO: Running AfterSuite actions on all nodes Jan 13 00:44:28.638: INFO: Running AfterSuite actions on node 1 Jan 13 00:44:28.638: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml {"msg":"Test Suite completed","total":309,"completed":308,"skipped":5358,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]"]} Summarizing 1 Failure: [Fail] [sig-scheduling] SchedulerPreemption [Serial] [It] validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:932 Ran 309 of 5667 Specs in 7975.448 seconds FAIL! -- 308 Passed | 1 Failed | 0 Pending | 5358 Skipped --- FAIL: TestE2E (7975.55s) FAIL
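The one failing spec in this run, the SchedulerPreemption check that a critical pod preempts lower-priority pods, revolves around pod priorities. For orientation only, a PriorityClass that preempting pods could reference might be defined as below; the name and value are assumptions, not the ones used by the failing test.

```go
package main

import (
	"encoding/json"
	"fmt"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pc := &schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "high-priority-example"},
		// Pods referencing this class via spec.priorityClassName can preempt
		// pods whose effective priority is lower when nodes are full.
		Value:         1000000,
		GlobalDefault: false,
		Description:   "illustrative high-priority class for preemption",
	}
	out, _ := json.MarshalIndent(pc, "", "  ")
	fmt.Println(string(out))
}
```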