I0308 10:38:19.394706 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0308 10:38:19.395008 6 e2e.go:109] Starting e2e run "dbb24fc6-c14c-431f-93aa-3acce1801c6d" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1583663897 - Will randomize all specs
Will run 278 of 4814 specs

Mar 8 10:38:19.462: INFO: >>> kubeConfig: /root/.kube/config
Mar 8 10:38:19.466: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 8 10:38:19.489: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 8 10:38:19.527: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 8 10:38:19.527: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 8 10:38:19.527: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 8 10:38:19.534: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Mar 8 10:38:19.534: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 8 10:38:19.534: INFO: e2e test version: v1.17.0
Mar 8 10:38:19.535: INFO: kube-apiserver version: v1.17.0
Mar 8 10:38:19.535: INFO: >>> kubeConfig: /root/.kube/config
Mar 8 10:38:19.539: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 10:38:19.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
Mar 8 10:38:19.592: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-696140b4-325f-46f9-807f-4aabb8a94c14
STEP: Creating secret with name s-test-opt-upd-c704cfcb-27bd-44b9-8087-a781d2e135ec
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-696140b4-325f-46f9-807f-4aabb8a94c14
STEP: Updating secret s-test-opt-upd-c704cfcb-27bd-44b9-8087-a781d2e135ec
STEP: Creating secret with name s-test-opt-create-20668408-3339-4abe-b390-8a3ad800d634
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 10:38:27.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5010" for this suite.
• [SLOW TEST:8.186 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":1,"skipped":62,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 10:38:27.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 8 10:38:28.344: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 8 10:38:31.378: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 10:38:31.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7225" for this suite.
STEP: Destroying namespace "webhook-7225-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":2,"skipped":65,"failed":0}
SSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 10:38:31.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-a441e853-2f6e-4099-bb84-129aaad01fff
STEP: Creating a pod to test consume configMaps
Mar 8 10:38:31.513: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ef5a13e7-8405-4dab-ac73-75e0cc82002f" in namespace "projected-9098" to be "success or failure"
Mar 8 10:38:31.534: INFO: Pod "pod-projected-configmaps-ef5a13e7-8405-4dab-ac73-75e0cc82002f": Phase="Pending", Reason="", readiness=false. Elapsed: 21.820055ms
Mar 8 10:38:33.539: INFO: Pod "pod-projected-configmaps-ef5a13e7-8405-4dab-ac73-75e0cc82002f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026094978s
Mar 8 10:38:35.542: INFO: Pod "pod-projected-configmaps-ef5a13e7-8405-4dab-ac73-75e0cc82002f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029868153s
STEP: Saw pod success
Mar 8 10:38:35.543: INFO: Pod "pod-projected-configmaps-ef5a13e7-8405-4dab-ac73-75e0cc82002f" satisfied condition "success or failure"
Mar 8 10:38:35.545: INFO: Trying to get logs from node kind-control-plane pod pod-projected-configmaps-ef5a13e7-8405-4dab-ac73-75e0cc82002f container projected-configmap-volume-test:
STEP: delete the pod
Mar 8 10:38:35.569: INFO: Waiting for pod pod-projected-configmaps-ef5a13e7-8405-4dab-ac73-75e0cc82002f to disappear
Mar 8 10:38:35.573: INFO: Pod pod-projected-configmaps-ef5a13e7-8405-4dab-ac73-75e0cc82002f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 10:38:35.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9098" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":3,"skipped":68,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:38:35.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-cb18dfcc-8e6f-458f-b57c-176d842474d6 STEP: Creating a pod to test consume secrets Mar 8 10:38:35.680: INFO: Waiting up to 5m0s for pod "pod-secrets-a313b1b4-c61d-4fd0-afd0-595ad9952010" in namespace "secrets-8383" to be "success or failure" Mar 8 10:38:35.690: INFO: Pod "pod-secrets-a313b1b4-c61d-4fd0-afd0-595ad9952010": Phase="Pending", Reason="", readiness=false. Elapsed: 10.315396ms Mar 8 10:38:37.694: INFO: Pod "pod-secrets-a313b1b4-c61d-4fd0-afd0-595ad9952010": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01429644s STEP: Saw pod success Mar 8 10:38:37.695: INFO: Pod "pod-secrets-a313b1b4-c61d-4fd0-afd0-595ad9952010" satisfied condition "success or failure" Mar 8 10:38:37.698: INFO: Trying to get logs from node kind-control-plane pod pod-secrets-a313b1b4-c61d-4fd0-afd0-595ad9952010 container secret-volume-test: STEP: delete the pod Mar 8 10:38:37.732: INFO: Waiting for pod pod-secrets-a313b1b4-c61d-4fd0-afd0-595ad9952010 to disappear Mar 8 10:38:37.735: INFO: Pod pod-secrets-a313b1b4-c61d-4fd0-afd0-595ad9952010 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:38:37.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8383" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":4,"skipped":108,"failed":0} SSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:38:37.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 8 10:38:37.806: INFO: Waiting up to 5m0s for pod "downward-api-c6c142d0-62af-4c71-8e83-d8b5cd90b228" in namespace "downward-api-5548" to be "success or failure" Mar 8 10:38:37.810: INFO: Pod "downward-api-c6c142d0-62af-4c71-8e83-d8b5cd90b228": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092652ms Mar 8 10:38:39.816: INFO: Pod "downward-api-c6c142d0-62af-4c71-8e83-d8b5cd90b228": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010121962s Mar 8 10:38:41.821: INFO: Pod "downward-api-c6c142d0-62af-4c71-8e83-d8b5cd90b228": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014144454s Mar 8 10:38:43.825: INFO: Pod "downward-api-c6c142d0-62af-4c71-8e83-d8b5cd90b228": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.018543308s STEP: Saw pod success Mar 8 10:38:43.825: INFO: Pod "downward-api-c6c142d0-62af-4c71-8e83-d8b5cd90b228" satisfied condition "success or failure" Mar 8 10:38:43.828: INFO: Trying to get logs from node kind-control-plane pod downward-api-c6c142d0-62af-4c71-8e83-d8b5cd90b228 container dapi-container: STEP: delete the pod Mar 8 10:38:43.851: INFO: Waiting for pod downward-api-c6c142d0-62af-4c71-8e83-d8b5cd90b228 to disappear Mar 8 10:38:43.855: INFO: Pod downward-api-c6c142d0-62af-4c71-8e83-d8b5cd90b228 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:38:43.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5548" for this suite. 
• [SLOW TEST:6.120 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":5,"skipped":117,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 10:38:43.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Mar 8 10:38:43.928: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ca2d8672-03db-4576-bc44-8aad58587e8b" in namespace "downward-api-7492" to be "success or failure"
Mar 8 10:38:43.947: INFO: Pod "downwardapi-volume-ca2d8672-03db-4576-bc44-8aad58587e8b": Phase="Pending", Reason="", readiness=false. Elapsed: 18.625802ms
Mar 8 10:38:45.951: INFO: Pod "downwardapi-volume-ca2d8672-03db-4576-bc44-8aad58587e8b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.022487123s
STEP: Saw pod success
Mar 8 10:38:45.951: INFO: Pod "downwardapi-volume-ca2d8672-03db-4576-bc44-8aad58587e8b" satisfied condition "success or failure"
Mar 8 10:38:45.954: INFO: Trying to get logs from node kind-control-plane pod downwardapi-volume-ca2d8672-03db-4576-bc44-8aad58587e8b container client-container:
STEP: delete the pod
Mar 8 10:38:45.970: INFO: Waiting for pod downwardapi-volume-ca2d8672-03db-4576-bc44-8aad58587e8b to disappear
Mar 8 10:38:45.975: INFO: Pod downwardapi-volume-ca2d8672-03db-4576-bc44-8aad58587e8b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 10:38:45.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7492" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":6,"skipped":142,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:38:45.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1877 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 8 10:38:46.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-326' Mar 8 10:38:47.909: INFO: stderr: "" Mar 8 10:38:47.909: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Mar 8 10:38:57.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-326 -o json' Mar 8 10:38:58.096: INFO: stderr: "" Mar 8 10:38:58.096: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-03-08T10:38:47Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-326\",\n \"resourceVersion\": \"6228\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-326/pods/e2e-test-httpd-pod\",\n \"uid\": \"11d631cf-e8ba-4dcf-9f7d-3f280226d58d\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-cm4tv\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"kind-control-plane\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-cm4tv\",\n \"secret\": {\n 
\"defaultMode\": 420,\n \"secretName\": \"default-token-cm4tv\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-08T10:38:47Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-08T10:38:56Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-08T10:38:56Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-08T10:38:47Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://8a1de4d8ce5a7728e3678aabd8d90ba6e5a5403844c1ed11d7ec8c7b08bc2ae7\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-03-08T10:38:55Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.2\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.0.50\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.0.50\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-03-08T10:38:47Z\"\n }\n}\n" STEP: replace the image in the pod Mar 8 10:38:58.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-326' Mar 8 10:38:58.468: INFO: stderr: "" Mar 8 10:38:58.468: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1882 Mar 8 10:38:58.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-326' Mar 8 10:39:09.475: INFO: stderr: "" Mar 8 10:39:09.475: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:39:09.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-326" for this suite. 
• [SLOW TEST:23.499 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1873
    should update a single-container pod's image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":7,"skipped":168,"failed":0}
[sig-apps] Deployment
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 10:39:09.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 8 10:39:09.525: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Mar 8 10:39:09.544: INFO: Pod name sample-pod: Found 0 pods out of 1
Mar 8 10:39:14.555: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Mar 8 10:39:14.555: INFO: Creating deployment "test-rolling-update-deployment"
Mar 8 10:39:14.583: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Mar 8 10:39:14.593: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Mar 8 10:39:16.600: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Mar 8 10:39:16.603: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719260754, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719260754, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719260754, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719260754, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 8 10:39:18.607: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Mar 8 10:39:18.616: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment deployment-7706 /apis/apps/v1/namespaces/deployment-7706/deployments/test-rolling-update-deployment 59335d49-b90d-4c48-ab21-7af064300ba8 6352 1 2020-03-08 10:39:14 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0029eedd8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-08 10:39:14 +0000 UTC,LastTransitionTime:2020-03-08 10:39:14 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-03-08 10:39:16 +0000 UTC,LastTransitionTime:2020-03-08 10:39:14 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 8 10:39:18.619: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-7706 /apis/apps/v1/namespaces/deployment-7706/replicasets/test-rolling-update-deployment-67cf4f6444 66f70976-eed4-4941-854c-9f746d659c71 6341 1 2020-03-08 10:39:14 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 59335d49-b90d-4c48-ab21-7af064300ba8 0xc002ae9437 0xc002ae9438}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002ae94a8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 8 10:39:18.619: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Mar 8 10:39:18.619: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-7706 /apis/apps/v1/namespaces/deployment-7706/replicasets/test-rolling-update-controller 6e8d5a80-9ca0-4c93-a621-18a2fa71137b 6350 2 2020-03-08 10:39:09 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 59335d49-b90d-4c48-ab21-7af064300ba8 0xc002ae9377 0xc002ae9378}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002ae93d8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 8 10:39:18.623: INFO: Pod "test-rolling-update-deployment-67cf4f6444-khchc" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-khchc test-rolling-update-deployment-67cf4f6444- deployment-7706 /api/v1/namespaces/deployment-7706/pods/test-rolling-update-deployment-67cf4f6444-khchc c4b6f909-471b-4e10-9d15-69a11943bbc2 6340 0 2020-03-08 10:39:14 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 66f70976-eed4-4941-854c-9f746d659c71 0xc002ae98f7 0xc002ae98f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4hjjz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4hjjz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4hjjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 10:39:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 10:39:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 10:39:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 10:39:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.2,PodIP:10.244.0.52,StartTime:2020-03-08 10:39:14 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-08 10:39:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://91b757d1ca7ada44dd3616c7bbddab1a4b4c67e0dd453636abeb0f72fbecd47c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.0.52,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:39:18.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7706" for this suite. • [SLOW TEST:9.147 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":8,"skipped":168,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:39:18.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 8 10:39:18.684: INFO: Waiting up to 5m0s for pod "pod-0744be48-e0e6-4990-ac72-c051caf07aab" in namespace "emptydir-3669" to be "success or failure" Mar 8 10:39:18.715: INFO: Pod "pod-0744be48-e0e6-4990-ac72-c051caf07aab": Phase="Pending", Reason="", readiness=false. Elapsed: 30.500991ms Mar 8 10:39:20.719: INFO: Pod "pod-0744be48-e0e6-4990-ac72-c051caf07aab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034223451s Mar 8 10:39:22.723: INFO: Pod "pod-0744be48-e0e6-4990-ac72-c051caf07aab": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.038348632s STEP: Saw pod success Mar 8 10:39:22.723: INFO: Pod "pod-0744be48-e0e6-4990-ac72-c051caf07aab" satisfied condition "success or failure" Mar 8 10:39:22.726: INFO: Trying to get logs from node kind-control-plane pod pod-0744be48-e0e6-4990-ac72-c051caf07aab container test-container: STEP: delete the pod Mar 8 10:39:22.758: INFO: Waiting for pod pod-0744be48-e0e6-4990-ac72-c051caf07aab to disappear Mar 8 10:39:22.766: INFO: Pod pod-0744be48-e0e6-4990-ac72-c051caf07aab no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:39:22.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3669" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":9,"skipped":171,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:39:22.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 8 10:39:22.836: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a260701d-0cad-4f7e-88e4-1cd3a98dbab8" in namespace "projected-2508" to be "success or failure" Mar 8 10:39:22.839: INFO: Pod "downwardapi-volume-a260701d-0cad-4f7e-88e4-1cd3a98dbab8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.91815ms Mar 8 10:39:24.843: INFO: Pod "downwardapi-volume-a260701d-0cad-4f7e-88e4-1cd3a98dbab8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007145525s STEP: Saw pod success Mar 8 10:39:24.843: INFO: Pod "downwardapi-volume-a260701d-0cad-4f7e-88e4-1cd3a98dbab8" satisfied condition "success or failure" Mar 8 10:39:24.846: INFO: Trying to get logs from node kind-control-plane pod downwardapi-volume-a260701d-0cad-4f7e-88e4-1cd3a98dbab8 container client-container: STEP: delete the pod Mar 8 10:39:24.882: INFO: Waiting for pod downwardapi-volume-a260701d-0cad-4f7e-88e4-1cd3a98dbab8 to disappear Mar 8 10:39:24.907: INFO: Pod downwardapi-volume-a260701d-0cad-4f7e-88e4-1cd3a98dbab8 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:39:24.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2508" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":10,"skipped":205,"failed":0} SSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:39:24.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5310.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5310.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 8 10:39:41.002: INFO: DNS probes using dns-5310/dns-test-3978b3d3-c033-47ed-a05e-da8987e75e85 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:39:41.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5310" for this suite. 
• [SLOW TEST:16.171 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":11,"skipped":210,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 10:39:41.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Mar 8 10:39:43.786: INFO: Successfully updated pod "labelsupdate2e963ddc-4dc6-43d8-a571-2829640a1b07"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 10:39:45.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4910" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":12,"skipped":231,"failed":0} ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:39:45.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 10:39:46.508: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 8 10:39:48.518: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719260786, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719260786, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719260786, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719260786, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 10:39:51.532: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 10:39:51.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5332-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:39:52.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2461" for this suite. STEP: Destroying namespace "webhook-2461-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.494 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":13,"skipped":231,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 10:39:52.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-5333
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-5333
STEP: Creating statefulset with conflicting port in namespace statefulset-5333
STEP: Waiting until pod test-pod will start running in namespace statefulset-5333
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-5333
Mar 8 10:39:54.518: INFO: Observed stateful pod in namespace: statefulset-5333, name: ss-0, uid: 2659d6d3-a2b2-4fe5-b567-355463db85f5, status phase: Pending. Waiting for statefulset controller to delete.
Mar 8 10:39:59.457: INFO: Observed stateful pod in namespace: statefulset-5333, name: ss-0, uid: 2659d6d3-a2b2-4fe5-b567-355463db85f5, status phase: Failed. Waiting for statefulset controller to delete.
Mar 8 10:39:59.470: INFO: Observed stateful pod in namespace: statefulset-5333, name: ss-0, uid: 2659d6d3-a2b2-4fe5-b567-355463db85f5, status phase: Failed. Waiting for statefulset controller to delete.
Mar 8 10:39:59.497: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-5333
STEP: Removing pod with conflicting port in namespace statefulset-5333
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-5333 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Mar 8 10:40:03.597: INFO: Deleting all statefulset in ns statefulset-5333
Mar 8 10:40:03.600: INFO: Scaling statefulset ss to 0
Mar 8 10:40:13.637: INFO: Waiting for statefulset status.replicas updated to 0
Mar 8 10:40:13.640: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 10:40:13.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5333" for this suite.
• [SLOW TEST:21.389 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":14,"skipped":252,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 10:40:13.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Mar 8 10:40:22.280: INFO: Successfully updated pod "pod-update-89a5e058-a010-47f6-8126-4b602d94d005"
STEP: verifying the updated pod is in kubernetes
Mar 8 10:40:22.291: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 10:40:22.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4785" for this suite.
• [SLOW TEST:8.600 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":15,"skipped":267,"failed":0}
[sig-storage] Downward API volume
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 10:40:22.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Mar 8 10:40:24.910: INFO: Successfully updated pod "annotationupdate280021cb-35b8-4bde-83ff-25ddfb5b2000"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 10:40:26.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8156" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":16,"skipped":267,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 10:40:26.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 8 10:40:27.012: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-13928088-762e-49f8-bec5-6bf9c0f73a5e" in namespace "security-context-test-7923" to be "success or failure"
Mar 8 10:40:27.015: INFO: Pod "busybox-readonly-false-13928088-762e-49f8-bec5-6bf9c0f73a5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.983188ms
Mar 8 10:40:29.027: INFO: Pod "busybox-readonly-false-13928088-762e-49f8-bec5-6bf9c0f73a5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.015744528s
Mar 8 10:40:29.027: INFO: Pod "busybox-readonly-false-13928088-762e-49f8-bec5-6bf9c0f73a5e" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 10:40:29.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7923" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":17,"skipped":277,"failed":0}
SSSSSS
------------------------------
[sig-network] Service endpoints latency
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 10:40:29.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 8 10:40:29.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-8126
I0308 10:40:29.103747 6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8126, replica count: 1
I0308 10:40:30.154181 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0308 10:40:31.154413 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Mar 8 10:40:31.276: INFO: Created: latency-svc-86khg
Mar 8 10:40:31.283: INFO: Got endpoints: latency-svc-86khg [28.453446ms]
Mar 8 10:40:31.306: INFO: Created: latency-svc-9vz7x
Mar 8 10:40:31.312: INFO: Got endpoints: latency-svc-9vz7x [29.079629ms]
Mar 8 10:40:31.330: INFO: Created: latency-svc-x6ht6
Mar 8 10:40:31.336: INFO: Got endpoints: latency-svc-x6ht6 [52.850817ms]
Mar 8 10:40:31.375: INFO: Created: latency-svc-wjjhp
Mar 8 10:40:31.396: INFO: Created: latency-svc-pfb4b
Mar 8 10:40:31.396: INFO: Got endpoints: latency-svc-wjjhp [113.445869ms]
Mar 8 10:40:31.414: INFO: Got endpoints: latency-svc-pfb4b [130.98652ms]
Mar 8 10:40:31.426: INFO: Created: latency-svc-bvzmv
Mar 8 10:40:31.444: INFO: Created: latency-svc-m4mx8
Mar 8 10:40:31.444: INFO: Got endpoints: latency-svc-bvzmv [161.18647ms]
Mar 8 10:40:31.462: INFO: Created: latency-svc-8t544
Mar 8 10:40:31.462: INFO: Got endpoints: latency-svc-m4mx8 [179.501266ms]
Mar 8 10:40:31.525: INFO: Created: latency-svc-g9dss
Mar 8 10:40:31.525: INFO: Got endpoints: latency-svc-8t544 [242.080552ms]
Mar 8 10:40:31.552: INFO: Got endpoints: latency-svc-g9dss [268.833952ms]
Mar 8 10:40:31.553: INFO: Created: latency-svc-v2vv7
Mar 8 10:40:31.559: INFO: Got endpoints: latency-svc-v2vv7 [276.301232ms]
Mar 8 10:40:31.583: INFO: Created: latency-svc-v2ld2
Mar 8 10:40:31.593: INFO: Got endpoints: latency-svc-v2ld2 [310.329354ms]
Mar 8 10:40:31.612: INFO: Created: latency-svc-9784k
Mar 8 10:40:31.623: INFO: Got endpoints:
latency-svc-9784k [340.429197ms] Mar 8 10:40:31.656: INFO: Created: latency-svc-hv9lc Mar 8 10:40:31.665: INFO: Got endpoints: latency-svc-hv9lc [382.80073ms] Mar 8 10:40:31.708: INFO: Created: latency-svc-4gfvz Mar 8 10:40:31.723: INFO: Got endpoints: latency-svc-4gfvz [439.909728ms] Mar 8 10:40:31.744: INFO: Created: latency-svc-8vfv7 Mar 8 10:40:31.749: INFO: Got endpoints: latency-svc-8vfv7 [466.401631ms] Mar 8 10:40:31.812: INFO: Created: latency-svc-nrhg4 Mar 8 10:40:31.835: INFO: Created: latency-svc-kwdsv Mar 8 10:40:31.835: INFO: Got endpoints: latency-svc-nrhg4 [552.079645ms] Mar 8 10:40:31.859: INFO: Got endpoints: latency-svc-kwdsv [546.757117ms] Mar 8 10:40:31.888: INFO: Created: latency-svc-thk94 Mar 8 10:40:31.895: INFO: Got endpoints: latency-svc-thk94 [559.091249ms] Mar 8 10:40:31.956: INFO: Created: latency-svc-t44r5 Mar 8 10:40:31.991: INFO: Created: latency-svc-fpm2r Mar 8 10:40:31.991: INFO: Got endpoints: latency-svc-t44r5 [594.146574ms] Mar 8 10:40:31.997: INFO: Got endpoints: latency-svc-fpm2r [582.852286ms] Mar 8 10:40:32.014: INFO: Created: latency-svc-kl9rz Mar 8 10:40:32.032: INFO: Got endpoints: latency-svc-kl9rz [588.117075ms] Mar 8 10:40:32.050: INFO: Created: latency-svc-scr85 Mar 8 10:40:32.094: INFO: Got endpoints: latency-svc-scr85 [631.270275ms] Mar 8 10:40:32.096: INFO: Created: latency-svc-kr5kx Mar 8 10:40:32.105: INFO: Got endpoints: latency-svc-kr5kx [579.629282ms] Mar 8 10:40:32.134: INFO: Created: latency-svc-k2vmt Mar 8 10:40:32.141: INFO: Got endpoints: latency-svc-k2vmt [589.00817ms] Mar 8 10:40:32.176: INFO: Created: latency-svc-8b8xr Mar 8 10:40:32.183: INFO: Got endpoints: latency-svc-8b8xr [623.423617ms] Mar 8 10:40:32.225: INFO: Created: latency-svc-8qx5z Mar 8 10:40:32.235: INFO: Got endpoints: latency-svc-8qx5z [641.472184ms] Mar 8 10:40:32.255: INFO: Created: latency-svc-d6c8g Mar 8 10:40:32.265: INFO: Got endpoints: latency-svc-d6c8g [641.474349ms] Mar 8 10:40:32.284: INFO: Created: latency-svc-57txj Mar 8 10:40:32.295: INFO: Got endpoints: latency-svc-57txj [629.150708ms] Mar 8 10:40:32.308: INFO: Created: latency-svc-t9zjt Mar 8 10:40:32.313: INFO: Got endpoints: latency-svc-t9zjt [590.108762ms] Mar 8 10:40:32.357: INFO: Created: latency-svc-nv22b Mar 8 10:40:32.381: INFO: Got endpoints: latency-svc-nv22b [631.826609ms] Mar 8 10:40:32.382: INFO: Created: latency-svc-lghr8 Mar 8 10:40:32.391: INFO: Got endpoints: latency-svc-lghr8 [555.555451ms] Mar 8 10:40:32.410: INFO: Created: latency-svc-69b84 Mar 8 10:40:32.421: INFO: Got endpoints: latency-svc-69b84 [562.107836ms] Mar 8 10:40:32.514: INFO: Created: latency-svc-2792t Mar 8 10:40:32.542: INFO: Got endpoints: latency-svc-2792t [647.360117ms] Mar 8 10:40:32.543: INFO: Created: latency-svc-4dgf5 Mar 8 10:40:32.548: INFO: Got endpoints: latency-svc-4dgf5 [557.538458ms] Mar 8 10:40:32.584: INFO: Created: latency-svc-82ssg Mar 8 10:40:32.590: INFO: Got endpoints: latency-svc-82ssg [593.500908ms] Mar 8 10:40:32.651: INFO: Created: latency-svc-zpr85 Mar 8 10:40:32.675: INFO: Got endpoints: latency-svc-zpr85 [642.429936ms] Mar 8 10:40:32.677: INFO: Created: latency-svc-xn6rn Mar 8 10:40:32.686: INFO: Got endpoints: latency-svc-xn6rn [592.19193ms] Mar 8 10:40:32.717: INFO: Created: latency-svc-wn95g Mar 8 10:40:32.728: INFO: Got endpoints: latency-svc-wn95g [623.240103ms] Mar 8 10:40:32.747: INFO: Created: latency-svc-kfz55 Mar 8 10:40:32.794: INFO: Got endpoints: latency-svc-kfz55 [653.491353ms] Mar 8 10:40:32.813: INFO: Created: latency-svc-m5fgz Mar 8 10:40:32.816: INFO: Got endpoints: 
latency-svc-m5fgz [633.241626ms] Mar 8 10:40:32.836: INFO: Created: latency-svc-zcmp8 Mar 8 10:40:32.841: INFO: Got endpoints: latency-svc-zcmp8 [605.704972ms] Mar 8 10:40:32.855: INFO: Created: latency-svc-4dgc8 Mar 8 10:40:32.858: INFO: Got endpoints: latency-svc-4dgc8 [593.373048ms] Mar 8 10:40:32.879: INFO: Created: latency-svc-j6w4h Mar 8 10:40:32.888: INFO: Got endpoints: latency-svc-j6w4h [593.378876ms] Mar 8 10:40:32.957: INFO: Created: latency-svc-8j8hd Mar 8 10:40:32.993: INFO: Got endpoints: latency-svc-8j8hd [679.494176ms] Mar 8 10:40:32.994: INFO: Created: latency-svc-wwnnh Mar 8 10:40:33.002: INFO: Got endpoints: latency-svc-wwnnh [620.956029ms] Mar 8 10:40:33.047: INFO: Created: latency-svc-f9blm Mar 8 10:40:33.081: INFO: Got endpoints: latency-svc-f9blm [690.547656ms] Mar 8 10:40:33.107: INFO: Created: latency-svc-cgj65 Mar 8 10:40:33.118: INFO: Got endpoints: latency-svc-cgj65 [696.790518ms] Mar 8 10:40:33.138: INFO: Created: latency-svc-c722r Mar 8 10:40:33.141: INFO: Got endpoints: latency-svc-c722r [599.201805ms] Mar 8 10:40:33.162: INFO: Created: latency-svc-vckmk Mar 8 10:40:33.173: INFO: Got endpoints: latency-svc-vckmk [624.860913ms] Mar 8 10:40:33.220: INFO: Created: latency-svc-6knlk Mar 8 10:40:33.317: INFO: Got endpoints: latency-svc-6knlk [727.085619ms] Mar 8 10:40:33.318: INFO: Created: latency-svc-gxgp6 Mar 8 10:40:33.472: INFO: Got endpoints: latency-svc-gxgp6 [796.773237ms] Mar 8 10:40:33.527: INFO: Created: latency-svc-g4xm5 Mar 8 10:40:33.551: INFO: Got endpoints: latency-svc-g4xm5 [864.988339ms] Mar 8 10:40:33.663: INFO: Created: latency-svc-8cf48 Mar 8 10:40:33.673: INFO: Got endpoints: latency-svc-8cf48 [944.787854ms] Mar 8 10:40:33.695: INFO: Created: latency-svc-lznhp Mar 8 10:40:33.704: INFO: Got endpoints: latency-svc-lznhp [909.270844ms] Mar 8 10:40:33.755: INFO: Created: latency-svc-l4trn Mar 8 10:40:33.806: INFO: Got endpoints: latency-svc-l4trn [990.413454ms] Mar 8 10:40:33.827: INFO: Created: latency-svc-7c4d6 Mar 8 10:40:33.851: INFO: Got endpoints: latency-svc-7c4d6 [1.010139898s] Mar 8 10:40:33.851: INFO: Created: latency-svc-h25s4 Mar 8 10:40:33.859: INFO: Got endpoints: latency-svc-h25s4 [1.000517378s] Mar 8 10:40:33.887: INFO: Created: latency-svc-zz8k7 Mar 8 10:40:33.895: INFO: Got endpoints: latency-svc-zz8k7 [1.00648967s] Mar 8 10:40:33.952: INFO: Created: latency-svc-22cf5 Mar 8 10:40:33.961: INFO: Got endpoints: latency-svc-22cf5 [968.168494ms] Mar 8 10:40:33.983: INFO: Created: latency-svc-nm4qg Mar 8 10:40:33.992: INFO: Got endpoints: latency-svc-nm4qg [989.737358ms] Mar 8 10:40:34.019: INFO: Created: latency-svc-5b289 Mar 8 10:40:34.042: INFO: Got endpoints: latency-svc-5b289 [960.842989ms] Mar 8 10:40:34.088: INFO: Created: latency-svc-slp78 Mar 8 10:40:34.109: INFO: Created: latency-svc-kj9ks Mar 8 10:40:34.109: INFO: Got endpoints: latency-svc-slp78 [991.875061ms] Mar 8 10:40:34.113: INFO: Got endpoints: latency-svc-kj9ks [971.417639ms] Mar 8 10:40:34.133: INFO: Created: latency-svc-tsdjp Mar 8 10:40:34.150: INFO: Got endpoints: latency-svc-tsdjp [977.218724ms] Mar 8 10:40:34.181: INFO: Created: latency-svc-qvmr2 Mar 8 10:40:34.219: INFO: Got endpoints: latency-svc-qvmr2 [901.933127ms] Mar 8 10:40:34.254: INFO: Created: latency-svc-8c2tm Mar 8 10:40:34.257: INFO: Got endpoints: latency-svc-8c2tm [785.689649ms] Mar 8 10:40:34.290: INFO: Created: latency-svc-dhmfr Mar 8 10:40:34.296: INFO: Got endpoints: latency-svc-dhmfr [745.298689ms] Mar 8 10:40:34.351: INFO: Created: latency-svc-dcg4r Mar 8 10:40:34.367: INFO: Got endpoints: 
latency-svc-dcg4r [694.313181ms] Mar 8 10:40:34.397: INFO: Created: latency-svc-85fvc Mar 8 10:40:34.404: INFO: Got endpoints: latency-svc-85fvc [700.626367ms] Mar 8 10:40:34.433: INFO: Created: latency-svc-kx5ph Mar 8 10:40:34.441: INFO: Got endpoints: latency-svc-kx5ph [634.051566ms] Mar 8 10:40:34.501: INFO: Created: latency-svc-6qs8s Mar 8 10:40:34.523: INFO: Created: latency-svc-6qmng Mar 8 10:40:34.523: INFO: Got endpoints: latency-svc-6qs8s [672.385674ms] Mar 8 10:40:34.535: INFO: Got endpoints: latency-svc-6qmng [676.334149ms] Mar 8 10:40:34.553: INFO: Created: latency-svc-h56wb Mar 8 10:40:34.589: INFO: Got endpoints: latency-svc-h56wb [694.731644ms] Mar 8 10:40:34.590: INFO: Created: latency-svc-gz9f5 Mar 8 10:40:34.645: INFO: Got endpoints: latency-svc-gz9f5 [683.762383ms] Mar 8 10:40:34.646: INFO: Created: latency-svc-xtn88 Mar 8 10:40:34.667: INFO: Got endpoints: latency-svc-xtn88 [674.823042ms] Mar 8 10:40:34.667: INFO: Created: latency-svc-kz4zl Mar 8 10:40:34.677: INFO: Got endpoints: latency-svc-kz4zl [634.22417ms] Mar 8 10:40:34.721: INFO: Created: latency-svc-5n7c4 Mar 8 10:40:34.736: INFO: Got endpoints: latency-svc-5n7c4 [626.085676ms] Mar 8 10:40:34.785: INFO: Created: latency-svc-52bt5 Mar 8 10:40:34.796: INFO: Got endpoints: latency-svc-52bt5 [682.796008ms] Mar 8 10:40:34.818: INFO: Created: latency-svc-6rwh9 Mar 8 10:40:34.829: INFO: Got endpoints: latency-svc-6rwh9 [678.835293ms] Mar 8 10:40:34.853: INFO: Created: latency-svc-28dm6 Mar 8 10:40:34.862: INFO: Got endpoints: latency-svc-28dm6 [642.297286ms] Mar 8 10:40:34.926: INFO: Created: latency-svc-8jj5t Mar 8 10:40:34.932: INFO: Got endpoints: latency-svc-8jj5t [674.278248ms] Mar 8 10:40:34.956: INFO: Created: latency-svc-7bw6l Mar 8 10:40:34.979: INFO: Got endpoints: latency-svc-7bw6l [682.991909ms] Mar 8 10:40:35.003: INFO: Created: latency-svc-n7gp7 Mar 8 10:40:35.010: INFO: Got endpoints: latency-svc-n7gp7 [642.701995ms] Mar 8 10:40:35.070: INFO: Created: latency-svc-x5vqd Mar 8 10:40:35.087: INFO: Created: latency-svc-86pr6 Mar 8 10:40:35.088: INFO: Got endpoints: latency-svc-x5vqd [683.951197ms] Mar 8 10:40:35.105: INFO: Got endpoints: latency-svc-86pr6 [664.83518ms] Mar 8 10:40:35.106: INFO: Created: latency-svc-5kcxc Mar 8 10:40:35.123: INFO: Got endpoints: latency-svc-5kcxc [600.113461ms] Mar 8 10:40:35.147: INFO: Created: latency-svc-r29nj Mar 8 10:40:35.153: INFO: Got endpoints: latency-svc-r29nj [65.08452ms] Mar 8 10:40:35.214: INFO: Created: latency-svc-8thtf Mar 8 10:40:35.237: INFO: Got endpoints: latency-svc-8thtf [701.284044ms] Mar 8 10:40:35.237: INFO: Created: latency-svc-f9kng Mar 8 10:40:35.245: INFO: Got endpoints: latency-svc-f9kng [655.478209ms] Mar 8 10:40:35.291: INFO: Created: latency-svc-lz259 Mar 8 10:40:35.299: INFO: Got endpoints: latency-svc-lz259 [654.445301ms] Mar 8 10:40:35.363: INFO: Created: latency-svc-mv6hx Mar 8 10:40:35.387: INFO: Created: latency-svc-fxpp9 Mar 8 10:40:35.388: INFO: Got endpoints: latency-svc-mv6hx [720.562762ms] Mar 8 10:40:35.399: INFO: Got endpoints: latency-svc-fxpp9 [722.726978ms] Mar 8 10:40:35.417: INFO: Created: latency-svc-g7gpw Mar 8 10:40:35.425: INFO: Got endpoints: latency-svc-g7gpw [689.23925ms] Mar 8 10:40:35.447: INFO: Created: latency-svc-6fzzw Mar 8 10:40:35.455: INFO: Got endpoints: latency-svc-6fzzw [659.00845ms] Mar 8 10:40:35.495: INFO: Created: latency-svc-zv895 Mar 8 10:40:35.501: INFO: Got endpoints: latency-svc-zv895 [671.473767ms] Mar 8 10:40:35.519: INFO: Created: latency-svc-wcv5p Mar 8 10:40:35.531: INFO: Got endpoints: 
latency-svc-wcv5p [669.490983ms] Mar 8 10:40:35.549: INFO: Created: latency-svc-l75zq Mar 8 10:40:35.555: INFO: Got endpoints: latency-svc-l75zq [623.22288ms] Mar 8 10:40:35.579: INFO: Created: latency-svc-nbv4r Mar 8 10:40:35.585: INFO: Got endpoints: latency-svc-nbv4r [605.470196ms] Mar 8 10:40:35.671: INFO: Created: latency-svc-vrhf9 Mar 8 10:40:35.681: INFO: Got endpoints: latency-svc-vrhf9 [671.278947ms] Mar 8 10:40:35.736: INFO: Created: latency-svc-vj8s7 Mar 8 10:40:35.747: INFO: Got endpoints: latency-svc-vj8s7 [641.892581ms] Mar 8 10:40:35.824: INFO: Created: latency-svc-7mnt6 Mar 8 10:40:35.844: INFO: Got endpoints: latency-svc-7mnt6 [720.608087ms] Mar 8 10:40:35.886: INFO: Created: latency-svc-g7xxq Mar 8 10:40:35.899: INFO: Got endpoints: latency-svc-g7xxq [745.194276ms] Mar 8 10:40:35.922: INFO: Created: latency-svc-hb2q5 Mar 8 10:40:35.980: INFO: Got endpoints: latency-svc-hb2q5 [742.953244ms] Mar 8 10:40:35.981: INFO: Created: latency-svc-lpqc9 Mar 8 10:40:36.001: INFO: Got endpoints: latency-svc-lpqc9 [756.088577ms] Mar 8 10:40:36.037: INFO: Created: latency-svc-lsr5j Mar 8 10:40:36.054: INFO: Got endpoints: latency-svc-lsr5j [754.814134ms] Mar 8 10:40:36.130: INFO: Created: latency-svc-kn8xc Mar 8 10:40:36.157: INFO: Created: latency-svc-g42lz Mar 8 10:40:36.157: INFO: Got endpoints: latency-svc-kn8xc [768.961191ms] Mar 8 10:40:36.180: INFO: Got endpoints: latency-svc-g42lz [780.637671ms] Mar 8 10:40:36.216: INFO: Created: latency-svc-7zvq2 Mar 8 10:40:36.228: INFO: Got endpoints: latency-svc-7zvq2 [802.996296ms] Mar 8 10:40:36.285: INFO: Created: latency-svc-tb8mv Mar 8 10:40:36.292: INFO: Got endpoints: latency-svc-tb8mv [837.130465ms] Mar 8 10:40:36.324: INFO: Created: latency-svc-b4xmp Mar 8 10:40:36.334: INFO: Got endpoints: latency-svc-b4xmp [832.835266ms] Mar 8 10:40:36.348: INFO: Created: latency-svc-qs7ws Mar 8 10:40:36.361: INFO: Got endpoints: latency-svc-qs7ws [829.8435ms] Mar 8 10:40:36.441: INFO: Created: latency-svc-scmdm Mar 8 10:40:36.462: INFO: Got endpoints: latency-svc-scmdm [907.281915ms] Mar 8 10:40:36.464: INFO: Created: latency-svc-7898d Mar 8 10:40:36.481: INFO: Got endpoints: latency-svc-7898d [896.211362ms] Mar 8 10:40:36.481: INFO: Created: latency-svc-gb7hf Mar 8 10:40:36.502: INFO: Got endpoints: latency-svc-gb7hf [820.787986ms] Mar 8 10:40:36.596: INFO: Created: latency-svc-bf498 Mar 8 10:40:36.604: INFO: Got endpoints: latency-svc-bf498 [856.699983ms] Mar 8 10:40:36.638: INFO: Created: latency-svc-r2btw Mar 8 10:40:36.648: INFO: Got endpoints: latency-svc-r2btw [803.473998ms] Mar 8 10:40:36.674: INFO: Created: latency-svc-qrp4c Mar 8 10:40:36.683: INFO: Got endpoints: latency-svc-qrp4c [784.36098ms] Mar 8 10:40:36.740: INFO: Created: latency-svc-llk5t Mar 8 10:40:36.775: INFO: Got endpoints: latency-svc-llk5t [795.825094ms] Mar 8 10:40:36.776: INFO: Created: latency-svc-5sg57 Mar 8 10:40:36.785: INFO: Got endpoints: latency-svc-5sg57 [783.829291ms] Mar 8 10:40:36.806: INFO: Created: latency-svc-cvb77 Mar 8 10:40:36.816: INFO: Got endpoints: latency-svc-cvb77 [761.583677ms] Mar 8 10:40:36.836: INFO: Created: latency-svc-hpjrp Mar 8 10:40:36.896: INFO: Got endpoints: latency-svc-hpjrp [739.501694ms] Mar 8 10:40:36.899: INFO: Created: latency-svc-55wtt Mar 8 10:40:36.905: INFO: Got endpoints: latency-svc-55wtt [724.685145ms] Mar 8 10:40:36.944: INFO: Created: latency-svc-jdm75 Mar 8 10:40:36.957: INFO: Got endpoints: latency-svc-jdm75 [729.493855ms] Mar 8 10:40:36.986: INFO: Created: latency-svc-zh7nc Mar 8 10:40:37.052: INFO: Got endpoints: 
latency-svc-zh7nc [759.79968ms] Mar 8 10:40:37.076: INFO: Created: latency-svc-npsst Mar 8 10:40:37.083: INFO: Got endpoints: latency-svc-npsst [749.125351ms] Mar 8 10:40:37.125: INFO: Created: latency-svc-kvlz5 Mar 8 10:40:37.132: INFO: Got endpoints: latency-svc-kvlz5 [770.366752ms] Mar 8 10:40:37.196: INFO: Created: latency-svc-hgvmm Mar 8 10:40:37.203: INFO: Got endpoints: latency-svc-hgvmm [740.782047ms] Mar 8 10:40:37.232: INFO: Created: latency-svc-m7rg4 Mar 8 10:40:37.240: INFO: Got endpoints: latency-svc-m7rg4 [758.658779ms] Mar 8 10:40:37.267: INFO: Created: latency-svc-fpml4 Mar 8 10:40:37.270: INFO: Got endpoints: latency-svc-fpml4 [767.502239ms] Mar 8 10:40:37.346: INFO: Created: latency-svc-2p84b Mar 8 10:40:37.370: INFO: Got endpoints: latency-svc-2p84b [765.982438ms] Mar 8 10:40:37.371: INFO: Created: latency-svc-bwm6q Mar 8 10:40:37.406: INFO: Got endpoints: latency-svc-bwm6q [758.796349ms] Mar 8 10:40:37.437: INFO: Created: latency-svc-p597k Mar 8 10:40:37.444: INFO: Got endpoints: latency-svc-p597k [761.360718ms] Mar 8 10:40:37.483: INFO: Created: latency-svc-nftm4 Mar 8 10:40:37.503: INFO: Got endpoints: latency-svc-nftm4 [727.722905ms] Mar 8 10:40:37.504: INFO: Created: latency-svc-w8lfg Mar 8 10:40:37.516: INFO: Got endpoints: latency-svc-w8lfg [730.893507ms] Mar 8 10:40:37.533: INFO: Created: latency-svc-tv5dq Mar 8 10:40:37.540: INFO: Got endpoints: latency-svc-tv5dq [724.305822ms] Mar 8 10:40:37.563: INFO: Created: latency-svc-rr6xn Mar 8 10:40:37.570: INFO: Got endpoints: latency-svc-rr6xn [673.676157ms] Mar 8 10:40:37.645: INFO: Created: latency-svc-bvrdt Mar 8 10:40:37.653: INFO: Got endpoints: latency-svc-bvrdt [748.377485ms] Mar 8 10:40:37.678: INFO: Created: latency-svc-7r2nn Mar 8 10:40:37.689: INFO: Got endpoints: latency-svc-7r2nn [731.166847ms] Mar 8 10:40:37.713: INFO: Created: latency-svc-nqck7 Mar 8 10:40:37.724: INFO: Got endpoints: latency-svc-nqck7 [672.370379ms] Mar 8 10:40:37.742: INFO: Created: latency-svc-jfhhc Mar 8 10:40:37.782: INFO: Got endpoints: latency-svc-jfhhc [699.083198ms] Mar 8 10:40:37.785: INFO: Created: latency-svc-qlclj Mar 8 10:40:37.791: INFO: Got endpoints: latency-svc-qlclj [659.364329ms] Mar 8 10:40:37.809: INFO: Created: latency-svc-5jrl6 Mar 8 10:40:37.820: INFO: Got endpoints: latency-svc-5jrl6 [616.577921ms] Mar 8 10:40:37.845: INFO: Created: latency-svc-lh6mc Mar 8 10:40:37.864: INFO: Created: latency-svc-2wdkb Mar 8 10:40:37.864: INFO: Got endpoints: latency-svc-lh6mc [623.765733ms] Mar 8 10:40:37.926: INFO: Got endpoints: latency-svc-2wdkb [656.035737ms] Mar 8 10:40:37.927: INFO: Created: latency-svc-qrlvw Mar 8 10:40:37.930: INFO: Got endpoints: latency-svc-qrlvw [559.398566ms] Mar 8 10:40:37.953: INFO: Created: latency-svc-bbkhx Mar 8 10:40:37.960: INFO: Got endpoints: latency-svc-bbkhx [553.152664ms] Mar 8 10:40:37.983: INFO: Created: latency-svc-nvvpd Mar 8 10:40:38.001: INFO: Got endpoints: latency-svc-nvvpd [556.24876ms] Mar 8 10:40:38.013: INFO: Created: latency-svc-zzqmb Mar 8 10:40:38.063: INFO: Got endpoints: latency-svc-zzqmb [560.033233ms] Mar 8 10:40:38.064: INFO: Created: latency-svc-rpbt9 Mar 8 10:40:38.091: INFO: Got endpoints: latency-svc-rpbt9 [574.814785ms] Mar 8 10:40:38.092: INFO: Created: latency-svc-8fxsd Mar 8 10:40:38.109: INFO: Got endpoints: latency-svc-8fxsd [568.885308ms] Mar 8 10:40:38.149: INFO: Created: latency-svc-2pdh2 Mar 8 10:40:38.219: INFO: Got endpoints: latency-svc-2pdh2 [649.252ms] Mar 8 10:40:38.221: INFO: Created: latency-svc-tcffl Mar 8 10:40:38.227: INFO: Got endpoints: 
latency-svc-tcffl [574.238254ms] Mar 8 10:40:38.257: INFO: Created: latency-svc-9w4fm Mar 8 10:40:38.275: INFO: Got endpoints: latency-svc-9w4fm [586.05345ms] Mar 8 10:40:38.275: INFO: Created: latency-svc-s6jl5 Mar 8 10:40:38.293: INFO: Created: latency-svc-kpfx2 Mar 8 10:40:38.293: INFO: Got endpoints: latency-svc-s6jl5 [568.662199ms] Mar 8 10:40:38.317: INFO: Got endpoints: latency-svc-kpfx2 [535.371597ms] Mar 8 10:40:38.317: INFO: Created: latency-svc-sffnj Mar 8 10:40:38.363: INFO: Got endpoints: latency-svc-sffnj [572.037455ms] Mar 8 10:40:38.365: INFO: Created: latency-svc-fsp89 Mar 8 10:40:38.390: INFO: Got endpoints: latency-svc-fsp89 [569.754625ms] Mar 8 10:40:38.390: INFO: Created: latency-svc-r94qj Mar 8 10:40:38.413: INFO: Created: latency-svc-mlk98 Mar 8 10:40:38.413: INFO: Got endpoints: latency-svc-r94qj [549.285002ms] Mar 8 10:40:38.421: INFO: Got endpoints: latency-svc-mlk98 [495.162174ms] Mar 8 10:40:38.449: INFO: Created: latency-svc-888z6 Mar 8 10:40:38.457: INFO: Got endpoints: latency-svc-888z6 [527.584139ms] Mar 8 10:40:38.495: INFO: Created: latency-svc-2rv8h Mar 8 10:40:38.515: INFO: Got endpoints: latency-svc-2rv8h [555.322665ms] Mar 8 10:40:38.516: INFO: Created: latency-svc-9w89j Mar 8 10:40:38.523: INFO: Got endpoints: latency-svc-9w89j [522.453627ms] Mar 8 10:40:38.539: INFO: Created: latency-svc-nx5v6 Mar 8 10:40:38.563: INFO: Got endpoints: latency-svc-nx5v6 [500.007122ms] Mar 8 10:40:38.564: INFO: Created: latency-svc-6l5r6 Mar 8 10:40:38.571: INFO: Got endpoints: latency-svc-6l5r6 [479.954236ms] Mar 8 10:40:38.587: INFO: Created: latency-svc-vxc6x Mar 8 10:40:38.593: INFO: Got endpoints: latency-svc-vxc6x [484.230302ms] Mar 8 10:40:38.632: INFO: Created: latency-svc-l9lzx Mar 8 10:40:38.653: INFO: Got endpoints: latency-svc-l9lzx [433.425789ms] Mar 8 10:40:38.671: INFO: Created: latency-svc-t4fjt Mar 8 10:40:38.677: INFO: Got endpoints: latency-svc-t4fjt [449.939023ms] Mar 8 10:40:38.695: INFO: Created: latency-svc-clwqb Mar 8 10:40:38.713: INFO: Got endpoints: latency-svc-clwqb [438.457428ms] Mar 8 10:40:38.714: INFO: Created: latency-svc-hlbqc Mar 8 10:40:38.731: INFO: Got endpoints: latency-svc-hlbqc [438.127927ms] Mar 8 10:40:38.764: INFO: Created: latency-svc-h5qfl Mar 8 10:40:38.773: INFO: Got endpoints: latency-svc-h5qfl [455.687595ms] Mar 8 10:40:38.803: INFO: Created: latency-svc-qxrmg Mar 8 10:40:38.811: INFO: Got endpoints: latency-svc-qxrmg [447.625825ms] Mar 8 10:40:38.833: INFO: Created: latency-svc-h8n4l Mar 8 10:40:38.841: INFO: Got endpoints: latency-svc-h8n4l [450.918707ms] Mar 8 10:40:38.863: INFO: Created: latency-svc-gqngj Mar 8 10:40:38.908: INFO: Got endpoints: latency-svc-gqngj [494.929555ms] Mar 8 10:40:38.909: INFO: Created: latency-svc-qx8px Mar 8 10:40:38.919: INFO: Got endpoints: latency-svc-qx8px [497.99008ms] Mar 8 10:40:38.935: INFO: Created: latency-svc-7sj2j Mar 8 10:40:38.953: INFO: Got endpoints: latency-svc-7sj2j [495.954894ms] Mar 8 10:40:38.954: INFO: Created: latency-svc-6vfcm Mar 8 10:40:38.965: INFO: Got endpoints: latency-svc-6vfcm [450.258719ms] Mar 8 10:40:38.977: INFO: Created: latency-svc-t2l8c Mar 8 10:40:39.007: INFO: Got endpoints: latency-svc-t2l8c [483.71196ms] Mar 8 10:40:39.007: INFO: Created: latency-svc-jmpcd Mar 8 10:40:39.052: INFO: Got endpoints: latency-svc-jmpcd [488.216036ms] Mar 8 10:40:39.053: INFO: Created: latency-svc-s6dk7 Mar 8 10:40:39.061: INFO: Got endpoints: latency-svc-s6dk7 [490.628437ms] Mar 8 10:40:39.079: INFO: Created: latency-svc-k9h2d Mar 8 10:40:39.085: INFO: Got endpoints: 
latency-svc-k9h2d [491.758307ms] Mar 8 10:40:39.103: INFO: Created: latency-svc-jn4vf Mar 8 10:40:39.109: INFO: Got endpoints: latency-svc-jn4vf [456.388063ms] Mar 8 10:40:39.152: INFO: Created: latency-svc-7cvjc Mar 8 10:40:39.196: INFO: Got endpoints: latency-svc-7cvjc [518.309093ms] Mar 8 10:40:39.211: INFO: Created: latency-svc-rqlm7 Mar 8 10:40:39.217: INFO: Got endpoints: latency-svc-rqlm7 [503.954361ms] Mar 8 10:40:39.235: INFO: Created: latency-svc-hc7bv Mar 8 10:40:39.241: INFO: Got endpoints: latency-svc-hc7bv [510.288911ms] Mar 8 10:40:39.265: INFO: Created: latency-svc-sz2dp Mar 8 10:40:39.284: INFO: Got endpoints: latency-svc-sz2dp [510.351771ms] Mar 8 10:40:39.327: INFO: Created: latency-svc-85r62 Mar 8 10:40:39.355: INFO: Created: latency-svc-npl9b Mar 8 10:40:39.355: INFO: Got endpoints: latency-svc-85r62 [544.4253ms] Mar 8 10:40:39.362: INFO: Got endpoints: latency-svc-npl9b [521.393767ms] Mar 8 10:40:39.385: INFO: Created: latency-svc-pj87g Mar 8 10:40:39.409: INFO: Got endpoints: latency-svc-pj87g [500.892014ms] Mar 8 10:40:39.465: INFO: Created: latency-svc-lxww5 Mar 8 10:40:39.488: INFO: Created: latency-svc-rj8mr Mar 8 10:40:39.488: INFO: Got endpoints: latency-svc-lxww5 [569.235863ms] Mar 8 10:40:39.511: INFO: Got endpoints: latency-svc-rj8mr [558.148986ms] Mar 8 10:40:39.512: INFO: Created: latency-svc-xbggr Mar 8 10:40:39.519: INFO: Got endpoints: latency-svc-xbggr [553.392151ms] Mar 8 10:40:39.542: INFO: Created: latency-svc-jfwsw Mar 8 10:40:39.548: INFO: Got endpoints: latency-svc-jfwsw [540.807611ms] Mar 8 10:40:39.598: INFO: Created: latency-svc-wmptm Mar 8 10:40:39.620: INFO: Created: latency-svc-jtf4r Mar 8 10:40:39.620: INFO: Got endpoints: latency-svc-wmptm [568.127275ms] Mar 8 10:40:39.631: INFO: Got endpoints: latency-svc-jtf4r [569.283796ms] Mar 8 10:40:39.656: INFO: Created: latency-svc-7f4zl Mar 8 10:40:39.667: INFO: Got endpoints: latency-svc-7f4zl [581.637366ms] Mar 8 10:40:39.686: INFO: Created: latency-svc-ffd4k Mar 8 10:40:39.735: INFO: Got endpoints: latency-svc-ffd4k [625.356543ms] Mar 8 10:40:39.752: INFO: Created: latency-svc-b2n82 Mar 8 10:40:39.763: INFO: Got endpoints: latency-svc-b2n82 [567.459305ms] Mar 8 10:40:39.788: INFO: Created: latency-svc-2248g Mar 8 10:40:39.794: INFO: Got endpoints: latency-svc-2248g [576.757438ms] Mar 8 10:40:39.812: INFO: Created: latency-svc-j5gfp Mar 8 10:40:39.818: INFO: Got endpoints: latency-svc-j5gfp [576.586798ms] Mar 8 10:40:39.874: INFO: Created: latency-svc-cwrrx Mar 8 10:40:39.884: INFO: Got endpoints: latency-svc-cwrrx [600.855967ms] Mar 8 10:40:39.908: INFO: Created: latency-svc-cwbd9 Mar 8 10:40:39.914: INFO: Got endpoints: latency-svc-cwbd9 [558.54389ms] Mar 8 10:40:39.914: INFO: Latencies: [29.079629ms 52.850817ms 65.08452ms 113.445869ms 130.98652ms 161.18647ms 179.501266ms 242.080552ms 268.833952ms 276.301232ms 310.329354ms 340.429197ms 382.80073ms 433.425789ms 438.127927ms 438.457428ms 439.909728ms 447.625825ms 449.939023ms 450.258719ms 450.918707ms 455.687595ms 456.388063ms 466.401631ms 479.954236ms 483.71196ms 484.230302ms 488.216036ms 490.628437ms 491.758307ms 494.929555ms 495.162174ms 495.954894ms 497.99008ms 500.007122ms 500.892014ms 503.954361ms 510.288911ms 510.351771ms 518.309093ms 521.393767ms 522.453627ms 527.584139ms 535.371597ms 540.807611ms 544.4253ms 546.757117ms 549.285002ms 552.079645ms 553.152664ms 553.392151ms 555.322665ms 555.555451ms 556.24876ms 557.538458ms 558.148986ms 558.54389ms 559.091249ms 559.398566ms 560.033233ms 562.107836ms 567.459305ms 568.127275ms 568.662199ms 
568.885308ms 569.235863ms 569.283796ms 569.754625ms 572.037455ms 574.238254ms 574.814785ms 576.586798ms 576.757438ms 579.629282ms 581.637366ms 582.852286ms 586.05345ms 588.117075ms 589.00817ms 590.108762ms 592.19193ms 593.373048ms 593.378876ms 593.500908ms 594.146574ms 599.201805ms 600.113461ms 600.855967ms 605.470196ms 605.704972ms 616.577921ms 620.956029ms 623.22288ms 623.240103ms 623.423617ms 623.765733ms 624.860913ms 625.356543ms 626.085676ms 629.150708ms 631.270275ms 631.826609ms 633.241626ms 634.051566ms 634.22417ms 641.472184ms 641.474349ms 641.892581ms 642.297286ms 642.429936ms 642.701995ms 647.360117ms 649.252ms 653.491353ms 654.445301ms 655.478209ms 656.035737ms 659.00845ms 659.364329ms 664.83518ms 669.490983ms 671.278947ms 671.473767ms 672.370379ms 672.385674ms 673.676157ms 674.278248ms 674.823042ms 676.334149ms 678.835293ms 679.494176ms 682.796008ms 682.991909ms 683.762383ms 683.951197ms 689.23925ms 690.547656ms 694.313181ms 694.731644ms 696.790518ms 699.083198ms 700.626367ms 701.284044ms 720.562762ms 720.608087ms 722.726978ms 724.305822ms 724.685145ms 727.085619ms 727.722905ms 729.493855ms 730.893507ms 731.166847ms 739.501694ms 740.782047ms 742.953244ms 745.194276ms 745.298689ms 748.377485ms 749.125351ms 754.814134ms 756.088577ms 758.658779ms 758.796349ms 759.79968ms 761.360718ms 761.583677ms 765.982438ms 767.502239ms 768.961191ms 770.366752ms 780.637671ms 783.829291ms 784.36098ms 785.689649ms 795.825094ms 796.773237ms 802.996296ms 803.473998ms 820.787986ms 829.8435ms 832.835266ms 837.130465ms 856.699983ms 864.988339ms 896.211362ms 901.933127ms 907.281915ms 909.270844ms 944.787854ms 960.842989ms 968.168494ms 971.417639ms 977.218724ms 989.737358ms 990.413454ms 991.875061ms 1.000517378s 1.00648967s 1.010139898s] Mar 8 10:40:39.914: INFO: 50 %ile: 631.270275ms Mar 8 10:40:39.914: INFO: 90 %ile: 829.8435ms Mar 8 10:40:39.914: INFO: 99 %ile: 1.00648967s Mar 8 10:40:39.914: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:40:39.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-8126" for this suite. 
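Each Created/Got endpoints pair above measures the delay between creating a Service that selects the ready svc-latency-rc pod and observing its Endpoints object become populated; the suite then sorts the 200 samples and reports the 50th, 90th, and 99th percentiles. A single-sample manual analogue, with illustrative names rather than the generated latency-svc-* ones:

  # Back a service with one ready pod.
  kubectl run latency-probe --image=nginx --restart=Never --port=80
  kubectl expose pod latency-probe --port=80
  # Watch until an address appears under ENDPOINTS; the elapsed time is one sample.
  kubectl get endpoints latency-probe -w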
• [SLOW TEST:10.905 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":18,"skipped":283,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:40:39.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Mar 8 10:40:40.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6875' Mar 8 10:40:40.308: INFO: stderr: "" Mar 8 10:40:40.308: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 8 10:40:40.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6875' Mar 8 10:40:40.419: INFO: stderr: "" Mar 8 10:40:40.419: INFO: stdout: "update-demo-nautilus-5n7lb update-demo-nautilus-lxthc " Mar 8 10:40:40.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5n7lb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6875' Mar 8 10:40:40.513: INFO: stderr: "" Mar 8 10:40:40.513: INFO: stdout: "" Mar 8 10:40:40.513: INFO: update-demo-nautilus-5n7lb is created but not running Mar 8 10:40:45.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6875' Mar 8 10:40:45.649: INFO: stderr: "" Mar 8 10:40:45.649: INFO: stdout: "update-demo-nautilus-5n7lb update-demo-nautilus-lxthc " Mar 8 10:40:45.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5n7lb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6875' Mar 8 10:40:45.754: INFO: stderr: "" Mar 8 10:40:45.754: INFO: stdout: "true" Mar 8 10:40:45.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5n7lb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6875' Mar 8 10:40:45.831: INFO: stderr: "" Mar 8 10:40:45.831: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 8 10:40:45.831: INFO: validating pod update-demo-nautilus-5n7lb Mar 8 10:40:45.836: INFO: got data: { "image": "nautilus.jpg" } Mar 8 10:40:45.836: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 8 10:40:45.836: INFO: update-demo-nautilus-5n7lb is verified up and running Mar 8 10:40:45.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lxthc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6875' Mar 8 10:40:45.935: INFO: stderr: "" Mar 8 10:40:45.935: INFO: stdout: "true" Mar 8 10:40:45.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lxthc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6875' Mar 8 10:40:46.011: INFO: stderr: "" Mar 8 10:40:46.011: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 8 10:40:46.011: INFO: validating pod update-demo-nautilus-lxthc Mar 8 10:40:46.015: INFO: got data: { "image": "nautilus.jpg" } Mar 8 10:40:46.015: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 8 10:40:46.015: INFO: update-demo-nautilus-lxthc is verified up and running STEP: scaling down the replication controller Mar 8 10:40:46.017: INFO: scanned /root for discovery docs: Mar 8 10:40:46.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-6875' Mar 8 10:40:47.264: INFO: stderr: "" Mar 8 10:40:47.264: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Mar 8 10:40:47.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6875' Mar 8 10:40:47.375: INFO: stderr: "" Mar 8 10:40:47.375: INFO: stdout: "update-demo-nautilus-5n7lb update-demo-nautilus-lxthc " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 8 10:40:52.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6875' Mar 8 10:40:52.482: INFO: stderr: "" Mar 8 10:40:52.482: INFO: stdout: "update-demo-nautilus-5n7lb update-demo-nautilus-lxthc " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 8 10:40:57.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6875' Mar 8 10:40:57.614: INFO: stderr: "" Mar 8 10:40:57.614: INFO: stdout: "update-demo-nautilus-5n7lb update-demo-nautilus-lxthc " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 8 10:41:02.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6875' Mar 8 10:41:02.756: INFO: stderr: "" Mar 8 10:41:02.756: INFO: stdout: "update-demo-nautilus-lxthc " Mar 8 10:41:02.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lxthc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6875' Mar 8 10:41:02.873: INFO: stderr: "" Mar 8 10:41:02.873: INFO: stdout: "true" Mar 8 10:41:02.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lxthc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6875' Mar 8 10:41:02.986: INFO: stderr: "" Mar 8 10:41:02.986: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 8 10:41:02.986: INFO: validating pod update-demo-nautilus-lxthc Mar 8 10:41:02.989: INFO: got data: { "image": "nautilus.jpg" } Mar 8 10:41:02.990: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 8 10:41:02.990: INFO: update-demo-nautilus-lxthc is verified up and running STEP: scaling up the replication controller Mar 8 10:41:02.992: INFO: scanned /root for discovery docs: Mar 8 10:41:02.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-6875' Mar 8 10:41:04.120: INFO: stderr: "" Mar 8 10:41:04.120: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Mar 8 10:41:04.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6875' Mar 8 10:41:04.259: INFO: stderr: "" Mar 8 10:41:04.259: INFO: stdout: "update-demo-nautilus-jjqww update-demo-nautilus-lxthc " Mar 8 10:41:04.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jjqww -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6875' Mar 8 10:41:04.350: INFO: stderr: "" Mar 8 10:41:04.350: INFO: stdout: "" Mar 8 10:41:04.350: INFO: update-demo-nautilus-jjqww is created but not running Mar 8 10:41:09.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6875' Mar 8 10:41:09.490: INFO: stderr: "" Mar 8 10:41:09.490: INFO: stdout: "update-demo-nautilus-jjqww update-demo-nautilus-lxthc " Mar 8 10:41:09.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jjqww -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6875' Mar 8 10:41:09.596: INFO: stderr: "" Mar 8 10:41:09.596: INFO: stdout: "true" Mar 8 10:41:09.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jjqww -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6875' Mar 8 10:41:09.686: INFO: stderr: "" Mar 8 10:41:09.686: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 8 10:41:09.686: INFO: validating pod update-demo-nautilus-jjqww Mar 8 10:41:09.689: INFO: got data: { "image": "nautilus.jpg" } Mar 8 10:41:09.689: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 8 10:41:09.689: INFO: update-demo-nautilus-jjqww is verified up and running Mar 8 10:41:09.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lxthc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6875' Mar 8 10:41:09.802: INFO: stderr: "" Mar 8 10:41:09.802: INFO: stdout: "true" Mar 8 10:41:09.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lxthc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6875' Mar 8 10:41:09.905: INFO: stderr: "" Mar 8 10:41:09.905: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 8 10:41:09.905: INFO: validating pod update-demo-nautilus-lxthc Mar 8 10:41:09.909: INFO: got data: { "image": "nautilus.jpg" } Mar 8 10:41:09.909: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Mar 8 10:41:09.909: INFO: update-demo-nautilus-lxthc is verified up and running STEP: using delete to clean up resources Mar 8 10:41:09.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6875' Mar 8 10:41:10.013: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 8 10:41:10.013: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 8 10:41:10.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6875' Mar 8 10:41:10.131: INFO: stderr: "No resources found in kubectl-6875 namespace.\n" Mar 8 10:41:10.131: INFO: stdout: "" Mar 8 10:41:10.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6875 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 8 10:41:10.238: INFO: stderr: "" Mar 8 10:41:10.238: INFO: stdout: "update-demo-nautilus-jjqww\nupdate-demo-nautilus-lxthc\n" Mar 8 10:41:10.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6875' Mar 8 10:41:10.878: INFO: stderr: "No resources found in kubectl-6875 namespace.\n" Mar 8 10:41:10.878: INFO: stdout: "" Mar 8 10:41:10.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6875 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 8 10:41:10.992: INFO: stderr: "" Mar 8 10:41:10.992: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:41:10.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6875" for this suite. 
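The scale-down and scale-up phases above are driven entirely through kubectl. Condensed for manual replay, with <ns> standing in for the per-run namespace kubectl-6875, these are the same invocations the harness shells out to:

  # Scale the replication controller and wait up to 5 minutes for convergence.
  kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=<ns>
  # Poll the pods the RC still selects, as the test's wait loop does.
  kubectl get pods -l name=update-demo --namespace=<ns> \
    -o go-template='{{range .items}}{{.metadata.name}} {{end}}'
  # Clean up by force-deleting the resources from the original manifest on stdin.
  kubectl delete --grace-period=0 --force -f - --namespace=<ns>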
• [SLOW TEST:31.058 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":19,"skipped":287,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:41:10.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 8 10:41:11.121: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8ade0f1b-16bb-4526-99b1-0ea17b5fa3b3" in namespace "projected-4932" to be "success or failure" Mar 8 10:41:11.136: INFO: Pod "downwardapi-volume-8ade0f1b-16bb-4526-99b1-0ea17b5fa3b3": Phase="Pending", Reason="", readiness=false. Elapsed: 15.313992ms Mar 8 10:41:13.154: INFO: Pod "downwardapi-volume-8ade0f1b-16bb-4526-99b1-0ea17b5fa3b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.032734569s STEP: Saw pod success Mar 8 10:41:13.154: INFO: Pod "downwardapi-volume-8ade0f1b-16bb-4526-99b1-0ea17b5fa3b3" satisfied condition "success or failure" Mar 8 10:41:13.157: INFO: Trying to get logs from node kind-control-plane pod downwardapi-volume-8ade0f1b-16bb-4526-99b1-0ea17b5fa3b3 container client-container: STEP: delete the pod Mar 8 10:41:13.181: INFO: Waiting for pod downwardapi-volume-8ade0f1b-16bb-4526-99b1-0ea17b5fa3b3 to disappear Mar 8 10:41:13.198: INFO: Pod downwardapi-volume-8ade0f1b-16bb-4526-99b1-0ea17b5fa3b3 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:41:13.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4932" for this suite. 
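The downward API test above relies on resourceFieldRef falling back to the node's allocatable CPU when the container declares no CPU limit. Two kubectl probes make the moving parts visible; the schema path is a real API field, and kind-control-plane is the node named in this run:

  # The projected downward API field the test mounts into the volume.
  kubectl explain pod.spec.volumes.projected.sources.downwardAPI.items.resourceFieldRef
  # The allocatable CPU that becomes the default limit on this single-node cluster.
  kubectl get node kind-control-plane -o jsonpath='{.status.allocatable.cpu}'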
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":20,"skipped":293,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:41:13.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Mar 8 10:41:13.833: INFO: Pod name wrapped-volume-race-42a07961-3353-47b7-9153-53f11b86c89b: Found 0 pods out of 5 Mar 8 10:41:18.840: INFO: Pod name wrapped-volume-race-42a07961-3353-47b7-9153-53f11b86c89b: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-42a07961-3353-47b7-9153-53f11b86c89b in namespace emptydir-wrapper-8580, will wait for the garbage collector to delete the pods Mar 8 10:41:28.981: INFO: Deleting ReplicationController wrapped-volume-race-42a07961-3353-47b7-9153-53f11b86c89b took: 6.031694ms Mar 8 10:41:29.382: INFO: Terminating ReplicationController wrapped-volume-race-42a07961-3353-47b7-9153-53f11b86c89b pods took: 400.345764ms STEP: Creating RC which spawns configmap-volume pods Mar 8 10:41:35.223: INFO: Pod name wrapped-volume-race-904cfa0f-ea87-43f4-8590-99a3238762d5: Found 0 pods out of 5 Mar 8 10:41:40.230: INFO: Pod name wrapped-volume-race-904cfa0f-ea87-43f4-8590-99a3238762d5: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-904cfa0f-ea87-43f4-8590-99a3238762d5 in namespace emptydir-wrapper-8580, will wait for the garbage collector to delete the pods Mar 8 10:41:52.313: INFO: Deleting ReplicationController wrapped-volume-race-904cfa0f-ea87-43f4-8590-99a3238762d5 took: 10.113468ms Mar 8 10:41:52.413: INFO: Terminating ReplicationController wrapped-volume-race-904cfa0f-ea87-43f4-8590-99a3238762d5 pods took: 100.25556ms STEP: Creating RC which spawns configmap-volume pods Mar 8 10:41:57.451: INFO: Pod name wrapped-volume-race-fb376c33-6e7a-498a-a099-d038ac8f6778: Found 0 pods out of 5 Mar 8 10:42:02.458: INFO: Pod name wrapped-volume-race-fb376c33-6e7a-498a-a099-d038ac8f6778: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-fb376c33-6e7a-498a-a099-d038ac8f6778 in namespace emptydir-wrapper-8580, will wait for the garbage collector to delete the pods Mar 8 10:42:14.558: INFO: Deleting ReplicationController wrapped-volume-race-fb376c33-6e7a-498a-a099-d038ac8f6778 took: 7.336421ms Mar 8 10:42:14.859: INFO: Terminating ReplicationController wrapped-volume-race-fb376c33-6e7a-498a-a099-d038ac8f6778 pods took: 300.256995ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:42:21.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-8580" for this suite. • [SLOW TEST:68.067 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":21,"skipped":301,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:42:21.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota (validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:42:34.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4546" for this suite. • [SLOW TEST:13.154 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod.
[Conformance]","total":278,"completed":22,"skipped":304,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:42:34.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 8 10:42:34.511: INFO: Number of nodes with available pods: 0 Mar 8 10:42:34.511: INFO: Node kind-control-plane is running more than one daemon pod Mar 8 10:42:35.519: INFO: Number of nodes with available pods: 0 Mar 8 10:42:35.519: INFO: Node kind-control-plane is running more than one daemon pod Mar 8 10:42:36.519: INFO: Number of nodes with available pods: 1 Mar 8 10:42:36.519: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Mar 8 10:42:36.540: INFO: Number of nodes with available pods: 1 Mar 8 10:42:36.540: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4313, will wait for the garbage collector to delete the pods Mar 8 10:42:37.672: INFO: Deleting DaemonSet.extensions daemon-set took: 6.095854ms Mar 8 10:42:37.973: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.237117ms Mar 8 10:42:49.576: INFO: Number of nodes with available pods: 0 Mar 8 10:42:49.576: INFO: Number of running nodes: 0, number of available pods: 0 Mar 8 10:42:49.580: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4313/daemonsets","resourceVersion":"9617"},"items":null} Mar 8 10:42:49.583: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4313/pods","resourceVersion":"9617"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:42:49.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4313" for this suite. 
• [SLOW TEST:15.172 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":23,"skipped":312,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:42:49.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-1881 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1881 to expose endpoints map[] Mar 8 10:42:49.682: INFO: Get endpoints failed (13.803304ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Mar 8 10:42:50.690: INFO: successfully validated that service endpoint-test2 in namespace services-1881 exposes endpoints map[] (1.022201313s elapsed) STEP: Creating pod pod1 in namespace services-1881 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1881 to expose endpoints map[pod1:[80]] Mar 8 10:42:52.760: INFO: successfully validated that service endpoint-test2 in namespace services-1881 exposes endpoints map[pod1:[80]] (2.063017765s elapsed) STEP: Creating pod pod2 in namespace services-1881 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1881 to expose endpoints map[pod1:[80] pod2:[80]] Mar 8 10:42:54.840: INFO: successfully validated that service endpoint-test2 in namespace services-1881 exposes endpoints map[pod1:[80] pod2:[80]] (2.075337397s elapsed) STEP: Deleting pod pod1 in namespace services-1881 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1881 to expose endpoints map[pod2:[80]] Mar 8 10:42:54.873: INFO: successfully validated that service endpoint-test2 in namespace services-1881 exposes endpoints map[pod2:[80]] (29.577526ms elapsed) STEP: Deleting pod pod2 in namespace services-1881 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1881 to expose endpoints map[] Mar 8 10:42:54.893: INFO: successfully validated that service endpoint-test2 in namespace services-1881 exposes endpoints map[] (15.590908ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:42:54.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1881" for this suite. 
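The endpoint bookkeeping validated above is easy to observe directly: the endpoints of a selector-based service track the set of Running pods whose labels match. A rough equivalent, with illustrative names (kubectl create service clusterip sets the selector app=<name>):

kubectl create service clusterip endpoint-demo --tcp=80:80
kubectl run pod1 --image=nginx --restart=Never --labels="app=endpoint-demo"
kubectl get endpoints endpoint-demo   # pod1's IP is listed once the pod is Running
kubectl delete pod pod1
kubectl get endpoints endpoint-demo   # the address set shrinks back to empty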
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:5.317 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":24,"skipped":342,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:42:54.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-cc79a69c-effa-4fe4-bc79-d52aa95aa977 STEP: Creating a pod to test consume configMaps Mar 8 10:42:55.020: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5458ce80-5a18-4669-b48f-f66c905a296d" in namespace "projected-3040" to be "success or failure" Mar 8 10:42:55.031: INFO: Pod "pod-projected-configmaps-5458ce80-5a18-4669-b48f-f66c905a296d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.060224ms Mar 8 10:42:57.034: INFO: Pod "pod-projected-configmaps-5458ce80-5a18-4669-b48f-f66c905a296d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013962841s STEP: Saw pod success Mar 8 10:42:57.035: INFO: Pod "pod-projected-configmaps-5458ce80-5a18-4669-b48f-f66c905a296d" satisfied condition "success or failure" Mar 8 10:42:57.037: INFO: Trying to get logs from node kind-control-plane pod pod-projected-configmaps-5458ce80-5a18-4669-b48f-f66c905a296d container projected-configmap-volume-test: STEP: delete the pod Mar 8 10:42:57.068: INFO: Waiting for pod pod-projected-configmaps-5458ce80-5a18-4669-b48f-f66c905a296d to disappear Mar 8 10:42:57.073: INFO: Pod pod-projected-configmaps-5458ce80-5a18-4669-b48f-f66c905a296d no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:42:57.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3040" for this suite. 
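A pod may mount the same ConfigMap through several projected volumes at once, which is what the projected-configmap spec above asserts. A minimal sketch, with illustrative names:

kubectl create configmap multi-demo --from-literal=key=value
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-multi-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "cat /etc/vol1/key /etc/vol2/key"]
    volumeMounts:
    - name: vol1
      mountPath: /etc/vol1
    - name: vol2
      mountPath: /etc/vol2
  volumes:
  - name: vol1
    projected:
      sources:
      - configMap:
          name: multi-demo
  - name: vol2
    projected:
      sources:
      - configMap:
          name: multi-demo
EOF
kubectl logs projected-multi-demo   # prints the value twice, once per mount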
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":25,"skipped":353,"failed":0} SSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:42:57.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-5bd59b6e-3317-4903-9641-a6975d8cc209 in namespace container-probe-5402 Mar 8 10:42:59.196: INFO: Started pod liveness-5bd59b6e-3317-4903-9641-a6975d8cc209 in namespace container-probe-5402 STEP: checking the pod's current state and verifying that restartCount is present Mar 8 10:42:59.199: INFO: Initial restart count of pod liveness-5bd59b6e-3317-4903-9641-a6975d8cc209 is 0 Mar 8 10:43:13.226: INFO: Restart count of pod container-probe-5402/liveness-5bd59b6e-3317-4903-9641-a6975d8cc209 is now 1 (14.027665401s elapsed) Mar 8 10:43:33.276: INFO: Restart count of pod container-probe-5402/liveness-5bd59b6e-3317-4903-9641-a6975d8cc209 is now 2 (34.076862888s elapsed) Mar 8 10:43:53.312: INFO: Restart count of pod container-probe-5402/liveness-5bd59b6e-3317-4903-9641-a6975d8cc209 is now 3 (54.113651007s elapsed) Mar 8 10:44:13.504: INFO: Restart count of pod container-probe-5402/liveness-5bd59b6e-3317-4903-9641-a6975d8cc209 is now 4 (1m14.30561518s elapsed) Mar 8 10:45:13.672: INFO: Restart count of pod container-probe-5402/liveness-5bd59b6e-3317-4903-9641-a6975d8cc209 is now 5 (2m14.472751221s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:45:13.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5402" for this suite. 
• [SLOW TEST:136.618 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":26,"skipped":359,"failed":0} SSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:45:13.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 10:45:13.772: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Mar 8 10:45:15.828: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:45:16.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4910" for this suite. 
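The failure condition checked above surfaces on the ReplicationController's status when quota admission rejects a pod creation. Roughly, with illustrative names:

kubectl create quota condition-demo --hard=pods=2
kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-demo
spec:
  replicas: 3
  selector:
    app: condition-demo
  template:
    metadata:
      labels:
        app: condition-demo
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
EOF
kubectl get rc condition-demo -o jsonpath='{.status.conditions}'   # ReplicaFailure while over quota
kubectl scale rc condition-demo --replicas=2                       # fits the quota; the condition clears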
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":27,"skipped":363,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:45:16.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 10:45:17.029: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 8 10:45:20.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-420 create -f -' Mar 8 10:45:23.290: INFO: stderr: "" Mar 8 10:45:23.290: INFO: stdout: "e2e-test-crd-publish-openapi-9075-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 8 10:45:23.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-420 delete e2e-test-crd-publish-openapi-9075-crds test-cr' Mar 8 10:45:23.420: INFO: stderr: "" Mar 8 10:45:23.420: INFO: stdout: "e2e-test-crd-publish-openapi-9075-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Mar 8 10:45:23.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-420 apply -f -' Mar 8 10:45:23.711: INFO: stderr: "" Mar 8 10:45:23.711: INFO: stdout: "e2e-test-crd-publish-openapi-9075-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 8 10:45:23.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-420 delete e2e-test-crd-publish-openapi-9075-crds test-cr' Mar 8 10:45:23.848: INFO: stderr: "" Mar 8 10:45:23.849: INFO: stdout: "e2e-test-crd-publish-openapi-9075-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 8 10:45:23.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9075-crds' Mar 8 10:45:24.152: INFO: stderr: "" Mar 8 10:45:24.152: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9075-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. 
Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:45:26.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-420" for this suite. • [SLOW TEST:9.241 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":28,"skipped":367,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:45:26.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 8 10:45:26.321: INFO: Waiting up to 5m0s for pod "pod-e0793490-478a-41c9-a59f-c00e6577f9dd" in namespace "emptydir-2918" to be "success or failure" Mar 8 10:45:26.328: INFO: Pod "pod-e0793490-478a-41c9-a59f-c00e6577f9dd": Phase="Pending", Reason="", readiness=false. Elapsed: 7.573666ms Mar 8 10:45:28.332: INFO: Pod "pod-e0793490-478a-41c9-a59f-c00e6577f9dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011163435s STEP: Saw pod success Mar 8 10:45:28.332: INFO: Pod "pod-e0793490-478a-41c9-a59f-c00e6577f9dd" satisfied condition "success or failure" Mar 8 10:45:28.335: INFO: Trying to get logs from node kind-control-plane pod pod-e0793490-478a-41c9-a59f-c00e6577f9dd container test-container: STEP: delete the pod Mar 8 10:45:28.390: INFO: Waiting for pod pod-e0793490-478a-41c9-a59f-c00e6577f9dd to disappear Mar 8 10:45:28.400: INFO: Pod pod-e0793490-478a-41c9-a59f-c00e6577f9dd no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:45:28.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2918" for this suite. 
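The behaviour above hinges on x-kubernetes-preserve-unknown-fields: pruning is disabled for that subtree, so clients may send arbitrary properties, while the published OpenAPI still powers kubectl explain. A minimal sketch of such a CRD; the group and names are illustrative (the test generates random ones):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: waldos.demo.example.com
spec:
  group: demo.example.com
  scope: Namespaced
  names:
    plural: waldos
    singular: waldo
    kind: Waldo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            x-kubernetes-preserve-unknown-fields: true   # accept unknown properties under spec
EOF
kubectl explain waldos.spec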
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":29,"skipped":386,"failed":0} SSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:45:28.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 8 10:45:30.495: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:45:30.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-832" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":30,"skipped":390,"failed":0} S ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:45:30.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-d045df43-26e7-46d4-934b-32b3f37d3e9c STEP: Creating the pod STEP: Updating configmap configmap-test-upd-d045df43-26e7-46d4-934b-32b3f37d3e9c STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:47:05.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4588" for this suite. 
• [SLOW TEST:94.497 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":31,"skipped":391,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:47:05.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-rl7t STEP: Creating a pod to test atomic-volume-subpath Mar 8 10:47:05.119: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-rl7t" in namespace "subpath-2846" to be "success or failure" Mar 8 10:47:05.127: INFO: Pod "pod-subpath-test-configmap-rl7t": Phase="Pending", Reason="", readiness=false. Elapsed: 7.67279ms Mar 8 10:47:07.131: INFO: Pod "pod-subpath-test-configmap-rl7t": Phase="Running", Reason="", readiness=true. Elapsed: 2.011510767s Mar 8 10:47:09.135: INFO: Pod "pod-subpath-test-configmap-rl7t": Phase="Running", Reason="", readiness=true. Elapsed: 4.015336223s Mar 8 10:47:11.138: INFO: Pod "pod-subpath-test-configmap-rl7t": Phase="Running", Reason="", readiness=true. Elapsed: 6.019070224s Mar 8 10:47:13.142: INFO: Pod "pod-subpath-test-configmap-rl7t": Phase="Running", Reason="", readiness=true. Elapsed: 8.022353956s Mar 8 10:47:15.146: INFO: Pod "pod-subpath-test-configmap-rl7t": Phase="Running", Reason="", readiness=true. Elapsed: 10.026183154s Mar 8 10:47:17.149: INFO: Pod "pod-subpath-test-configmap-rl7t": Phase="Running", Reason="", readiness=true. Elapsed: 12.030030889s Mar 8 10:47:19.153: INFO: Pod "pod-subpath-test-configmap-rl7t": Phase="Running", Reason="", readiness=true. Elapsed: 14.033956798s Mar 8 10:47:21.158: INFO: Pod "pod-subpath-test-configmap-rl7t": Phase="Running", Reason="", readiness=true. Elapsed: 16.0383622s Mar 8 10:47:23.161: INFO: Pod "pod-subpath-test-configmap-rl7t": Phase="Running", Reason="", readiness=true. Elapsed: 18.042048434s Mar 8 10:47:25.166: INFO: Pod "pod-subpath-test-configmap-rl7t": Phase="Running", Reason="", readiness=true. Elapsed: 20.046143683s Mar 8 10:47:27.169: INFO: Pod "pod-subpath-test-configmap-rl7t": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.050070375s STEP: Saw pod success Mar 8 10:47:27.170: INFO: Pod "pod-subpath-test-configmap-rl7t" satisfied condition "success or failure" Mar 8 10:47:27.172: INFO: Trying to get logs from node kind-control-plane pod pod-subpath-test-configmap-rl7t container test-container-subpath-configmap-rl7t: STEP: delete the pod Mar 8 10:47:27.204: INFO: Waiting for pod pod-subpath-test-configmap-rl7t to disappear Mar 8 10:47:27.218: INFO: Pod pod-subpath-test-configmap-rl7t no longer exists STEP: Deleting pod pod-subpath-test-configmap-rl7t Mar 8 10:47:27.218: INFO: Deleting pod "pod-subpath-test-configmap-rl7t" in namespace "subpath-2846" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:47:27.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2846" for this suite. • [SLOW TEST:22.208 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":32,"skipped":413,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:47:27.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 10:47:27.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Mar 8 10:47:27.426: INFO: stderr: "" Mar 8 10:47:27.426: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:10:40Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:09:19Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:47:27.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6957" for this suite. 
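The Atomic writer volumes spec above mounts a single projected entry via subPath. One caveat worth knowing: unlike whole-volume mounts, subPath mounts do not pick up later ConfigMap updates. A minimal sketch, with illustrative names:

kubectl create configmap subpath-demo --from-literal=data=hello
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["cat", "/etc/demo/file"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/demo/file
      subPath: data          # mount one key as a file instead of the whole directory
  volumes:
  - name: cfg
    configMap:
      name: subpath-demo
EOF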
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":33,"skipped":422,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:47:27.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller Mar 8 10:47:27.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9089' Mar 8 10:47:27.844: INFO: stderr: "" Mar 8 10:47:27.844: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 8 10:47:27.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9089' Mar 8 10:47:27.979: INFO: stderr: "" Mar 8 10:47:27.979: INFO: stdout: "update-demo-nautilus-247hb update-demo-nautilus-9c4ch " Mar 8 10:47:27.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-247hb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9089' Mar 8 10:47:28.064: INFO: stderr: "" Mar 8 10:47:28.064: INFO: stdout: "" Mar 8 10:47:28.064: INFO: update-demo-nautilus-247hb is created but not running Mar 8 10:47:33.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9089' Mar 8 10:47:33.199: INFO: stderr: "" Mar 8 10:47:33.199: INFO: stdout: "update-demo-nautilus-247hb update-demo-nautilus-9c4ch " Mar 8 10:47:33.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-247hb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9089' Mar 8 10:47:33.317: INFO: stderr: "" Mar 8 10:47:33.317: INFO: stdout: "true" Mar 8 10:47:33.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-247hb -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9089' Mar 8 10:47:33.432: INFO: stderr: "" Mar 8 10:47:33.432: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 8 10:47:33.432: INFO: validating pod update-demo-nautilus-247hb Mar 8 10:47:33.436: INFO: got data: { "image": "nautilus.jpg" } Mar 8 10:47:33.436: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 8 10:47:33.436: INFO: update-demo-nautilus-247hb is verified up and running Mar 8 10:47:33.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9c4ch -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9089' Mar 8 10:47:33.539: INFO: stderr: "" Mar 8 10:47:33.539: INFO: stdout: "true" Mar 8 10:47:33.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9c4ch -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9089' Mar 8 10:47:33.637: INFO: stderr: "" Mar 8 10:47:33.637: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 8 10:47:33.637: INFO: validating pod update-demo-nautilus-9c4ch Mar 8 10:47:33.641: INFO: got data: { "image": "nautilus.jpg" } Mar 8 10:47:33.641: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 8 10:47:33.641: INFO: update-demo-nautilus-9c4ch is verified up and running STEP: rolling-update to new replication controller Mar 8 10:47:33.644: INFO: scanned /root for discovery docs: Mar 8 10:47:33.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-9089' Mar 8 10:47:56.307: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 8 10:47:56.307: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 8 10:47:56.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9089' Mar 8 10:47:56.436: INFO: stderr: "" Mar 8 10:47:56.436: INFO: stdout: "update-demo-kitten-shjcd update-demo-kitten-x8btx " Mar 8 10:47:56.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-shjcd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9089' Mar 8 10:47:56.556: INFO: stderr: "" Mar 8 10:47:56.557: INFO: stdout: "true" Mar 8 10:47:56.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-shjcd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9089' Mar 8 10:47:56.665: INFO: stderr: "" Mar 8 10:47:56.665: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 8 10:47:56.665: INFO: validating pod update-demo-kitten-shjcd Mar 8 10:47:56.670: INFO: got data: { "image": "kitten.jpg" } Mar 8 10:47:56.670: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 8 10:47:56.670: INFO: update-demo-kitten-shjcd is verified up and running Mar 8 10:47:56.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-x8btx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9089' Mar 8 10:47:56.783: INFO: stderr: "" Mar 8 10:47:56.784: INFO: stdout: "true" Mar 8 10:47:56.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-x8btx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9089' Mar 8 10:47:56.877: INFO: stderr: "" Mar 8 10:47:56.877: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 8 10:47:56.877: INFO: validating pod update-demo-kitten-x8btx Mar 8 10:47:56.881: INFO: got data: { "image": "kitten.jpg" } Mar 8 10:47:56.881: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 8 10:47:56.881: INFO: update-demo-kitten-x8btx is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:47:56.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9089" for this suite. 
• [SLOW TEST:29.453 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":34,"skipped":430,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:47:56.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:48:56.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4209" for this suite. 
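The contrast with the liveness spec earlier is the point here: a failing readiness probe only keeps the pod out of service endpoints; it never restarts the container, so restartCount stays 0 for the full minute the test watches. A sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: never-ready
spec:
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]   # never succeeds, so the pod never becomes Ready
      periodSeconds: 5
EOF
kubectl get pod never-ready   # READY 0/1, RESTARTS 0, STATUS Running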
• [SLOW TEST:60.067 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":35,"skipped":461,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:48:56.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 10:48:57.452: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 10:49:00.485: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:49:00.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8129" for this suite. STEP: Destroying namespace "webhook-8129-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":36,"skipped":465,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:49:00.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:49:17.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9181" for this suite. • [SLOW TEST:16.221 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":278,"completed":37,"skipped":469,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:49:17.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-6af71b6c-a45e-4802-b444-6614490d37d8 STEP: Creating a pod to test consume configMaps Mar 8 10:49:17.218: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-26773989-90d2-47dd-ba28-8653616765bc" in namespace "projected-8629" to be "success or failure" Mar 8 10:49:17.281: INFO: Pod "pod-projected-configmaps-26773989-90d2-47dd-ba28-8653616765bc": Phase="Pending", Reason="", readiness=false. Elapsed: 62.783075ms Mar 8 10:49:19.288: INFO: Pod "pod-projected-configmaps-26773989-90d2-47dd-ba28-8653616765bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.069923177s STEP: Saw pod success Mar 8 10:49:19.288: INFO: Pod "pod-projected-configmaps-26773989-90d2-47dd-ba28-8653616765bc" satisfied condition "success or failure" Mar 8 10:49:19.291: INFO: Trying to get logs from node kind-control-plane pod pod-projected-configmaps-26773989-90d2-47dd-ba28-8653616765bc container projected-configmap-volume-test: STEP: delete the pod Mar 8 10:49:19.333: INFO: Waiting for pod pod-projected-configmaps-26773989-90d2-47dd-ba28-8653616765bc to disappear Mar 8 10:49:19.335: INFO: Pod pod-projected-configmaps-26773989-90d2-47dd-ba28-8653616765bc no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:49:19.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8629" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":38,"skipped":488,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:49:19.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info Mar 8 10:49:19.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Mar 8 10:49:19.458: INFO: stderr: "" Mar 8 10:49:19.458: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:49:19.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7473" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":39,"skipped":505,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:49:19.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 8 10:49:21.569: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:49:21.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9393" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":522,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:49:21.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 10:49:22.642: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 8 10:49:24.681: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719261362, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719261362, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719261362, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719261362, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 10:49:27.724: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 10:49:27.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3799-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:49:29.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-826" for this suite. STEP: Destroying namespace "webhook-826-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.469 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":41,"skipped":550,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:49:29.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Mar 8 10:49:29.147: INFO: >>> kubeConfig: /root/.kube/config Mar 8 10:49:32.238: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:49:42.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9712" for this suite. 
• [SLOW TEST:13.667 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":42,"skipped":558,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:49:42.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-ee50da44-cb6d-4257-8110-273ea26b1914 STEP: Creating a pod to test consume configMaps Mar 8 10:49:42.832: INFO: Waiting up to 5m0s for pod "pod-configmaps-1d7e14fc-b378-44b1-aae2-9ceec437fb52" in namespace "configmap-7975" to be "success or failure" Mar 8 10:49:42.861: INFO: Pod "pod-configmaps-1d7e14fc-b378-44b1-aae2-9ceec437fb52": Phase="Pending", Reason="", readiness=false. Elapsed: 29.182343ms Mar 8 10:49:44.865: INFO: Pod "pod-configmaps-1d7e14fc-b378-44b1-aae2-9ceec437fb52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.033062139s STEP: Saw pod success Mar 8 10:49:44.865: INFO: Pod "pod-configmaps-1d7e14fc-b378-44b1-aae2-9ceec437fb52" satisfied condition "success or failure" Mar 8 10:49:44.868: INFO: Trying to get logs from node kind-control-plane pod pod-configmaps-1d7e14fc-b378-44b1-aae2-9ceec437fb52 container configmap-volume-test: STEP: delete the pod Mar 8 10:49:44.886: INFO: Waiting for pod pod-configmaps-1d7e14fc-b378-44b1-aae2-9ceec437fb52 to disappear Mar 8 10:49:44.890: INFO: Pod pod-configmaps-1d7e14fc-b378-44b1-aae2-9ceec437fb52 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:49:44.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7975" for this suite. 
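"With mappings" above refers to the volume's items list, which picks specific keys and remaps the file path they appear under. A sketch with illustrative names:

kubectl create configmap map-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-mapping-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["cat", "/etc/config/path/to/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/config
  volumes:
  - name: cfg
    configMap:
      name: map-demo
      items:
      - key: data-1
        path: path/to/data-1   # only this key is projected, under the remapped path
EOF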
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":43,"skipped":575,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:49:44.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 8 10:49:44.965: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1f9ac4dc-37d9-483a-bbc3-12ced06a105f" in namespace "downward-api-4305" to be "success or failure" Mar 8 10:49:44.997: INFO: Pod "downwardapi-volume-1f9ac4dc-37d9-483a-bbc3-12ced06a105f": Phase="Pending", Reason="", readiness=false. Elapsed: 31.649733ms Mar 8 10:49:47.001: INFO: Pod "downwardapi-volume-1f9ac4dc-37d9-483a-bbc3-12ced06a105f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.036063383s STEP: Saw pod success Mar 8 10:49:47.001: INFO: Pod "downwardapi-volume-1f9ac4dc-37d9-483a-bbc3-12ced06a105f" satisfied condition "success or failure" Mar 8 10:49:47.004: INFO: Trying to get logs from node kind-control-plane pod downwardapi-volume-1f9ac4dc-37d9-483a-bbc3-12ced06a105f container client-container: STEP: delete the pod Mar 8 10:49:47.024: INFO: Waiting for pod downwardapi-volume-1f9ac4dc-37d9-483a-bbc3-12ced06a105f to disappear Mar 8 10:49:47.028: INFO: Pod downwardapi-volume-1f9ac4dc-37d9-483a-bbc3-12ced06a105f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:49:47.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4305" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":44,"skipped":589,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:49:47.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Mar 8 10:49:47.080: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:50:03.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9094" for this suite. • [SLOW TEST:16.837 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":45,"skipped":609,"failed":0} SSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:50:03.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:50:03.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1950" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":46,"skipped":615,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:50:03.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 8 10:50:04.031: INFO: Waiting up to 5m0s for pod "pod-161624ad-51f1-4117-90c0-3f2fe91e4312" in namespace "emptydir-4783" to be "success or failure" Mar 8 10:50:04.037: INFO: Pod "pod-161624ad-51f1-4117-90c0-3f2fe91e4312": Phase="Pending", Reason="", readiness=false. Elapsed: 6.722797ms Mar 8 10:50:06.043: INFO: Pod "pod-161624ad-51f1-4117-90c0-3f2fe91e4312": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011841999s STEP: Saw pod success Mar 8 10:50:06.043: INFO: Pod "pod-161624ad-51f1-4117-90c0-3f2fe91e4312" satisfied condition "success or failure" Mar 8 10:50:06.047: INFO: Trying to get logs from node kind-control-plane pod pod-161624ad-51f1-4117-90c0-3f2fe91e4312 container test-container: STEP: delete the pod Mar 8 10:50:06.072: INFO: Waiting for pod pod-161624ad-51f1-4117-90c0-3f2fe91e4312 to disappear Mar 8 10:50:06.077: INFO: Pod pod-161624ad-51f1-4117-90c0-3f2fe91e4312 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:50:06.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4783" for this suite. 
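The (root,0666,default) triplet in the test name is (user, file mode, emptyDir medium): a file is created as root with mode 0666 in an emptyDir backed by the default medium, i.e. node-local disk rather than medium: Memory. A rough hand-rolled version:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-mode-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "echo hi > /mnt/f && chmod 0666 /mnt/f && ls -l /mnt/f"]
        volumeMounts:
        - name: scratch
          mountPath: /mnt
      volumes:
      - name: scratch
        emptyDir: {}   # no medium set, i.e. the "default" in the test name
    EOF
    kubectl logs emptydir-mode-demo   # expect a -rw-rw-rw- entry for /mnt/f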
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":628,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:50:06.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8414.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8414.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8414.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8414.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8414.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8414.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 8 10:50:10.270: INFO: DNS probes using dns-8414/dns-test-dc85ec44-45c0-40c3-be44-e56a29e37237 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:50:10.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8414" for this suite. 
•{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":48,"skipped":672,"failed":0} ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:50:10.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-73590c3f-0cd4-41e0-a3f5-23b4b160b065 STEP: Creating a pod to test consume secrets Mar 8 10:50:10.487: INFO: Waiting up to 5m0s for pod "pod-secrets-49d7ead0-afa4-43cd-9e99-4da817a1e24b" in namespace "secrets-2971" to be "success or failure" Mar 8 10:50:10.490: INFO: Pod "pod-secrets-49d7ead0-afa4-43cd-9e99-4da817a1e24b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.161152ms Mar 8 10:50:12.494: INFO: Pod "pod-secrets-49d7ead0-afa4-43cd-9e99-4da817a1e24b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006752679s STEP: Saw pod success Mar 8 10:50:12.494: INFO: Pod "pod-secrets-49d7ead0-afa4-43cd-9e99-4da817a1e24b" satisfied condition "success or failure" Mar 8 10:50:12.496: INFO: Trying to get logs from node kind-control-plane pod pod-secrets-49d7ead0-afa4-43cd-9e99-4da817a1e24b container secret-volume-test: STEP: delete the pod Mar 8 10:50:12.531: INFO: Waiting for pod pod-secrets-49d7ead0-afa4-43cd-9e99-4da817a1e24b to disappear Mar 8 10:50:12.540: INFO: Pod pod-secrets-49d7ead0-afa4-43cd-9e99-4da817a1e24b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:50:12.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2971" for this suite. STEP: Destroying namespace "secret-namespace-9197" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":49,"skipped":672,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:50:12.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:50:41.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-9291" for this suite. STEP: Destroying namespace "nsdeletetest-9115" for this suite. Mar 8 10:50:41.772: INFO: Namespace nsdeletetest-9115 was already deleted STEP: Destroying namespace "nsdeletetest-8758" for this suite. 
• [SLOW TEST:29.222 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":50,"skipped":690,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:50:41.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 10:50:41.852: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-cf36f7d0-d337-40d1-881e-801914add1b2" in namespace "security-context-test-3091" to be "success or failure" Mar 8 10:50:41.873: INFO: Pod "alpine-nnp-false-cf36f7d0-d337-40d1-881e-801914add1b2": Phase="Pending", Reason="", readiness=false. Elapsed: 20.473065ms Mar 8 10:50:43.877: INFO: Pod "alpine-nnp-false-cf36f7d0-d337-40d1-881e-801914add1b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024497725s Mar 8 10:50:45.880: INFO: Pod "alpine-nnp-false-cf36f7d0-d337-40d1-881e-801914add1b2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02776704s Mar 8 10:50:47.884: INFO: Pod "alpine-nnp-false-cf36f7d0-d337-40d1-881e-801914add1b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.03146817s Mar 8 10:50:47.884: INFO: Pod "alpine-nnp-false-cf36f7d0-d337-40d1-881e-801914add1b2" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:50:47.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3091" for this suite. 
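allowPrivilegeEscalation: false sets the no_new_privs bit on the container's initial process, so setuid binaries and file capabilities cannot raise privileges afterwards; the test's alpine pod succeeds only if that held. A minimal sketch:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: no-priv-esc-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test
        image: alpine
        command: ["sh", "-c", "grep NoNewPrivs /proc/self/status"]
        securityContext:
          allowPrivilegeEscalation: false
    EOF
    kubectl logs no-priv-esc-demo   # expect: NoNewPrivs: 1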
• [SLOW TEST:6.124 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when creating containers with AllowPrivilegeEscalation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:289 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":51,"skipped":700,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:50:47.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1841 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 8 10:50:47.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-2516' Mar 8 10:50:48.104: INFO: stderr: "" Mar 8 10:50:48.104: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1846 Mar 8 10:50:48.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-2516' Mar 8 10:50:50.487: INFO: stderr: "" Mar 8 10:50:50.487: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:50:50.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2516" for this suite. 
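One caveat when replaying the command above on a newer client: the --generator flag was deprecated and later removed from kubectl, and kubectl run now only ever creates a pod, so the modern equivalent is simply:

    kubectl run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --restart=Never
    kubectl get pod e2e-test-httpd-pod   # verify the pod object was created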
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":52,"skipped":704,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:50:50.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-6971 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 8 10:50:50.556: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 8 10:51:10.653: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.0.116:8080/dial?request=hostname&protocol=udp&host=10.244.0.115&port=8081&tries=1'] Namespace:pod-network-test-6971 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 10:51:10.653: INFO: >>> kubeConfig: /root/.kube/config I0308 10:51:10.693952 6 log.go:172] (0xc002ad9a20) (0xc0010585a0) Create stream I0308 10:51:10.693987 6 log.go:172] (0xc002ad9a20) (0xc0010585a0) Stream added, broadcasting: 1 I0308 10:51:10.696315 6 log.go:172] (0xc002ad9a20) Reply frame received for 1 I0308 10:51:10.696369 6 log.go:172] (0xc002ad9a20) (0xc0013d8d20) Create stream I0308 10:51:10.696394 6 log.go:172] (0xc002ad9a20) (0xc0013d8d20) Stream added, broadcasting: 3 I0308 10:51:10.697365 6 log.go:172] (0xc002ad9a20) Reply frame received for 3 I0308 10:51:10.697397 6 log.go:172] (0xc002ad9a20) (0xc0010586e0) Create stream I0308 10:51:10.697411 6 log.go:172] (0xc002ad9a20) (0xc0010586e0) Stream added, broadcasting: 5 I0308 10:51:10.698396 6 log.go:172] (0xc002ad9a20) Reply frame received for 5 I0308 10:51:10.783839 6 log.go:172] (0xc002ad9a20) Data frame received for 3 I0308 10:51:10.783867 6 log.go:172] (0xc0013d8d20) (3) Data frame handling I0308 10:51:10.783905 6 log.go:172] (0xc0013d8d20) (3) Data frame sent I0308 10:51:10.784426 6 log.go:172] (0xc002ad9a20) Data frame received for 5 I0308 10:51:10.784454 6 log.go:172] (0xc0010586e0) (5) Data frame handling I0308 10:51:10.784486 6 log.go:172] (0xc002ad9a20) Data frame received for 3 I0308 10:51:10.784505 6 log.go:172] (0xc0013d8d20) (3) Data frame handling I0308 10:51:10.786169 6 log.go:172] (0xc002ad9a20) Data frame received for 1 I0308 10:51:10.786190 6 log.go:172] (0xc0010585a0) (1) Data frame handling I0308 10:51:10.786203 6 log.go:172] (0xc0010585a0) (1) Data frame sent I0308 10:51:10.786223 6 log.go:172] (0xc002ad9a20) (0xc0010585a0) Stream removed, broadcasting: 1 I0308 10:51:10.786241 6 log.go:172] (0xc002ad9a20) Go away received I0308 10:51:10.786692 6 log.go:172] (0xc002ad9a20) (0xc0010585a0) Stream removed, broadcasting: 1 I0308 10:51:10.786712 6 log.go:172] (0xc002ad9a20) (0xc0013d8d20) Stream 
removed, broadcasting: 3 I0308 10:51:10.786724 6 log.go:172] (0xc002ad9a20) (0xc0010586e0) Stream removed, broadcasting: 5 Mar 8 10:51:10.786: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:51:10.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6971" for this suite. • [SLOW TEST:20.272 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":53,"skipped":736,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:51:10.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 8 10:51:10.852: INFO: Waiting up to 5m0s for pod "downward-api-b19b6897-c399-44fe-b7f6-b6e81c6d564b" in namespace "downward-api-5922" to be "success or failure" Mar 8 10:51:10.857: INFO: Pod "downward-api-b19b6897-c399-44fe-b7f6-b6e81c6d564b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.351077ms Mar 8 10:51:12.860: INFO: Pod "downward-api-b19b6897-c399-44fe-b7f6-b6e81c6d564b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007941423s STEP: Saw pod success Mar 8 10:51:12.860: INFO: Pod "downward-api-b19b6897-c399-44fe-b7f6-b6e81c6d564b" satisfied condition "success or failure" Mar 8 10:51:12.863: INFO: Trying to get logs from node kind-control-plane pod downward-api-b19b6897-c399-44fe-b7f6-b6e81c6d564b container dapi-container: STEP: delete the pod Mar 8 10:51:12.894: INFO: Waiting for pod downward-api-b19b6897-c399-44fe-b7f6-b6e81c6d564b to disappear Mar 8 10:51:12.899: INFO: Pod downward-api-b19b6897-c399-44fe-b7f6-b6e81c6d564b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:51:12.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5922" for this suite. 
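The subtlety in this Downward API test is that the container declares no resource limits at all; in that case limits.cpu and limits.memory fall back to the node's allocatable values, which is the "default limits ... from node allocatable" in the name. A sketch using env vars (resourceFieldRef defaults to the enclosing container):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-defaults-demo
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        command: ["sh", "-c", "echo CPU_LIMIT=$CPU_LIMIT MEMORY_LIMIT=$MEMORY_LIMIT"]
        env:
        - name: CPU_LIMIT
          valueFrom:
            resourceFieldRef:
              resource: limits.cpu
        - name: MEMORY_LIMIT
          valueFrom:
            resourceFieldRef:
              resource: limits.memory
    EOF
    kubectl logs downward-defaults-demo   # no limits declared, so both reflect node allocatable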
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":54,"skipped":750,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:51:12.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Mar 8 10:51:12.960: INFO: namespace kubectl-9754 Mar 8 10:51:12.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9754' Mar 8 10:51:13.343: INFO: stderr: "" Mar 8 10:51:13.343: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 8 10:51:14.347: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 10:51:14.347: INFO: Found 0 / 1 Mar 8 10:51:15.347: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 10:51:15.347: INFO: Found 1 / 1 Mar 8 10:51:15.347: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 8 10:51:15.350: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 10:51:15.350: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 8 10:51:15.350: INFO: wait on agnhost-master startup in kubectl-9754 Mar 8 10:51:15.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-452gd agnhost-master --namespace=kubectl-9754' Mar 8 10:51:15.512: INFO: stderr: "" Mar 8 10:51:15.512: INFO: stdout: "Paused\n" STEP: exposing RC Mar 8 10:51:15.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-9754' Mar 8 10:51:15.664: INFO: stderr: "" Mar 8 10:51:15.664: INFO: stdout: "service/rm2 exposed\n" Mar 8 10:51:15.668: INFO: Service rm2 in namespace kubectl-9754 found. STEP: exposing service Mar 8 10:51:17.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-9754' Mar 8 10:51:17.859: INFO: stderr: "" Mar 8 10:51:17.859: INFO: stdout: "service/rm3 exposed\n" Mar 8 10:51:17.866: INFO: Service rm3 in namespace kubectl-9754 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:51:19.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9754" for this suite. 
• [SLOW TEST:6.974 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":55,"skipped":767,"failed":0} S ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:51:19.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 8 10:51:21.972: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:51:22.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5202" for this suite. 
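With terminationMessagePolicy: FallbackToLogsOnError, the kubelet uses the tail of the container log as the termination message whenever the container fails without writing to its terminationMessagePath, which is exactly the "Expected: &{DONE}" match above. A hand-rolled version:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: termination-msg-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test
        image: busybox
        command: ["sh", "-c", "echo DONE; exit 1"]   # fails, writes nothing to /dev/termination-log
        terminationMessagePolicy: FallbackToLogsOnError
    EOF
    kubectl get pod termination-msg-demo \
      -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'   # DONE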
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":56,"skipped":768,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:51:22.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-1153 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Mar 8 10:51:22.090: INFO: Found 0 stateful pods, waiting for 3 Mar 8 10:51:32.094: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 8 10:51:32.095: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 8 10:51:32.095: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Mar 8 10:51:32.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1153 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 8 10:51:32.376: INFO: stderr: "I0308 10:51:32.278309 1262 log.go:172] (0xc000ae5970) (0xc000bea320) Create stream\nI0308 10:51:32.278374 1262 log.go:172] (0xc000ae5970) (0xc000bea320) Stream added, broadcasting: 1\nI0308 10:51:32.281042 1262 log.go:172] (0xc000ae5970) Reply frame received for 1\nI0308 10:51:32.281082 1262 log.go:172] (0xc000ae5970) (0xc000adc500) Create stream\nI0308 10:51:32.281101 1262 log.go:172] (0xc000ae5970) (0xc000adc500) Stream added, broadcasting: 3\nI0308 10:51:32.281986 1262 log.go:172] (0xc000ae5970) Reply frame received for 3\nI0308 10:51:32.282021 1262 log.go:172] (0xc000ae5970) (0xc000adc5a0) Create stream\nI0308 10:51:32.282038 1262 log.go:172] (0xc000ae5970) (0xc000adc5a0) Stream added, broadcasting: 5\nI0308 10:51:32.282915 1262 log.go:172] (0xc000ae5970) Reply frame received for 5\nI0308 10:51:32.345179 1262 log.go:172] (0xc000ae5970) Data frame received for 5\nI0308 10:51:32.345202 1262 log.go:172] (0xc000adc5a0) (5) Data frame handling\nI0308 10:51:32.345219 1262 log.go:172] (0xc000adc5a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0308 10:51:32.372008 1262 log.go:172] (0xc000ae5970) Data frame received for 3\nI0308 10:51:32.372035 1262 log.go:172] (0xc000adc500) (3) Data frame handling\nI0308 10:51:32.372053 1262 log.go:172] (0xc000adc500) (3) Data 
frame sent\nI0308 10:51:32.372068 1262 log.go:172] (0xc000ae5970) Data frame received for 3\nI0308 10:51:32.372077 1262 log.go:172] (0xc000adc500) (3) Data frame handling\nI0308 10:51:32.372106 1262 log.go:172] (0xc000ae5970) Data frame received for 5\nI0308 10:51:32.372132 1262 log.go:172] (0xc000adc5a0) (5) Data frame handling\nI0308 10:51:32.373781 1262 log.go:172] (0xc000ae5970) Data frame received for 1\nI0308 10:51:32.373805 1262 log.go:172] (0xc000bea320) (1) Data frame handling\nI0308 10:51:32.373816 1262 log.go:172] (0xc000bea320) (1) Data frame sent\nI0308 10:51:32.373923 1262 log.go:172] (0xc000ae5970) (0xc000bea320) Stream removed, broadcasting: 1\nI0308 10:51:32.374015 1262 log.go:172] (0xc000ae5970) Go away received\nI0308 10:51:32.374207 1262 log.go:172] (0xc000ae5970) (0xc000bea320) Stream removed, broadcasting: 1\nI0308 10:51:32.374221 1262 log.go:172] (0xc000ae5970) (0xc000adc500) Stream removed, broadcasting: 3\nI0308 10:51:32.374226 1262 log.go:172] (0xc000ae5970) (0xc000adc5a0) Stream removed, broadcasting: 5\n" Mar 8 10:51:32.376: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 8 10:51:32.376: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Mar 8 10:51:42.406: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Mar 8 10:51:52.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1153 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 8 10:51:52.686: INFO: stderr: "I0308 10:51:52.612012 1282 log.go:172] (0xc0000f4580) (0xc0009e4000) Create stream\nI0308 10:51:52.612087 1282 log.go:172] (0xc0000f4580) (0xc0009e4000) Stream added, broadcasting: 1\nI0308 10:51:52.617893 1282 log.go:172] (0xc0000f4580) Reply frame received for 1\nI0308 10:51:52.617938 1282 log.go:172] (0xc0000f4580) (0xc0006d7a40) Create stream\nI0308 10:51:52.617949 1282 log.go:172] (0xc0000f4580) (0xc0006d7a40) Stream added, broadcasting: 3\nI0308 10:51:52.619058 1282 log.go:172] (0xc0000f4580) Reply frame received for 3\nI0308 10:51:52.619102 1282 log.go:172] (0xc0000f4580) (0xc0009e40a0) Create stream\nI0308 10:51:52.619123 1282 log.go:172] (0xc0000f4580) (0xc0009e40a0) Stream added, broadcasting: 5\nI0308 10:51:52.620033 1282 log.go:172] (0xc0000f4580) Reply frame received for 5\nI0308 10:51:52.681890 1282 log.go:172] (0xc0000f4580) Data frame received for 3\nI0308 10:51:52.681914 1282 log.go:172] (0xc0006d7a40) (3) Data frame handling\nI0308 10:51:52.681926 1282 log.go:172] (0xc0006d7a40) (3) Data frame sent\nI0308 10:51:52.681935 1282 log.go:172] (0xc0000f4580) Data frame received for 3\nI0308 10:51:52.681944 1282 log.go:172] (0xc0006d7a40) (3) Data frame handling\nI0308 10:51:52.681956 1282 log.go:172] (0xc0000f4580) Data frame received for 5\nI0308 10:51:52.681965 1282 log.go:172] (0xc0009e40a0) (5) Data frame handling\nI0308 10:51:52.681974 1282 log.go:172] (0xc0009e40a0) (5) Data frame sent\nI0308 10:51:52.681983 1282 log.go:172] (0xc0000f4580) Data frame received for 5\nI0308 10:51:52.681991 1282 log.go:172] (0xc0009e40a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0308 10:51:52.683711 1282 log.go:172] (0xc0000f4580) Data frame received for 1\nI0308 10:51:52.683729 
1282 log.go:172] (0xc0009e4000) (1) Data frame handling\nI0308 10:51:52.683745 1282 log.go:172] (0xc0009e4000) (1) Data frame sent\nI0308 10:51:52.683756 1282 log.go:172] (0xc0000f4580) (0xc0009e4000) Stream removed, broadcasting: 1\nI0308 10:51:52.683768 1282 log.go:172] (0xc0000f4580) Go away received\nI0308 10:51:52.684134 1282 log.go:172] (0xc0000f4580) (0xc0009e4000) Stream removed, broadcasting: 1\nI0308 10:51:52.684157 1282 log.go:172] (0xc0000f4580) (0xc0006d7a40) Stream removed, broadcasting: 3\nI0308 10:51:52.684167 1282 log.go:172] (0xc0000f4580) (0xc0009e40a0) Stream removed, broadcasting: 5\n" Mar 8 10:51:52.686: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 8 10:51:52.686: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 8 10:52:02.705: INFO: Waiting for StatefulSet statefulset-1153/ss2 to complete update Mar 8 10:52:02.705: INFO: Waiting for Pod statefulset-1153/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 8 10:52:02.705: INFO: Waiting for Pod statefulset-1153/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 8 10:52:12.712: INFO: Waiting for StatefulSet statefulset-1153/ss2 to complete update STEP: Rolling back to a previous revision Mar 8 10:52:22.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1153 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 8 10:52:22.952: INFO: stderr: "I0308 10:52:22.860139 1304 log.go:172] (0xc000a67340) (0xc0008dc6e0) Create stream\nI0308 10:52:22.860193 1304 log.go:172] (0xc000a67340) (0xc0008dc6e0) Stream added, broadcasting: 1\nI0308 10:52:22.863928 1304 log.go:172] (0xc000a67340) Reply frame received for 1\nI0308 10:52:22.863993 1304 log.go:172] (0xc000a67340) (0xc000688780) Create stream\nI0308 10:52:22.864015 1304 log.go:172] (0xc000a67340) (0xc000688780) Stream added, broadcasting: 3\nI0308 10:52:22.864972 1304 log.go:172] (0xc000a67340) Reply frame received for 3\nI0308 10:52:22.864998 1304 log.go:172] (0xc000a67340) (0xc00052d540) Create stream\nI0308 10:52:22.865006 1304 log.go:172] (0xc000a67340) (0xc00052d540) Stream added, broadcasting: 5\nI0308 10:52:22.865697 1304 log.go:172] (0xc000a67340) Reply frame received for 5\nI0308 10:52:22.923687 1304 log.go:172] (0xc000a67340) Data frame received for 5\nI0308 10:52:22.923710 1304 log.go:172] (0xc00052d540) (5) Data frame handling\nI0308 10:52:22.923724 1304 log.go:172] (0xc00052d540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0308 10:52:22.947838 1304 log.go:172] (0xc000a67340) Data frame received for 3\nI0308 10:52:22.947852 1304 log.go:172] (0xc000688780) (3) Data frame handling\nI0308 10:52:22.947859 1304 log.go:172] (0xc000688780) (3) Data frame sent\nI0308 10:52:22.947864 1304 log.go:172] (0xc000a67340) Data frame received for 3\nI0308 10:52:22.947869 1304 log.go:172] (0xc000688780) (3) Data frame handling\nI0308 10:52:22.947901 1304 log.go:172] (0xc000a67340) Data frame received for 5\nI0308 10:52:22.947910 1304 log.go:172] (0xc00052d540) (5) Data frame handling\nI0308 10:52:22.949677 1304 log.go:172] (0xc000a67340) Data frame received for 1\nI0308 10:52:22.949700 1304 log.go:172] (0xc0008dc6e0) (1) Data frame handling\nI0308 10:52:22.949726 1304 log.go:172] (0xc0008dc6e0) (1) Data frame sent\nI0308 10:52:22.949742 1304 log.go:172] (0xc000a67340) (0xc0008dc6e0) Stream removed, 
broadcasting: 1\nI0308 10:52:22.949775 1304 log.go:172] (0xc000a67340) Go away received\nI0308 10:52:22.950231 1304 log.go:172] (0xc000a67340) (0xc0008dc6e0) Stream removed, broadcasting: 1\nI0308 10:52:22.950250 1304 log.go:172] (0xc000a67340) (0xc000688780) Stream removed, broadcasting: 3\nI0308 10:52:22.950262 1304 log.go:172] (0xc000a67340) (0xc00052d540) Stream removed, broadcasting: 5\n" Mar 8 10:52:22.952: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 8 10:52:22.952: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 8 10:52:32.979: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Mar 8 10:52:43.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1153 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 8 10:52:43.240: INFO: stderr: "I0308 10:52:43.182045 1324 log.go:172] (0xc000115290) (0xc000661ae0) Create stream\nI0308 10:52:43.182107 1324 log.go:172] (0xc000115290) (0xc000661ae0) Stream added, broadcasting: 1\nI0308 10:52:43.186105 1324 log.go:172] (0xc000115290) Reply frame received for 1\nI0308 10:52:43.186206 1324 log.go:172] (0xc000115290) (0xc000661cc0) Create stream\nI0308 10:52:43.186228 1324 log.go:172] (0xc000115290) (0xc000661cc0) Stream added, broadcasting: 3\nI0308 10:52:43.188784 1324 log.go:172] (0xc000115290) Reply frame received for 3\nI0308 10:52:43.188824 1324 log.go:172] (0xc000115290) (0xc0009d2000) Create stream\nI0308 10:52:43.188835 1324 log.go:172] (0xc000115290) (0xc0009d2000) Stream added, broadcasting: 5\nI0308 10:52:43.189899 1324 log.go:172] (0xc000115290) Reply frame received for 5\nI0308 10:52:43.235995 1324 log.go:172] (0xc000115290) Data frame received for 3\nI0308 10:52:43.236031 1324 log.go:172] (0xc000115290) Data frame received for 5\nI0308 10:52:43.236054 1324 log.go:172] (0xc0009d2000) (5) Data frame handling\nI0308 10:52:43.236065 1324 log.go:172] (0xc0009d2000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0308 10:52:43.236079 1324 log.go:172] (0xc000661cc0) (3) Data frame handling\nI0308 10:52:43.236105 1324 log.go:172] (0xc000661cc0) (3) Data frame sent\nI0308 10:52:43.236119 1324 log.go:172] (0xc000115290) Data frame received for 3\nI0308 10:52:43.236131 1324 log.go:172] (0xc000661cc0) (3) Data frame handling\nI0308 10:52:43.236154 1324 log.go:172] (0xc000115290) Data frame received for 5\nI0308 10:52:43.236168 1324 log.go:172] (0xc0009d2000) (5) Data frame handling\nI0308 10:52:43.237305 1324 log.go:172] (0xc000115290) Data frame received for 1\nI0308 10:52:43.237328 1324 log.go:172] (0xc000661ae0) (1) Data frame handling\nI0308 10:52:43.237344 1324 log.go:172] (0xc000661ae0) (1) Data frame sent\nI0308 10:52:43.237538 1324 log.go:172] (0xc000115290) (0xc000661ae0) Stream removed, broadcasting: 1\nI0308 10:52:43.237560 1324 log.go:172] (0xc000115290) Go away received\nI0308 10:52:43.237954 1324 log.go:172] (0xc000115290) (0xc000661ae0) Stream removed, broadcasting: 1\nI0308 10:52:43.237974 1324 log.go:172] (0xc000115290) (0xc000661cc0) Stream removed, broadcasting: 3\nI0308 10:52:43.237983 1324 log.go:172] (0xc000115290) (0xc0009d2000) Stream removed, broadcasting: 5\n" Mar 8 10:52:43.240: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 8 10:52:43.240: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on 
ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 8 10:52:53.259: INFO: Waiting for StatefulSet statefulset-1153/ss2 to complete update Mar 8 10:52:53.259: INFO: Waiting for Pod statefulset-1153/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 8 10:53:03.266: INFO: Deleting all statefulset in ns statefulset-1153 Mar 8 10:53:03.269: INFO: Scaling statefulset ss2 to 0 Mar 8 10:53:33.289: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 10:53:33.292: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:53:33.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1153" for this suite. • [SLOW TEST:131.292 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":57,"skipped":855,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:53:33.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:53:33.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-547" for this suite. 
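Table output is negotiated via content type: a client asks for application/json;as=Table;v=v1;g=meta.k8s.io in the Accept header, and a backend that does not implement the Table conversion must answer 406 Not Acceptable, which is the assertion here. Reproducible through kubectl proxy:

    kubectl proxy --port=8001 &
    curl -s -o /dev/null -w '%{http_code}\n' \
      -H 'Accept: application/json;as=Table;v=v1;g=meta.k8s.io' \
      http://127.0.0.1:8001/api/v1/namespaces/default/pods
    # built-in resources answer 200 with a Table; a backend lacking the
    # conversion (as in this test) returns 406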
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":58,"skipped":867,"failed":0} SSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:53:33.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-bc967881-f7c1-4f19-b02b-3226188a0ed8 STEP: Creating secret with name s-test-opt-upd-d6ca2cd1-2d7c-4ca2-be36-2ffb7a9a2fd5 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-bc967881-f7c1-4f19-b02b-3226188a0ed8 STEP: Updating secret s-test-opt-upd-d6ca2cd1-2d7c-4ca2-be36-2ffb7a9a2fd5 STEP: Creating secret with name s-test-opt-create-0b89f879-33c6-4407-8637-3aa9b5662c0a STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:53:41.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5034" for this suite. • [SLOW TEST:8.224 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":59,"skipped":874,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:53:41.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 8 10:53:41.662: INFO: Waiting up to 5m0s for pod "downward-api-fa9a9c8d-a33c-4d33-a0c9-eac77d42ecb6" in namespace "downward-api-3304" to be "success or failure" Mar 8 10:53:41.667: INFO: Pod "downward-api-fa9a9c8d-a33c-4d33-a0c9-eac77d42ecb6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.617962ms Mar 8 10:53:43.671: INFO: Pod "downward-api-fa9a9c8d-a33c-4d33-a0c9-eac77d42ecb6": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.008766712s Mar 8 10:53:45.675: INFO: Pod "downward-api-fa9a9c8d-a33c-4d33-a0c9-eac77d42ecb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012701962s STEP: Saw pod success Mar 8 10:53:45.675: INFO: Pod "downward-api-fa9a9c8d-a33c-4d33-a0c9-eac77d42ecb6" satisfied condition "success or failure" Mar 8 10:53:45.678: INFO: Trying to get logs from node kind-control-plane pod downward-api-fa9a9c8d-a33c-4d33-a0c9-eac77d42ecb6 container dapi-container: STEP: delete the pod Mar 8 10:53:45.699: INFO: Waiting for pod downward-api-fa9a9c8d-a33c-4d33-a0c9-eac77d42ecb6 to disappear Mar 8 10:53:45.703: INFO: Pod downward-api-fa9a9c8d-a33c-4d33-a0c9-eac77d42ecb6 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:53:45.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3304" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":60,"skipped":888,"failed":0} SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:53:45.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-9584 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-9584 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9584 Mar 8 10:53:45.807: INFO: Found 0 stateful pods, waiting for 1 Mar 8 10:53:55.812: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Mar 8 10:53:55.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9584 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 8 10:53:56.090: INFO: stderr: "I0308 10:53:55.991256 1344 log.go:172] (0xc0009eb290) (0xc0009448c0) Create stream\nI0308 10:53:55.991305 1344 log.go:172] (0xc0009eb290) (0xc0009448c0) Stream added, broadcasting: 1\nI0308 10:53:55.994961 1344 log.go:172] (0xc0009eb290) Reply frame received for 1\nI0308 10:53:55.995004 1344 log.go:172] (0xc0009eb290) (0xc000678640) Create stream\nI0308 10:53:55.995014 1344 log.go:172] (0xc0009eb290) (0xc000678640) Stream added, broadcasting: 3\nI0308 
10:53:55.995899 1344 log.go:172] (0xc0009eb290) Reply frame received for 3\nI0308 10:53:55.995931 1344 log.go:172] (0xc0009eb290) (0xc000393400) Create stream\nI0308 10:53:55.995940 1344 log.go:172] (0xc0009eb290) (0xc000393400) Stream added, broadcasting: 5\nI0308 10:53:55.996754 1344 log.go:172] (0xc0009eb290) Reply frame received for 5\nI0308 10:53:56.061091 1344 log.go:172] (0xc0009eb290) Data frame received for 5\nI0308 10:53:56.061116 1344 log.go:172] (0xc000393400) (5) Data frame handling\nI0308 10:53:56.061137 1344 log.go:172] (0xc000393400) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0308 10:53:56.085238 1344 log.go:172] (0xc0009eb290) Data frame received for 3\nI0308 10:53:56.085261 1344 log.go:172] (0xc000678640) (3) Data frame handling\nI0308 10:53:56.085277 1344 log.go:172] (0xc000678640) (3) Data frame sent\nI0308 10:53:56.085291 1344 log.go:172] (0xc0009eb290) Data frame received for 3\nI0308 10:53:56.085308 1344 log.go:172] (0xc000678640) (3) Data frame handling\nI0308 10:53:56.085650 1344 log.go:172] (0xc0009eb290) Data frame received for 5\nI0308 10:53:56.085676 1344 log.go:172] (0xc000393400) (5) Data frame handling\nI0308 10:53:56.087228 1344 log.go:172] (0xc0009eb290) Data frame received for 1\nI0308 10:53:56.087254 1344 log.go:172] (0xc0009448c0) (1) Data frame handling\nI0308 10:53:56.087276 1344 log.go:172] (0xc0009448c0) (1) Data frame sent\nI0308 10:53:56.087302 1344 log.go:172] (0xc0009eb290) (0xc0009448c0) Stream removed, broadcasting: 1\nI0308 10:53:56.087324 1344 log.go:172] (0xc0009eb290) Go away received\nI0308 10:53:56.087652 1344 log.go:172] (0xc0009eb290) (0xc0009448c0) Stream removed, broadcasting: 1\nI0308 10:53:56.087676 1344 log.go:172] (0xc0009eb290) (0xc000678640) Stream removed, broadcasting: 3\nI0308 10:53:56.087686 1344 log.go:172] (0xc0009eb290) (0xc000393400) Stream removed, broadcasting: 5\n" Mar 8 10:53:56.090: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 8 10:53:56.090: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 8 10:53:56.093: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 8 10:54:06.098: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 8 10:54:06.098: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 10:54:06.113: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999305s Mar 8 10:54:07.117: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.993487893s Mar 8 10:54:08.121: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.989107993s Mar 8 10:54:09.126: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.984880299s Mar 8 10:54:10.129: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.980553687s Mar 8 10:54:11.134: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.976823952s Mar 8 10:54:12.138: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.97266449s Mar 8 10:54:13.141: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.968414916s Mar 8 10:54:14.146: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.964863745s Mar 8 10:54:15.150: INFO: Verifying statefulset ss doesn't scale past 1 for another 960.598166ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9584 
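------------------------------
The halt verified above comes from readiness: right after the exec'd mv the pod flips to Ready=false, and the StatefulSet controller will not scale past an unready pod, which is what the repeated "doesn't scale past 1" checks confirm. A minimal Go sketch of the same exec round-trip, assuming kubectl on PATH and the kubeconfig used in this run; the helper names are illustrative, not the framework's own:

package main

import (
	"fmt"
	"os/exec"
)

// breakReadiness moves the htdocs index out of the web root on one stateful
// pod, as logged above, so the pod stops reporting Ready.
func breakReadiness(ns, pod string) ([]byte, error) {
	return exec.Command("kubectl", "--kubeconfig=/root/.kube/config",
		"exec", "--namespace="+ns, pod, "--",
		"/bin/sh", "-c", "mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true").CombinedOutput()
}

// restoreReadiness reverses the move so the pod becomes Ready again.
func restoreReadiness(ns, pod string) ([]byte, error) {
	return exec.Command("kubectl", "--kubeconfig=/root/.kube/config",
		"exec", "--namespace="+ns, pod, "--",
		"/bin/sh", "-c", "mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true").CombinedOutput()
}

func main() {
	out, err := breakReadiness("statefulset-9584", "ss-0")
	fmt.Printf("%s(err=%v)\n", out, err)
}
------------------------------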
Mar 8 10:54:16.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9584 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 8 10:54:16.407: INFO: stderr: "I0308 10:54:16.334638 1365 log.go:172] (0xc0000f4b00) (0xc0006d9d60) Create stream\nI0308 10:54:16.334708 1365 log.go:172] (0xc0000f4b00) (0xc0006d9d60) Stream added, broadcasting: 1\nI0308 10:54:16.337167 1365 log.go:172] (0xc0000f4b00) Reply frame received for 1\nI0308 10:54:16.337205 1365 log.go:172] (0xc0000f4b00) (0xc0006b0640) Create stream\nI0308 10:54:16.337222 1365 log.go:172] (0xc0000f4b00) (0xc0006b0640) Stream added, broadcasting: 3\nI0308 10:54:16.338183 1365 log.go:172] (0xc0000f4b00) Reply frame received for 3\nI0308 10:54:16.338265 1365 log.go:172] (0xc0000f4b00) (0xc0006d9e00) Create stream\nI0308 10:54:16.338297 1365 log.go:172] (0xc0000f4b00) (0xc0006d9e00) Stream added, broadcasting: 5\nI0308 10:54:16.339334 1365 log.go:172] (0xc0000f4b00) Reply frame received for 5\nI0308 10:54:16.402377 1365 log.go:172] (0xc0000f4b00) Data frame received for 3\nI0308 10:54:16.402411 1365 log.go:172] (0xc0006b0640) (3) Data frame handling\nI0308 10:54:16.402430 1365 log.go:172] (0xc0006b0640) (3) Data frame sent\nI0308 10:54:16.402449 1365 log.go:172] (0xc0000f4b00) Data frame received for 3\nI0308 10:54:16.402462 1365 log.go:172] (0xc0006b0640) (3) Data frame handling\nI0308 10:54:16.402710 1365 log.go:172] (0xc0000f4b00) Data frame received for 5\nI0308 10:54:16.402741 1365 log.go:172] (0xc0006d9e00) (5) Data frame handling\nI0308 10:54:16.402752 1365 log.go:172] (0xc0006d9e00) (5) Data frame sent\nI0308 10:54:16.402761 1365 log.go:172] (0xc0000f4b00) Data frame received for 5\nI0308 10:54:16.402768 1365 log.go:172] (0xc0006d9e00) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0308 10:54:16.404363 1365 log.go:172] (0xc0000f4b00) Data frame received for 1\nI0308 10:54:16.404388 1365 log.go:172] (0xc0006d9d60) (1) Data frame handling\nI0308 10:54:16.404411 1365 log.go:172] (0xc0006d9d60) (1) Data frame sent\nI0308 10:54:16.404434 1365 log.go:172] (0xc0000f4b00) (0xc0006d9d60) Stream removed, broadcasting: 1\nI0308 10:54:16.404558 1365 log.go:172] (0xc0000f4b00) Go away received\nI0308 10:54:16.404845 1365 log.go:172] (0xc0000f4b00) (0xc0006d9d60) Stream removed, broadcasting: 1\nI0308 10:54:16.404867 1365 log.go:172] (0xc0000f4b00) (0xc0006b0640) Stream removed, broadcasting: 3\nI0308 10:54:16.404881 1365 log.go:172] (0xc0000f4b00) (0xc0006d9e00) Stream removed, broadcasting: 5\n" Mar 8 10:54:16.407: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 8 10:54:16.407: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 8 10:54:16.411: INFO: Found 1 stateful pods, waiting for 3 Mar 8 10:54:26.415: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 8 10:54:26.416: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 8 10:54:26.416: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Mar 8 10:54:26.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9584 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html 
/tmp/ || true' Mar 8 10:54:26.684: INFO: stderr: "I0308 10:54:26.596236 1385 log.go:172] (0xc000a06000) (0xc000906000) Create stream\nI0308 10:54:26.596301 1385 log.go:172] (0xc000a06000) (0xc000906000) Stream added, broadcasting: 1\nI0308 10:54:26.599417 1385 log.go:172] (0xc000a06000) Reply frame received for 1\nI0308 10:54:26.599461 1385 log.go:172] (0xc000a06000) (0xc0006c1c20) Create stream\nI0308 10:54:26.599480 1385 log.go:172] (0xc000a06000) (0xc0006c1c20) Stream added, broadcasting: 3\nI0308 10:54:26.600385 1385 log.go:172] (0xc000a06000) Reply frame received for 3\nI0308 10:54:26.600425 1385 log.go:172] (0xc000a06000) (0xc0009060a0) Create stream\nI0308 10:54:26.600439 1385 log.go:172] (0xc000a06000) (0xc0009060a0) Stream added, broadcasting: 5\nI0308 10:54:26.601325 1385 log.go:172] (0xc000a06000) Reply frame received for 5\nI0308 10:54:26.679077 1385 log.go:172] (0xc000a06000) Data frame received for 3\nI0308 10:54:26.679156 1385 log.go:172] (0xc0006c1c20) (3) Data frame handling\nI0308 10:54:26.679189 1385 log.go:172] (0xc0006c1c20) (3) Data frame sent\nI0308 10:54:26.679208 1385 log.go:172] (0xc000a06000) Data frame received for 3\nI0308 10:54:26.679235 1385 log.go:172] (0xc0006c1c20) (3) Data frame handling\nI0308 10:54:26.679441 1385 log.go:172] (0xc000a06000) Data frame received for 5\nI0308 10:54:26.679465 1385 log.go:172] (0xc0009060a0) (5) Data frame handling\nI0308 10:54:26.679478 1385 log.go:172] (0xc0009060a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0308 10:54:26.679514 1385 log.go:172] (0xc000a06000) Data frame received for 5\nI0308 10:54:26.679527 1385 log.go:172] (0xc0009060a0) (5) Data frame handling\nI0308 10:54:26.680863 1385 log.go:172] (0xc000a06000) Data frame received for 1\nI0308 10:54:26.680883 1385 log.go:172] (0xc000906000) (1) Data frame handling\nI0308 10:54:26.680898 1385 log.go:172] (0xc000906000) (1) Data frame sent\nI0308 10:54:26.680907 1385 log.go:172] (0xc000a06000) (0xc000906000) Stream removed, broadcasting: 1\nI0308 10:54:26.680948 1385 log.go:172] (0xc000a06000) Go away received\nI0308 10:54:26.681187 1385 log.go:172] (0xc000a06000) (0xc000906000) Stream removed, broadcasting: 1\nI0308 10:54:26.681201 1385 log.go:172] (0xc000a06000) (0xc0006c1c20) Stream removed, broadcasting: 3\nI0308 10:54:26.681208 1385 log.go:172] (0xc000a06000) (0xc0009060a0) Stream removed, broadcasting: 5\n" Mar 8 10:54:26.684: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 8 10:54:26.684: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 8 10:54:26.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9584 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 8 10:54:26.929: INFO: stderr: "I0308 10:54:26.839187 1405 log.go:172] (0xc0009c4000) (0xc00043b400) Create stream\nI0308 10:54:26.839232 1405 log.go:172] (0xc0009c4000) (0xc00043b400) Stream added, broadcasting: 1\nI0308 10:54:26.842450 1405 log.go:172] (0xc0009c4000) Reply frame received for 1\nI0308 10:54:26.842489 1405 log.go:172] (0xc0009c4000) (0xc000986000) Create stream\nI0308 10:54:26.842502 1405 log.go:172] (0xc0009c4000) (0xc000986000) Stream added, broadcasting: 3\nI0308 10:54:26.843439 1405 log.go:172] (0xc0009c4000) Reply frame received for 3\nI0308 10:54:26.843477 1405 log.go:172] (0xc0009c4000) (0xc0006c39a0) Create stream\nI0308 10:54:26.843486 1405 
log.go:172] (0xc0009c4000) (0xc0006c39a0) Stream added, broadcasting: 5\nI0308 10:54:26.844379 1405 log.go:172] (0xc0009c4000) Reply frame received for 5\nI0308 10:54:26.896626 1405 log.go:172] (0xc0009c4000) Data frame received for 5\nI0308 10:54:26.896645 1405 log.go:172] (0xc0006c39a0) (5) Data frame handling\nI0308 10:54:26.896660 1405 log.go:172] (0xc0006c39a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0308 10:54:26.924275 1405 log.go:172] (0xc0009c4000) Data frame received for 3\nI0308 10:54:26.924300 1405 log.go:172] (0xc000986000) (3) Data frame handling\nI0308 10:54:26.924333 1405 log.go:172] (0xc000986000) (3) Data frame sent\nI0308 10:54:26.924689 1405 log.go:172] (0xc0009c4000) Data frame received for 5\nI0308 10:54:26.924707 1405 log.go:172] (0xc0006c39a0) (5) Data frame handling\nI0308 10:54:26.924902 1405 log.go:172] (0xc0009c4000) Data frame received for 3\nI0308 10:54:26.924972 1405 log.go:172] (0xc000986000) (3) Data frame handling\nI0308 10:54:26.926577 1405 log.go:172] (0xc0009c4000) Data frame received for 1\nI0308 10:54:26.926599 1405 log.go:172] (0xc00043b400) (1) Data frame handling\nI0308 10:54:26.926625 1405 log.go:172] (0xc00043b400) (1) Data frame sent\nI0308 10:54:26.926647 1405 log.go:172] (0xc0009c4000) (0xc00043b400) Stream removed, broadcasting: 1\nI0308 10:54:26.926666 1405 log.go:172] (0xc0009c4000) Go away received\nI0308 10:54:26.927004 1405 log.go:172] (0xc0009c4000) (0xc00043b400) Stream removed, broadcasting: 1\nI0308 10:54:26.927022 1405 log.go:172] (0xc0009c4000) (0xc000986000) Stream removed, broadcasting: 3\nI0308 10:54:26.927031 1405 log.go:172] (0xc0009c4000) (0xc0006c39a0) Stream removed, broadcasting: 5\n" Mar 8 10:54:26.929: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 8 10:54:26.929: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 8 10:54:26.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9584 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 8 10:54:27.176: INFO: stderr: "I0308 10:54:27.079599 1426 log.go:172] (0xc000226840) (0xc0008ec1e0) Create stream\nI0308 10:54:27.079649 1426 log.go:172] (0xc000226840) (0xc0008ec1e0) Stream added, broadcasting: 1\nI0308 10:54:27.081610 1426 log.go:172] (0xc000226840) Reply frame received for 1\nI0308 10:54:27.081644 1426 log.go:172] (0xc000226840) (0xc0008ec320) Create stream\nI0308 10:54:27.081660 1426 log.go:172] (0xc000226840) (0xc0008ec320) Stream added, broadcasting: 3\nI0308 10:54:27.082404 1426 log.go:172] (0xc000226840) Reply frame received for 3\nI0308 10:54:27.082432 1426 log.go:172] (0xc000226840) (0xc0006d7a40) Create stream\nI0308 10:54:27.082453 1426 log.go:172] (0xc000226840) (0xc0006d7a40) Stream added, broadcasting: 5\nI0308 10:54:27.083289 1426 log.go:172] (0xc000226840) Reply frame received for 5\nI0308 10:54:27.149728 1426 log.go:172] (0xc000226840) Data frame received for 5\nI0308 10:54:27.149749 1426 log.go:172] (0xc0006d7a40) (5) Data frame handling\nI0308 10:54:27.149759 1426 log.go:172] (0xc0006d7a40) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0308 10:54:27.170140 1426 log.go:172] (0xc000226840) Data frame received for 3\nI0308 10:54:27.170179 1426 log.go:172] (0xc0008ec320) (3) Data frame handling\nI0308 10:54:27.170212 1426 log.go:172] (0xc0008ec320) (3) Data frame sent\nI0308 10:54:27.170355 
1426 log.go:172] (0xc000226840) Data frame received for 5\nI0308 10:54:27.170426 1426 log.go:172] (0xc0006d7a40) (5) Data frame handling\nI0308 10:54:27.170449 1426 log.go:172] (0xc000226840) Data frame received for 3\nI0308 10:54:27.170481 1426 log.go:172] (0xc0008ec320) (3) Data frame handling\nI0308 10:54:27.172122 1426 log.go:172] (0xc000226840) Data frame received for 1\nI0308 10:54:27.172151 1426 log.go:172] (0xc0008ec1e0) (1) Data frame handling\nI0308 10:54:27.172180 1426 log.go:172] (0xc0008ec1e0) (1) Data frame sent\nI0308 10:54:27.172271 1426 log.go:172] (0xc000226840) (0xc0008ec1e0) Stream removed, broadcasting: 1\nI0308 10:54:27.172303 1426 log.go:172] (0xc000226840) Go away received\nI0308 10:54:27.172608 1426 log.go:172] (0xc000226840) (0xc0008ec1e0) Stream removed, broadcasting: 1\nI0308 10:54:27.172629 1426 log.go:172] (0xc000226840) (0xc0008ec320) Stream removed, broadcasting: 3\nI0308 10:54:27.172644 1426 log.go:172] (0xc000226840) (0xc0006d7a40) Stream removed, broadcasting: 5\n" Mar 8 10:54:27.176: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 8 10:54:27.176: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 8 10:54:27.176: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 10:54:27.213: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Mar 8 10:54:37.220: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 8 10:54:37.220: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 8 10:54:37.220: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 8 10:54:37.240: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999548s Mar 8 10:54:38.244: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.98893997s Mar 8 10:54:39.248: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.984800507s Mar 8 10:54:40.253: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.980366354s Mar 8 10:54:41.257: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.97616198s Mar 8 10:54:42.262: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.971441335s Mar 8 10:54:43.266: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.967184984s Mar 8 10:54:44.272: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.962404324s Mar 8 10:54:45.277: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.957049238s Mar 8 10:54:46.281: INFO: Verifying statefulset ss doesn't scale past 3 for another 952.24814ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-9584 Mar 8 10:54:47.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9584 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 8 10:54:47.505: INFO: stderr: "I0308 10:54:47.438322 1448 log.go:172] (0xc0001062c0) (0xc000aa8000) Create stream\nI0308 10:54:47.438376 1448 log.go:172] (0xc0001062c0) (0xc000aa8000) Stream added, broadcasting: 1\nI0308 10:54:47.440563 1448 log.go:172] (0xc0001062c0) Reply frame received for 1\nI0308 10:54:47.440593 1448 log.go:172] (0xc0001062c0) (0xc0006edae0) Create stream\nI0308 10:54:47.440602 1448 log.go:172] (0xc0001062c0) (0xc0006edae0) Stream added, broadcasting:
3\nI0308 10:54:47.441533 1448 log.go:172] (0xc0001062c0) Reply frame received for 3\nI0308 10:54:47.441568 1448 log.go:172] (0xc0001062c0) (0xc0006edcc0) Create stream\nI0308 10:54:47.441577 1448 log.go:172] (0xc0001062c0) (0xc0006edcc0) Stream added, broadcasting: 5\nI0308 10:54:47.442289 1448 log.go:172] (0xc0001062c0) Reply frame received for 5\nI0308 10:54:47.501252 1448 log.go:172] (0xc0001062c0) Data frame received for 5\nI0308 10:54:47.501284 1448 log.go:172] (0xc0006edcc0) (5) Data frame handling\nI0308 10:54:47.501297 1448 log.go:172] (0xc0006edcc0) (5) Data frame sent\nI0308 10:54:47.501308 1448 log.go:172] (0xc0001062c0) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0308 10:54:47.501327 1448 log.go:172] (0xc0006edcc0) (5) Data frame handling\nI0308 10:54:47.501351 1448 log.go:172] (0xc0001062c0) Data frame received for 3\nI0308 10:54:47.501381 1448 log.go:172] (0xc0006edae0) (3) Data frame handling\nI0308 10:54:47.501396 1448 log.go:172] (0xc0006edae0) (3) Data frame sent\nI0308 10:54:47.501418 1448 log.go:172] (0xc0001062c0) Data frame received for 3\nI0308 10:54:47.501428 1448 log.go:172] (0xc0006edae0) (3) Data frame handling\nI0308 10:54:47.502665 1448 log.go:172] (0xc0001062c0) Data frame received for 1\nI0308 10:54:47.502728 1448 log.go:172] (0xc000aa8000) (1) Data frame handling\nI0308 10:54:47.502742 1448 log.go:172] (0xc000aa8000) (1) Data frame sent\nI0308 10:54:47.502752 1448 log.go:172] (0xc0001062c0) (0xc000aa8000) Stream removed, broadcasting: 1\nI0308 10:54:47.502763 1448 log.go:172] (0xc0001062c0) Go away received\nI0308 10:54:47.503175 1448 log.go:172] (0xc0001062c0) (0xc000aa8000) Stream removed, broadcasting: 1\nI0308 10:54:47.503192 1448 log.go:172] (0xc0001062c0) (0xc0006edae0) Stream removed, broadcasting: 3\nI0308 10:54:47.503200 1448 log.go:172] (0xc0001062c0) (0xc0006edcc0) Stream removed, broadcasting: 5\n" Mar 8 10:54:47.505: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 8 10:54:47.505: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 8 10:54:47.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9584 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 8 10:54:47.709: INFO: stderr: "I0308 10:54:47.639927 1468 log.go:172] (0xc0000f1600) (0xc0006aba40) Create stream\nI0308 10:54:47.639974 1468 log.go:172] (0xc0000f1600) (0xc0006aba40) Stream added, broadcasting: 1\nI0308 10:54:47.642037 1468 log.go:172] (0xc0000f1600) Reply frame received for 1\nI0308 10:54:47.642086 1468 log.go:172] (0xc0000f1600) (0xc00090c000) Create stream\nI0308 10:54:47.642107 1468 log.go:172] (0xc0000f1600) (0xc00090c000) Stream added, broadcasting: 3\nI0308 10:54:47.643094 1468 log.go:172] (0xc0000f1600) Reply frame received for 3\nI0308 10:54:47.643128 1468 log.go:172] (0xc0000f1600) (0xc00047a000) Create stream\nI0308 10:54:47.643140 1468 log.go:172] (0xc0000f1600) (0xc00047a000) Stream added, broadcasting: 5\nI0308 10:54:47.644043 1468 log.go:172] (0xc0000f1600) Reply frame received for 5\nI0308 10:54:47.704238 1468 log.go:172] (0xc0000f1600) Data frame received for 3\nI0308 10:54:47.704269 1468 log.go:172] (0xc00090c000) (3) Data frame handling\nI0308 10:54:47.704280 1468 log.go:172] (0xc00090c000) (3) Data frame sent\nI0308 10:54:47.704298 1468 log.go:172] (0xc0000f1600) Data frame received for 5\nI0308 10:54:47.704305 1468 
log.go:172] (0xc00047a000) (5) Data frame handling\nI0308 10:54:47.704312 1468 log.go:172] (0xc00047a000) (5) Data frame sent\nI0308 10:54:47.704324 1468 log.go:172] (0xc0000f1600) Data frame received for 5\nI0308 10:54:47.704333 1468 log.go:172] (0xc00047a000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0308 10:54:47.704467 1468 log.go:172] (0xc0000f1600) Data frame received for 3\nI0308 10:54:47.704487 1468 log.go:172] (0xc00090c000) (3) Data frame handling\nI0308 10:54:47.705767 1468 log.go:172] (0xc0000f1600) Data frame received for 1\nI0308 10:54:47.705786 1468 log.go:172] (0xc0006aba40) (1) Data frame handling\nI0308 10:54:47.705794 1468 log.go:172] (0xc0006aba40) (1) Data frame sent\nI0308 10:54:47.705804 1468 log.go:172] (0xc0000f1600) (0xc0006aba40) Stream removed, broadcasting: 1\nI0308 10:54:47.705814 1468 log.go:172] (0xc0000f1600) Go away received\nI0308 10:54:47.706164 1468 log.go:172] (0xc0000f1600) (0xc0006aba40) Stream removed, broadcasting: 1\nI0308 10:54:47.706183 1468 log.go:172] (0xc0000f1600) (0xc00090c000) Stream removed, broadcasting: 3\nI0308 10:54:47.706192 1468 log.go:172] (0xc0000f1600) (0xc00047a000) Stream removed, broadcasting: 5\n" Mar 8 10:54:47.709: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 8 10:54:47.709: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 8 10:54:47.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9584 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 8 10:54:47.917: INFO: stderr: "I0308 10:54:47.851538 1491 log.go:172] (0xc000becc60) (0xc000be4460) Create stream\nI0308 10:54:47.851573 1491 log.go:172] (0xc000becc60) (0xc000be4460) Stream added, broadcasting: 1\nI0308 10:54:47.852724 1491 log.go:172] (0xc000becc60) Reply frame received for 1\nI0308 10:54:47.852746 1491 log.go:172] (0xc000becc60) (0xc000be4500) Create stream\nI0308 10:54:47.852752 1491 log.go:172] (0xc000becc60) (0xc000be4500) Stream added, broadcasting: 3\nI0308 10:54:47.853313 1491 log.go:172] (0xc000becc60) Reply frame received for 3\nI0308 10:54:47.853335 1491 log.go:172] (0xc000becc60) (0xc0009fe140) Create stream\nI0308 10:54:47.853342 1491 log.go:172] (0xc000becc60) (0xc0009fe140) Stream added, broadcasting: 5\nI0308 10:54:47.853873 1491 log.go:172] (0xc000becc60) Reply frame received for 5\nI0308 10:54:47.912913 1491 log.go:172] (0xc000becc60) Data frame received for 3\nI0308 10:54:47.912932 1491 log.go:172] (0xc000be4500) (3) Data frame handling\nI0308 10:54:47.912945 1491 log.go:172] (0xc000be4500) (3) Data frame sent\nI0308 10:54:47.912954 1491 log.go:172] (0xc000becc60) Data frame received for 3\nI0308 10:54:47.912962 1491 log.go:172] (0xc000be4500) (3) Data frame handling\nI0308 10:54:47.912972 1491 log.go:172] (0xc000becc60) Data frame received for 5\nI0308 10:54:47.912980 1491 log.go:172] (0xc0009fe140) (5) Data frame handling\nI0308 10:54:47.912988 1491 log.go:172] (0xc0009fe140) (5) Data frame sent\nI0308 10:54:47.913022 1491 log.go:172] (0xc000becc60) Data frame received for 5\nI0308 10:54:47.913029 1491 log.go:172] (0xc0009fe140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0308 10:54:47.914166 1491 log.go:172] (0xc000becc60) Data frame received for 1\nI0308 10:54:47.914188 1491 log.go:172] (0xc000be4460) (1) Data frame handling\nI0308 10:54:47.914210 1491 log.go:172] 
(0xc000be4460) (1) Data frame sent\nI0308 10:54:47.914222 1491 log.go:172] (0xc000becc60) (0xc000be4460) Stream removed, broadcasting: 1\nI0308 10:54:47.914233 1491 log.go:172] (0xc000becc60) Go away received\nI0308 10:54:47.914560 1491 log.go:172] (0xc000becc60) (0xc000be4460) Stream removed, broadcasting: 1\nI0308 10:54:47.914578 1491 log.go:172] (0xc000becc60) (0xc000be4500) Stream removed, broadcasting: 3\nI0308 10:54:47.914586 1491 log.go:172] (0xc000becc60) (0xc0009fe140) Stream removed, broadcasting: 5\n" Mar 8 10:54:47.917: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 8 10:54:47.917: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 8 10:54:47.917: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 8 10:54:57.949: INFO: Deleting all statefulset in ns statefulset-9584 Mar 8 10:54:57.953: INFO: Scaling statefulset ss to 0 Mar 8 10:54:57.961: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 10:54:57.964: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:54:57.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9584" for this suite. • [SLOW TEST:72.275 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":61,"skipped":894,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:54:57.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
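------------------------------
The handler container created in this BeforeEach is the target of the hook: the pod under test declares a postStart httpGet pointed at it, and the "check poststart hook" step below passes once the handler has observed the request. A sketch of the pod shape in client-go types, assuming the handler's IP, port, and echo path (the test discovers the real values at runtime), and using the type name corev1.Handler from client-go releases contemporary with this v1.17 run (newer releases renamed it LifecycleHandler):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// podWithPostStartHTTPHook builds a pod whose container fires an HTTP GET
// at the handler pod as soon as the container starts.
func podWithPostStartHTTPHook(handlerIP string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-poststart-http-hook",
				Image: "k8s.gcr.io/pause:3.1", // any long-running image works here
				Lifecycle: &corev1.Lifecycle{
					PostStart: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							// Path and port are illustrative assumptions.
							Path: "/echo?msg=poststart",
							Host: handlerIP,
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
}

func main() {
	fmt.Println(podWithPostStartHTTPHook("10.244.0.99").Name)
}
------------------------------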
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 8 10:55:06.084: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 8 10:55:06.090: INFO: Pod pod-with-poststart-http-hook still exists Mar 8 10:55:08.091: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 8 10:55:08.095: INFO: Pod pod-with-poststart-http-hook still exists Mar 8 10:55:10.091: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 8 10:55:10.095: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:55:10.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9401" for this suite. • [SLOW TEST:12.116 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":914,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:55:10.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:55:26.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4835" for this suite. • [SLOW TEST:16.124 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":63,"skipped":953,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:55:26.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 8 10:55:26.308: INFO: Waiting up to 5m0s for pod "downwardapi-volume-18d0204e-2410-40a2-bf44-c3245b5f12c0" in namespace "downward-api-8342" to be "success or failure" Mar 8 10:55:26.311: INFO: Pod "downwardapi-volume-18d0204e-2410-40a2-bf44-c3245b5f12c0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.627072ms Mar 8 10:55:28.321: INFO: Pod "downwardapi-volume-18d0204e-2410-40a2-bf44-c3245b5f12c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013639638s STEP: Saw pod success Mar 8 10:55:28.321: INFO: Pod "downwardapi-volume-18d0204e-2410-40a2-bf44-c3245b5f12c0" satisfied condition "success or failure" Mar 8 10:55:28.324: INFO: Trying to get logs from node kind-control-plane pod downwardapi-volume-18d0204e-2410-40a2-bf44-c3245b5f12c0 container client-container: STEP: delete the pod Mar 8 10:55:28.358: INFO: Waiting for pod downwardapi-volume-18d0204e-2410-40a2-bf44-c3245b5f12c0 to disappear Mar 8 10:55:28.365: INFO: Pod downwardapi-volume-18d0204e-2410-40a2-bf44-c3245b5f12c0 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:55:28.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8342" for this suite. 
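------------------------------
This test and the next one both read memory figures through a downward API volume: a resourceFieldRef on limits.memory is materialized as a file that the client-container then prints. When the container declares no memory limit, as asserted here, the kubelet substitutes the node's allocatable memory; the following test sets an explicit limit and expects that value back. A sketch of the volume in client-go types; the file path is illustrative, while "client-container" matches the container name in the logs above:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// downwardMemoryVolume exposes the container's limits.memory as a file.
// With no limit set on "client-container", the kubelet writes node
// allocatable memory into the file instead.
func downwardMemoryVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "memory_limit",
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container",
						Resource:      "limits.memory",
					},
				}},
			},
		},
	}
}

func main() {
	fmt.Println(downwardMemoryVolume().Name)
}
------------------------------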
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":64,"skipped":963,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:55:28.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 8 10:55:28.416: INFO: Waiting up to 5m0s for pod "downwardapi-volume-776f07e2-0522-4603-b2c3-e33db3de92f5" in namespace "downward-api-352" to be "success or failure" Mar 8 10:55:28.419: INFO: Pod "downwardapi-volume-776f07e2-0522-4603-b2c3-e33db3de92f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.885766ms Mar 8 10:55:30.423: INFO: Pod "downwardapi-volume-776f07e2-0522-4603-b2c3-e33db3de92f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006944951s STEP: Saw pod success Mar 8 10:55:30.423: INFO: Pod "downwardapi-volume-776f07e2-0522-4603-b2c3-e33db3de92f5" satisfied condition "success or failure" Mar 8 10:55:30.426: INFO: Trying to get logs from node kind-control-plane pod downwardapi-volume-776f07e2-0522-4603-b2c3-e33db3de92f5 container client-container: STEP: delete the pod Mar 8 10:55:30.463: INFO: Waiting for pod downwardapi-volume-776f07e2-0522-4603-b2c3-e33db3de92f5 to disappear Mar 8 10:55:30.474: INFO: Pod downwardapi-volume-776f07e2-0522-4603-b2c3-e33db3de92f5 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:55:30.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-352" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":65,"skipped":967,"failed":0} SSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:55:30.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6929.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6929.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6929.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6929.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6929.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6929.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6929.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6929.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6929.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6929.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6929.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 1.249.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.249.1_udp@PTR;check="$$(dig +tcp +noall +answer +search 1.249.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.249.1_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6929.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6929.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6929.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6929.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6929.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6929.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6929.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6929.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6929.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6929.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6929.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 1.249.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.249.1_udp@PTR;check="$$(dig +tcp +noall +answer +search 1.249.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.249.1_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 8 10:55:34.618: INFO: Unable to read wheezy_udp@dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:34.622: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:34.625: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:34.628: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:34.659: INFO: Unable to read jessie_udp@dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:34.662: INFO: Unable to read jessie_tcp@dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:34.665: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:34.668: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:34.686: INFO: Lookups using dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb failed for: [wheezy_udp@dns-test-service.dns-6929.svc.cluster.local wheezy_tcp@dns-test-service.dns-6929.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local jessie_udp@dns-test-service.dns-6929.svc.cluster.local jessie_tcp@dns-test-service.dns-6929.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local] Mar 8 10:55:39.691: INFO: Unable to read wheezy_udp@dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:39.695: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) 
Mar 8 10:55:39.698: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:39.729: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:39.756: INFO: Unable to read jessie_udp@dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:39.759: INFO: Unable to read jessie_tcp@dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:39.762: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:39.765: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:39.781: INFO: Lookups using dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb failed for: [wheezy_udp@dns-test-service.dns-6929.svc.cluster.local wheezy_tcp@dns-test-service.dns-6929.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local jessie_udp@dns-test-service.dns-6929.svc.cluster.local jessie_tcp@dns-test-service.dns-6929.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local] Mar 8 10:55:44.691: INFO: Unable to read wheezy_udp@dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:44.694: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:44.697: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:44.700: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:44.721: INFO: Unable to read jessie_udp@dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods 
dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:44.724: INFO: Unable to read jessie_tcp@dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:44.727: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:44.729: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:44.745: INFO: Lookups using dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb failed for: [wheezy_udp@dns-test-service.dns-6929.svc.cluster.local wheezy_tcp@dns-test-service.dns-6929.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local jessie_udp@dns-test-service.dns-6929.svc.cluster.local jessie_tcp@dns-test-service.dns-6929.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local] Mar 8 10:55:49.691: INFO: Unable to read wheezy_udp@dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:49.694: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:49.698: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:49.701: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:49.727: INFO: Unable to read jessie_udp@dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:49.730: INFO: Unable to read jessie_tcp@dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:49.732: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:49.735: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could 
not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:49.751: INFO: Lookups using dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb failed for: [wheezy_udp@dns-test-service.dns-6929.svc.cluster.local wheezy_tcp@dns-test-service.dns-6929.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local jessie_udp@dns-test-service.dns-6929.svc.cluster.local jessie_tcp@dns-test-service.dns-6929.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local] Mar 8 10:55:54.691: INFO: Unable to read wheezy_udp@dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:54.695: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:54.698: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:54.702: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:54.723: INFO: Unable to read jessie_udp@dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:54.726: INFO: Unable to read jessie_tcp@dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:54.729: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:54.731: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:54.747: INFO: Lookups using dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb failed for: [wheezy_udp@dns-test-service.dns-6929.svc.cluster.local wheezy_tcp@dns-test-service.dns-6929.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local jessie_udp@dns-test-service.dns-6929.svc.cluster.local jessie_tcp@dns-test-service.dns-6929.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local] Mar 8 10:55:59.691: INFO: Unable to read wheezy_udp@dns-test-service.dns-6929.svc.cluster.local 
from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:59.695: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:59.698: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:59.701: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:59.725: INFO: Unable to read jessie_udp@dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:59.728: INFO: Unable to read jessie_tcp@dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:59.731: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:59.733: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local from pod dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb: the server could not find the requested resource (get pods dns-test-9e4e6696-c703-4273-8381-2be8034cbccb) Mar 8 10:55:59.748: INFO: Lookups using dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb failed for: [wheezy_udp@dns-test-service.dns-6929.svc.cluster.local wheezy_tcp@dns-test-service.dns-6929.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local jessie_udp@dns-test-service.dns-6929.svc.cluster.local jessie_tcp@dns-test-service.dns-6929.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6929.svc.cluster.local] Mar 8 10:56:04.756: INFO: DNS probes using dns-6929/dns-test-9e4e6696-c703-4273-8381-2be8034cbccb succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:56:04.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6929" for this suite. 
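------------------------------
One detail of the probe script above worth unpacking: the PTR checks query 1.249.96.10.in-addr.arpa. for the service IP 10.96.249.1, since reverse DNS names are the IPv4 octets in reverse order under the in-addr.arpa zone. A tiny Go helper showing the mapping; Go's standard library performs the same construction internally for net.LookupAddr:

package main

import (
	"fmt"
	"strings"
)

// ptrName builds the in-addr.arpa name that dig queries for an IPv4
// address: octets reversed, ".in-addr.arpa." appended.
func ptrName(ipv4 string) string {
	o := strings.Split(ipv4, ".")
	return fmt.Sprintf("%s.%s.%s.%s.in-addr.arpa.", o[3], o[2], o[1], o[0])
}

func main() {
	fmt.Println(ptrName("10.96.249.1")) // 1.249.96.10.in-addr.arpa.
}
------------------------------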
• [SLOW TEST:34.432 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":66,"skipped":974,"failed":0} SSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:56:04.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-2369 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-2369 STEP: creating replication controller externalsvc in namespace services-2369 I0308 10:56:05.057492 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-2369, replica count: 2 I0308 10:56:08.107992 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Mar 8 10:56:08.137: INFO: Creating new exec pod Mar 8 10:56:10.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2369 execpod868nk -- /bin/sh -x -c nslookup clusterip-service' Mar 8 10:56:12.091: INFO: stderr: "I0308 10:56:12.024126 1511 log.go:172] (0xc000324a50) (0xc000639e00) Create stream\nI0308 10:56:12.024169 1511 log.go:172] (0xc000324a50) (0xc000639e00) Stream added, broadcasting: 1\nI0308 10:56:12.026933 1511 log.go:172] (0xc000324a50) Reply frame received for 1\nI0308 10:56:12.026974 1511 log.go:172] (0xc000324a50) (0xc000546640) Create stream\nI0308 10:56:12.026986 1511 log.go:172] (0xc000324a50) (0xc000546640) Stream added, broadcasting: 3\nI0308 10:56:12.028052 1511 log.go:172] (0xc000324a50) Reply frame received for 3\nI0308 10:56:12.028091 1511 log.go:172] (0xc000324a50) (0xc0008f1400) Create stream\nI0308 10:56:12.028104 1511 log.go:172] (0xc000324a50) (0xc0008f1400) Stream added, broadcasting: 5\nI0308 10:56:12.029063 1511 log.go:172] (0xc000324a50) Reply frame received for 5\nI0308 10:56:12.077245 1511 log.go:172] (0xc000324a50) Data frame received for 5\nI0308 10:56:12.077265 1511 log.go:172] (0xc0008f1400) (5) Data frame handling\nI0308 10:56:12.077282 1511 log.go:172] (0xc0008f1400) (5) Data frame sent\n+ nslookup clusterip-service\nI0308 10:56:12.084788 1511 log.go:172] (0xc000324a50) Data frame received for 3\nI0308 10:56:12.084824 1511 log.go:172] (0xc000546640) (3) Data frame 
handling\nI0308 10:56:12.084844 1511 log.go:172] (0xc000546640) (3) Data frame sent\nI0308 10:56:12.085760 1511 log.go:172] (0xc000324a50) Data frame received for 3\nI0308 10:56:12.085777 1511 log.go:172] (0xc000546640) (3) Data frame handling\nI0308 10:56:12.085786 1511 log.go:172] (0xc000546640) (3) Data frame sent\nI0308 10:56:12.086141 1511 log.go:172] (0xc000324a50) Data frame received for 5\nI0308 10:56:12.086171 1511 log.go:172] (0xc0008f1400) (5) Data frame handling\nI0308 10:56:12.086437 1511 log.go:172] (0xc000324a50) Data frame received for 3\nI0308 10:56:12.086458 1511 log.go:172] (0xc000546640) (3) Data frame handling\nI0308 10:56:12.087880 1511 log.go:172] (0xc000324a50) Data frame received for 1\nI0308 10:56:12.087902 1511 log.go:172] (0xc000639e00) (1) Data frame handling\nI0308 10:56:12.087913 1511 log.go:172] (0xc000639e00) (1) Data frame sent\nI0308 10:56:12.087925 1511 log.go:172] (0xc000324a50) (0xc000639e00) Stream removed, broadcasting: 1\nI0308 10:56:12.087944 1511 log.go:172] (0xc000324a50) Go away received\nI0308 10:56:12.088290 1511 log.go:172] (0xc000324a50) (0xc000639e00) Stream removed, broadcasting: 1\nI0308 10:56:12.088310 1511 log.go:172] (0xc000324a50) (0xc000546640) Stream removed, broadcasting: 3\nI0308 10:56:12.088319 1511 log.go:172] (0xc000324a50) (0xc0008f1400) Stream removed, broadcasting: 5\n" Mar 8 10:56:12.091: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-2369.svc.cluster.local\tcanonical name = externalsvc.services-2369.svc.cluster.local.\nName:\texternalsvc.services-2369.svc.cluster.local\nAddress: 10.96.79.109\n\n" STEP: deleting ReplicationController externalsvc in namespace services-2369, will wait for the garbage collector to delete the pods Mar 8 10:56:12.156: INFO: Deleting ReplicationController externalsvc took: 5.487802ms Mar 8 10:56:12.456: INFO: Terminating ReplicationController externalsvc pods took: 300.272987ms Mar 8 10:56:16.390: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:56:16.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2369" for this suite. 
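The type flip at the heart of this test is visible in the nslookup output above: the old clusterip-service name now resolves as a CNAME to externalsvc. The same mutation can be made with client-go roughly as below; this is a sketch under assumptions (a context-style client-go, v0.18 or later rather than the v1.17 vintage of this suite, and the kubeconfig path from the log):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	svc, err := cs.CoreV1().Services("services-2369").Get(context.TODO(), "clusterip-service", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// ExternalName services carry no ClusterIP; DNS serves a CNAME instead.
	svc.Spec.Type = corev1.ServiceTypeExternalName
	svc.Spec.ExternalName = "externalsvc.services-2369.svc.cluster.local"
	svc.Spec.ClusterIP = ""
	svc.Spec.Ports = nil

	if _, err := cs.CoreV1().Services("services-2369").Update(context.TODO(), svc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("clusterip-service now resolves as a CNAME to externalsvc")
}
```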
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:11.532 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":67,"skipped":978,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:56:16.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-bfbafadc-54b9-4314-b02a-3abc384d5102 STEP: Creating a pod to test consume secrets Mar 8 10:56:16.530: INFO: Waiting up to 5m0s for pod "pod-secrets-d6cfba3f-7e0e-450e-bd14-96c8fd48ccce" in namespace "secrets-9680" to be "success or failure" Mar 8 10:56:16.551: INFO: Pod "pod-secrets-d6cfba3f-7e0e-450e-bd14-96c8fd48ccce": Phase="Pending", Reason="", readiness=false. Elapsed: 20.424706ms Mar 8 10:56:18.555: INFO: Pod "pod-secrets-d6cfba3f-7e0e-450e-bd14-96c8fd48ccce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.024452367s STEP: Saw pod success Mar 8 10:56:18.555: INFO: Pod "pod-secrets-d6cfba3f-7e0e-450e-bd14-96c8fd48ccce" satisfied condition "success or failure" Mar 8 10:56:18.558: INFO: Trying to get logs from node kind-control-plane pod pod-secrets-d6cfba3f-7e0e-450e-bd14-96c8fd48ccce container secret-env-test: STEP: delete the pod Mar 8 10:56:18.598: INFO: Waiting for pod pod-secrets-d6cfba3f-7e0e-450e-bd14-96c8fd48ccce to disappear Mar 8 10:56:18.606: INFO: Pod pod-secrets-d6cfba3f-7e0e-450e-bd14-96c8fd48ccce no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:56:18.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9680" for this suite. 
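The pod this test builds wires a secret key into the container environment via secretKeyRef and then greps the container log for the value. A minimal sketch of that pod shape (names, image, and key are illustrative, not the suite's exact values; running it prints the manifest as JSON):

```go
package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						// The kubelet injects the secret's value at start.
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	json.NewEncoder(os.Stdout).Encode(pod)
}
```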
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":68,"skipped":1046,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:56:18.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1768 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 8 10:56:18.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-3643' Mar 8 10:56:18.787: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 8 10:56:18.787: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1773 Mar 8 10:56:18.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-3643' Mar 8 10:56:18.971: INFO: stderr: "" Mar 8 10:56:18.971: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:56:18.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3643" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":69,"skipped":1076,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:56:18.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 10:56:19.391: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 8 10:56:21.416: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719261779, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719261779, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719261779, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719261779, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 10:56:24.434: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 10:56:24.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:56:25.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8563" for this suite. 
STEP: Destroying namespace "webhook-8563-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.732 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":70,"skipped":1099,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:56:25.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:56:42.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7049" for this suite. • [SLOW TEST:17.142 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":278,"completed":71,"skipped":1106,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:56:42.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 8 10:56:42.963: INFO: Waiting up to 5m0s for pod "downwardapi-volume-87576489-9770-49c8-ac0b-8dce2757d9a5" in namespace "downward-api-2623" to be "success or failure" Mar 8 10:56:42.966: INFO: Pod "downwardapi-volume-87576489-9770-49c8-ac0b-8dce2757d9a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.712272ms Mar 8 10:56:44.970: INFO: Pod "downwardapi-volume-87576489-9770-49c8-ac0b-8dce2757d9a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006356984s STEP: Saw pod success Mar 8 10:56:44.970: INFO: Pod "downwardapi-volume-87576489-9770-49c8-ac0b-8dce2757d9a5" satisfied condition "success or failure" Mar 8 10:56:44.973: INFO: Trying to get logs from node kind-control-plane pod downwardapi-volume-87576489-9770-49c8-ac0b-8dce2757d9a5 container client-container: STEP: delete the pod Mar 8 10:56:45.011: INFO: Waiting for pod downwardapi-volume-87576489-9770-49c8-ac0b-8dce2757d9a5 to disappear Mar 8 10:56:45.021: INFO: Pod downwardapi-volume-87576489-9770-49c8-ac0b-8dce2757d9a5 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:56:45.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2623" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":72,"skipped":1113,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:56:45.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 10:56:46.132: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 10:56:49.146: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:56:49.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-507" for this suite. STEP: Destroying namespace "webhook-507-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":73,"skipped":1122,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:56:49.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-d9532baa-c19d-4de6-a908-20d91620e53a STEP: Creating a pod to test consume secrets Mar 8 10:56:49.531: INFO: Waiting up to 5m0s for pod "pod-secrets-087ba390-bcb5-4581-bafd-f2d1a3edd7e9" in namespace "secrets-1251" to be "success or failure" Mar 8 10:56:49.580: INFO: Pod "pod-secrets-087ba390-bcb5-4581-bafd-f2d1a3edd7e9": Phase="Pending", Reason="", readiness=false. Elapsed: 48.745411ms Mar 8 10:56:51.584: INFO: Pod "pod-secrets-087ba390-bcb5-4581-bafd-f2d1a3edd7e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.052540442s STEP: Saw pod success Mar 8 10:56:51.584: INFO: Pod "pod-secrets-087ba390-bcb5-4581-bafd-f2d1a3edd7e9" satisfied condition "success or failure" Mar 8 10:56:51.587: INFO: Trying to get logs from node kind-control-plane pod pod-secrets-087ba390-bcb5-4581-bafd-f2d1a3edd7e9 container secret-volume-test: STEP: delete the pod Mar 8 10:56:51.620: INFO: Waiting for pod pod-secrets-087ba390-bcb5-4581-bafd-f2d1a3edd7e9 to disappear Mar 8 10:56:51.626: INFO: Pod pod-secrets-087ba390-bcb5-4581-bafd-f2d1a3edd7e9 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:56:51.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1251" for this suite. 
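"Multiple volumes" here means one pod with two volumes backed by the same secret, mounted at different paths. A minimal sketch (names and image are illustrative; the suite's pod additionally sets file modes and verifies the mounted content):

```go
package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Both volumes reference the same secret object.
	src := corev1.VolumeSource{
		Secret: &corev1.SecretVolumeSource{SecretName: "secret-test"},
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls /etc/secret-volume-1 /etc/secret-volume-2"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
					{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
				},
			}},
			Volumes: []corev1.Volume{
				{Name: "secret-volume-1", VolumeSource: src},
				{Name: "secret-volume-2", VolumeSource: src},
			},
		},
	}
	json.NewEncoder(os.Stdout).Encode(pod)
}
```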
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":74,"skipped":1137,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:56:51.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 10:56:52.091: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 10:56:55.127: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:56:55.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5888" for this suite. STEP: Destroying namespace "webhook-5888-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":75,"skipped":1180,"failed":0} SSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:56:55.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
Mar 8 10:56:55.712: INFO: Created pod &Pod{ObjectMeta:{dns-4425 dns-4425 /api/v1/namespaces/dns-4425/pods/dns-4425 669a5e14-0c06-40bd-98b6-d4a452bf401b 14348 0 2020-03-08 10:56:55 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x5bqq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x5bqq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x5bqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... 
Mar 8 10:56:57.731: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-4425 PodName:dns-4425 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 10:56:57.731: INFO: >>> kubeConfig: /root/.kube/config I0308 10:56:57.771007 6 log.go:172] (0xc002ad9b80) (0xc000fa1cc0) Create stream I0308 10:56:57.771042 6 log.go:172] (0xc002ad9b80) (0xc000fa1cc0) Stream added, broadcasting: 1 I0308 10:56:57.772942 6 log.go:172] (0xc002ad9b80) Reply frame received for 1 I0308 10:56:57.772993 6 log.go:172] (0xc002ad9b80) (0xc000fa1e00) Create stream I0308 10:56:57.773010 6 log.go:172] (0xc002ad9b80) (0xc000fa1e00) Stream added, broadcasting: 3 I0308 10:56:57.774264 6 log.go:172] (0xc002ad9b80) Reply frame received for 3 I0308 10:56:57.774318 6 log.go:172] (0xc002ad9b80) (0xc001de0960) Create stream I0308 10:56:57.774334 6 log.go:172] (0xc002ad9b80) (0xc001de0960) Stream added, broadcasting: 5 I0308 10:56:57.775387 6 log.go:172] (0xc002ad9b80) Reply frame received for 5 I0308 10:56:57.852997 6 log.go:172] (0xc002ad9b80) Data frame received for 3 I0308 10:56:57.853025 6 log.go:172] (0xc000fa1e00) (3) Data frame handling I0308 10:56:57.853043 6 log.go:172] (0xc000fa1e00) (3) Data frame sent I0308 10:56:57.853576 6 log.go:172] (0xc002ad9b80) Data frame received for 3 I0308 10:56:57.853600 6 log.go:172] (0xc000fa1e00) (3) Data frame handling I0308 10:56:57.853939 6 log.go:172] (0xc002ad9b80) Data frame received for 5 I0308 10:56:57.853964 6 log.go:172] (0xc001de0960) (5) Data frame handling I0308 10:56:57.855616 6 log.go:172] (0xc002ad9b80) Data frame received for 1 I0308 10:56:57.855652 6 log.go:172] (0xc000fa1cc0) (1) Data frame handling I0308 10:56:57.855682 6 log.go:172] (0xc000fa1cc0) (1) Data frame sent I0308 10:56:57.855703 6 log.go:172] (0xc002ad9b80) (0xc000fa1cc0) Stream removed, broadcasting: 1 I0308 10:56:57.855727 6 log.go:172] (0xc002ad9b80) Go away received I0308 10:56:57.855867 6 log.go:172] (0xc002ad9b80) (0xc000fa1cc0) Stream removed, broadcasting: 1 I0308 10:56:57.855889 6 log.go:172] (0xc002ad9b80) (0xc000fa1e00) Stream removed, broadcasting: 3 I0308 10:56:57.855900 6 log.go:172] (0xc002ad9b80) (0xc001de0960) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
Mar 8 10:56:57.855: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-4425 PodName:dns-4425 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 10:56:57.855: INFO: >>> kubeConfig: /root/.kube/config I0308 10:56:57.890295 6 log.go:172] (0xc002a904d0) (0xc001058960) Create stream I0308 10:56:57.890326 6 log.go:172] (0xc002a904d0) (0xc001058960) Stream added, broadcasting: 1 I0308 10:56:57.893215 6 log.go:172] (0xc002a904d0) Reply frame received for 1 I0308 10:56:57.893264 6 log.go:172] (0xc002a904d0) (0xc000fa1f40) Create stream I0308 10:56:57.893280 6 log.go:172] (0xc002a904d0) (0xc000fa1f40) Stream added, broadcasting: 3 I0308 10:56:57.896167 6 log.go:172] (0xc002a904d0) Reply frame received for 3 I0308 10:56:57.896223 6 log.go:172] (0xc002a904d0) (0xc001de0a00) Create stream I0308 10:56:57.896243 6 log.go:172] (0xc002a904d0) (0xc001de0a00) Stream added, broadcasting: 5 I0308 10:56:57.897136 6 log.go:172] (0xc002a904d0) Reply frame received for 5 I0308 10:56:57.963951 6 log.go:172] (0xc002a904d0) Data frame received for 3 I0308 10:56:57.963973 6 log.go:172] (0xc000fa1f40) (3) Data frame handling I0308 10:56:57.964005 6 log.go:172] (0xc000fa1f40) (3) Data frame sent I0308 10:56:57.964663 6 log.go:172] (0xc002a904d0) Data frame received for 3 I0308 10:56:57.964683 6 log.go:172] (0xc000fa1f40) (3) Data frame handling I0308 10:56:57.964930 6 log.go:172] (0xc002a904d0) Data frame received for 5 I0308 10:56:57.964945 6 log.go:172] (0xc001de0a00) (5) Data frame handling I0308 10:56:57.966473 6 log.go:172] (0xc002a904d0) Data frame received for 1 I0308 10:56:57.966505 6 log.go:172] (0xc001058960) (1) Data frame handling I0308 10:56:57.966545 6 log.go:172] (0xc001058960) (1) Data frame sent I0308 10:56:57.966572 6 log.go:172] (0xc002a904d0) (0xc001058960) Stream removed, broadcasting: 1 I0308 10:56:57.966595 6 log.go:172] (0xc002a904d0) Go away received I0308 10:56:57.966663 6 log.go:172] (0xc002a904d0) (0xc001058960) Stream removed, broadcasting: 1 I0308 10:56:57.966686 6 log.go:172] (0xc002a904d0) (0xc000fa1f40) Stream removed, broadcasting: 3 I0308 10:56:57.966699 6 log.go:172] (0xc002a904d0) (0xc001de0a00) Stream removed, broadcasting: 5 Mar 8 10:56:57.966: INFO: Deleting pod dns-4425... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:56:57.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4425" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":76,"skipped":1186,"failed":0} SSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:56:57.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 10:56:58.065: INFO: (0) /api/v1/nodes/kind-control-plane/proxy/logs/:
containers/ pods/ (200; 7.549822ms)
Mar 8 10:56:58.068: INFO: (1) /api/v1/nodes/kind-control-plane/proxy/logs/: containers/ pods/ (200; 3.052129ms)
Mar 8 10:56:58.071: INFO: (2) /api/v1/nodes/kind-control-plane/proxy/logs/: containers/ pods/ (200; 2.989481ms)
Mar 8 10:56:58.074: INFO: (3) /api/v1/nodes/kind-control-plane/proxy/logs/: containers/ pods/ (200; 2.876509ms)
Mar 8 10:56:58.077: INFO: (4) /api/v1/nodes/kind-control-plane/proxy/logs/: containers/ pods/ (200; 3.10317ms)
Mar 8 10:56:58.080: INFO: (5) /api/v1/nodes/kind-control-plane/proxy/logs/: containers/ pods/ (200; 3.025803ms)
Mar 8 10:56:58.083: INFO: (6) /api/v1/nodes/kind-control-plane/proxy/logs/: containers/ pods/ (200; 2.874793ms)
Mar 8 10:56:58.086: INFO: (7) /api/v1/nodes/kind-control-plane/proxy/logs/: containers/ pods/ (200; 2.816253ms)
Mar 8 10:56:58.089: INFO: (8) /api/v1/nodes/kind-control-plane/proxy/logs/: containers/ pods/ (200; 2.870455ms)
Mar 8 10:56:58.092: INFO: (9) /api/v1/nodes/kind-control-plane/proxy/logs/: containers/ pods/ (200; 2.790659ms)
Mar 8 10:56:58.095: INFO: (10) /api/v1/nodes/kind-control-plane/proxy/logs/: containers/ pods/ (200; 2.741634ms)
Mar 8 10:56:58.097: INFO: (11) /api/v1/nodes/kind-control-plane/proxy/logs/: containers/ pods/ (200; 2.684739ms)
Mar 8 10:56:58.100: INFO: (12) /api/v1/nodes/kind-control-plane/proxy/logs/: containers/ pods/ (200; 2.607921ms)
Mar 8 10:56:58.103: INFO: (13) /api/v1/nodes/kind-control-plane/proxy/logs/: containers/ pods/ (200; 3.125952ms)
Mar 8 10:56:58.107: INFO: (14) /api/v1/nodes/kind-control-plane/proxy/logs/: containers/ pods/ (200; 3.586301ms)
Mar 8 10:56:58.110: INFO: (15) /api/v1/nodes/kind-control-plane/proxy/logs/: containers/ pods/ (200; 3.052413ms)
Mar 8 10:56:58.113: INFO: (16) /api/v1/nodes/kind-control-plane/proxy/logs/: containers/ pods/ (200; 2.806101ms)
Mar 8 10:56:58.115: INFO: (17) /api/v1/nodes/kind-control-plane/proxy/logs/: containers/ pods/ (200; 2.747699ms)
Mar 8 10:56:58.119: INFO: (18) /api/v1/nodes/kind-control-plane/proxy/logs/: containers/ pods/ (200; 3.152767ms)
Mar 8 10:56:58.121: INFO: (19) /api/v1/nodes/kind-control-plane/proxy/logs/: containers/ pods/
(200; 2.903359ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:56:58.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-1042" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":77,"skipped":1189,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:56:58.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 10:56:58.544: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 8 10:57:00.554: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719261818, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719261818, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719261818, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719261818, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 10:57:03.584: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:57:13.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7040" for this suite. STEP: Destroying namespace "webhook-7040-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.728 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":78,"skipped":1195,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:57:13.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0308 10:57:24.056194 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 8 10:57:24.056: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:57:24.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1729" for this suite. • [SLOW TEST:10.207 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":79,"skipped":1200,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:57:24.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:57:26.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-513" for this suite. 
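"Command and args are blank" means the container spec sets neither, so the runtime falls back to the image's ENTRYPOINT and CMD; the test then inspects the running container to confirm the defaults took effect. A sketch of that spec (the container name matches the log; the image here is illustrative, reusing one from elsewhere in this run):

```go
package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "docker.io/library/httpd:2.4.38-alpine",
				// Command and Args deliberately omitted: image defaults apply.
			}},
		},
	}
	json.NewEncoder(os.Stdout).Encode(pod)
}
```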
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":80,"skipped":1217,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:57:26.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-407f048a-671d-4885-8c94-2ad03612aa00 STEP: Creating a pod to test consume configMaps Mar 8 10:57:26.305: INFO: Waiting up to 5m0s for pod "pod-configmaps-4fe2d80d-2453-4ee2-a3d6-ff3fb1ec5abe" in namespace "configmap-3776" to be "success or failure" Mar 8 10:57:26.345: INFO: Pod "pod-configmaps-4fe2d80d-2453-4ee2-a3d6-ff3fb1ec5abe": Phase="Pending", Reason="", readiness=false. Elapsed: 40.869408ms Mar 8 10:57:28.349: INFO: Pod "pod-configmaps-4fe2d80d-2453-4ee2-a3d6-ff3fb1ec5abe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.044611163s STEP: Saw pod success Mar 8 10:57:28.349: INFO: Pod "pod-configmaps-4fe2d80d-2453-4ee2-a3d6-ff3fb1ec5abe" satisfied condition "success or failure" Mar 8 10:57:28.352: INFO: Trying to get logs from node kind-control-plane pod pod-configmaps-4fe2d80d-2453-4ee2-a3d6-ff3fb1ec5abe container configmap-volume-test: STEP: delete the pod Mar 8 10:57:28.395: INFO: Waiting for pod pod-configmaps-4fe2d80d-2453-4ee2-a3d6-ff3fb1ec5abe to disappear Mar 8 10:57:28.398: INFO: Pod pod-configmaps-4fe2d80d-2453-4ee2-a3d6-ff3fb1ec5abe no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 10:57:28.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3776" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":81,"skipped":1248,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 10:57:28.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 8 10:57:28.467: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 8 10:57:28.477: INFO: Waiting for terminating namespaces to be deleted... Mar 8 10:57:28.480: INFO: Logging pods the kubelet thinks is on node kind-control-plane before test Mar 8 10:57:28.489: INFO: etcd-kind-control-plane from kube-system started at 2020-03-08 10:17:29 +0000 UTC (1 container statuses recorded) Mar 8 10:57:28.489: INFO: Container etcd ready: true, restart count 0 Mar 8 10:57:28.489: INFO: kube-controller-manager-kind-control-plane from kube-system started at 2020-03-08 10:17:29 +0000 UTC (1 container statuses recorded) Mar 8 10:57:28.489: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 8 10:57:28.489: INFO: kube-proxy-9qrbc from kube-system started at 2020-03-08 10:17:45 +0000 UTC (1 container statuses recorded) Mar 8 10:57:28.489: INFO: Container kube-proxy ready: true, restart count 0 Mar 8 10:57:28.489: INFO: kindnet-rznts from kube-system started at 2020-03-08 10:17:45 +0000 UTC (1 container statuses recorded) Mar 8 10:57:28.489: INFO: Container kindnet-cni ready: true, restart count 0 Mar 8 10:57:28.489: INFO: simpletest-rc-to-be-deleted-gl9sd from gc-1729 started at 2020-03-08 10:57:14 +0000 UTC (1 container statuses recorded) Mar 8 10:57:28.489: INFO: Container nginx ready: true, restart count 0 Mar 8 10:57:28.489: INFO: simpletest-rc-to-be-deleted-lkbxq from gc-1729 started at 2020-03-08 10:57:14 +0000 UTC (1 container statuses recorded) Mar 8 10:57:28.489: INFO: Container nginx ready: true, restart count 0 Mar 8 10:57:28.489: INFO: kube-apiserver-kind-control-plane from kube-system started at 2020-03-08 10:17:29 +0000 UTC (1 container statuses recorded) Mar 8 10:57:28.489: INFO: Container kube-apiserver ready: true, restart count 0 Mar 8 10:57:28.489: INFO: local-path-provisioner-7745554f7f-5f2b8 from local-path-storage started at 2020-03-08 10:17:49 +0000 UTC (1 container statuses recorded) Mar 8 10:57:28.489: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 8 10:57:28.489: INFO: coredns-6955765f44-8lfgq from kube-system started at 2020-03-08 10:17:52 +0000 UTC (1 container statuses recorded) Mar 8 10:57:28.489: INFO: Container coredns ready: true, restart count 0 Mar 8 10:57:28.490: INFO: client-containers-6f2d3ef9-fe94-4ebc-8220-b51228ef6084 from containers-513 started at 2020-03-08 10:57:24 +0000 UTC (1 container statuses recorded) Mar 8 10:57:28.490: INFO: 
Container test-container ready: true, restart count 0 Mar 8 10:57:28.490: INFO: simpletest-rc-to-be-deleted-2p9f6 from gc-1729 started at 2020-03-08 10:57:14 +0000 UTC (1 container statuses recorded) Mar 8 10:57:28.490: INFO: Container nginx ready: true, restart count 0 Mar 8 10:57:28.490: INFO: kube-scheduler-kind-control-plane from kube-system started at 2020-03-08 10:17:29 +0000 UTC (1 container statuses recorded) Mar 8 10:57:28.490: INFO: Container kube-scheduler ready: true, restart count 0 Mar 8 10:57:28.490: INFO: coredns-6955765f44-2ncc6 from kube-system started at 2020-03-08 10:17:49 +0000 UTC (1 container statuses recorded) Mar 8 10:57:28.490: INFO: Container coredns ready: true, restart count 0 Mar 8 10:57:28.490: INFO: simpletest-rc-to-be-deleted-6tnzc from gc-1729 started at 2020-03-08 10:57:14 +0000 UTC (1 container statuses recorded) Mar 8 10:57:28.490: INFO: Container nginx ready: true, restart count 0 Mar 8 10:57:28.490: INFO: simpletest-rc-to-be-deleted-cbpl5 from gc-1729 started at 2020-03-08 10:57:14 +0000 UTC (1 container statuses recorded) Mar 8 10:57:28.490: INFO: Container nginx ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-d94ffd56-9bf1-4b8f-8bee-a8736c082c69 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-d94ffd56-9bf1-4b8f-8bee-a8736c082c69 off the node kind-control-plane STEP: verifying the node doesn't have the label kubernetes.io/e2e-d94ffd56-9bf1-4b8f-8bee-a8736c082c69 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:02:34.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6541" for this suite. 
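The conflict exercised above in miniature: pod4 binds hostPort 54322 with an empty hostIP, which means 0.0.0.0, so pod5's claim to the same port and protocol on 127.0.0.1 cannot be satisfied on that node and pod5 stays Pending. A sketch of pod5's port spec (the suite pins pod5 via the random node label it applied; the hostname selector and image here are illustrative stand-ins):

```go
package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod5 := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod5"},
		Spec: corev1.PodSpec{
			// Pin to the node where pod4 already holds 0.0.0.0:54322.
			NodeSelector: map[string]string{"kubernetes.io/hostname": "kind-control-plane"},
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
				Ports: []corev1.ContainerPort{{
					ContainerPort: 8080,
					HostPort:      54322,
					HostIP:        "127.0.0.1", // overlaps pod4's 0.0.0.0:54322
					Protocol:      corev1.ProtocolTCP,
				}},
			}},
		},
	}
	json.NewEncoder(os.Stdout).Encode(pod5)
}
```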
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:306.278 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":82,"skipped":1285,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:02:34.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-da6bb790-23d6-4f81-a31d-0e0859f24355 STEP: Creating a pod to test consume configMaps Mar 8 11:02:34.793: INFO: Waiting up to 5m0s for pod "pod-configmaps-56b7444a-6b1f-432d-a2f0-5da58bbe1efb" in namespace "configmap-7625" to be "success or failure" Mar 8 11:02:34.802: INFO: Pod "pod-configmaps-56b7444a-6b1f-432d-a2f0-5da58bbe1efb": Phase="Pending", Reason="", readiness=false. Elapsed: 9.700502ms Mar 8 11:02:36.807: INFO: Pod "pod-configmaps-56b7444a-6b1f-432d-a2f0-5da58bbe1efb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01398407s STEP: Saw pod success Mar 8 11:02:36.807: INFO: Pod "pod-configmaps-56b7444a-6b1f-432d-a2f0-5da58bbe1efb" satisfied condition "success or failure" Mar 8 11:02:36.810: INFO: Trying to get logs from node kind-control-plane pod pod-configmaps-56b7444a-6b1f-432d-a2f0-5da58bbe1efb container configmap-volume-test: STEP: delete the pod Mar 8 11:02:36.847: INFO: Waiting for pod pod-configmaps-56b7444a-6b1f-432d-a2f0-5da58bbe1efb to disappear Mar 8 11:02:36.850: INFO: Pod pod-configmaps-56b7444a-6b1f-432d-a2f0-5da58bbe1efb no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:02:36.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7625" for this suite. 
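The defaultMode behavior verified above is easy to inspect by hand. A sketch with illustrative names (the suite uses a dedicated mounttest image; plain busybox is enough to see the mode):

kubectl create configmap cm-modes --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cm-defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: probe
    image: busybox
    # -L dereferences the ..data symlinks the kubelet uses for atomic updates
    command: ["sh", "-c", "ls -lL /etc/cm && cat /etc/cm/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: cm-modes
      defaultMode: 0400   # octal; each projected file becomes r--------
EOF
kubectl logs cm-defaultmode-demo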
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":83,"skipped":1292,"failed":0} SSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:02:36.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 11:02:36.947: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:02:40.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7276" for this suite. •{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":84,"skipped":1299,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:02:40.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 11:02:41.734: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 11:02:44.794: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Mar 8 11:02:44.817: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:02:44.839: INFO: Waiting up to 3m0s for all (but 0) nodes 
to be ready STEP: Destroying namespace "webhook-9974" for this suite. STEP: Destroying namespace "webhook-9974-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":85,"skipped":1300,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:02:44.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:02:45.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6551" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":86,"skipped":1312,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:02:45.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 11:02:45.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2130' Mar 8 11:02:45.349: INFO: stderr: "" Mar 8 11:02:45.349: INFO: stdout: "replicationcontroller/agnhost-master created\n" Mar 8 11:02:45.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2130' Mar 8 11:02:45.612: INFO: stderr: "" Mar 8 11:02:45.612: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
Mar 8 11:02:46.616: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 11:02:46.616: INFO: Found 0 / 1 Mar 8 11:02:47.616: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 11:02:47.616: INFO: Found 1 / 1 Mar 8 11:02:47.616: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 8 11:02:47.620: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 11:02:47.620: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 8 11:02:47.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-5d6sh --namespace=kubectl-2130' Mar 8 11:02:47.756: INFO: stderr: "" Mar 8 11:02:47.757: INFO: stdout: "Name: agnhost-master-5d6sh\nNamespace: kubectl-2130\nPriority: 0\nNode: kind-control-plane/172.17.0.2\nStart Time: Sun, 08 Mar 2020 11:02:45 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.0.169\nIPs:\n IP: 10.244.0.169\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://4d425f5225b9e40f41b1bf9a70d4d47ee8493ba70daaf36aee19dbed4abd57e9\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sun, 08 Mar 2020 11:02:46 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-98ggt (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-98ggt:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-98ggt\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 2s default-scheduler Successfully assigned kubectl-2130/agnhost-master-5d6sh to kind-control-plane\n Normal Pulled 1s kubelet, kind-control-plane Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 1s kubelet, kind-control-plane Created container agnhost-master\n Normal Started 1s kubelet, kind-control-plane Started container agnhost-master\n" Mar 8 11:02:47.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-2130' Mar 8 11:02:47.887: INFO: stderr: "" Mar 8 11:02:47.887: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-2130\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 2s replication-controller Created pod: agnhost-master-5d6sh\n" Mar 8 11:02:47.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-2130' Mar 8 11:02:47.996: INFO: stderr: "" Mar 8 11:02:47.996: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-2130\nLabels: app=agnhost\n 
role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.96.127.177\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.0.169:6379\nSession Affinity: None\nEvents: \n" Mar 8 11:02:48.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node kind-control-plane' Mar 8 11:02:48.125: INFO: stderr: "" Mar 8 11:02:48.125: INFO: stdout: "Name: kind-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=kind-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 08 Mar 2020 10:17:25 +0000\nTaints: \nUnschedulable: false\nLease:\n HolderIdentity: kind-control-plane\n AcquireTime: \n RenewTime: Sun, 08 Mar 2020 11:02:41 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sun, 08 Mar 2020 11:02:30 +0000 Sun, 08 Mar 2020 10:17:22 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sun, 08 Mar 2020 11:02:30 +0000 Sun, 08 Mar 2020 10:17:22 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sun, 08 Mar 2020 11:02:30 +0000 Sun, 08 Mar 2020 10:17:22 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sun, 08 Mar 2020 11:02:30 +0000 Sun, 08 Mar 2020 10:17:49 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.2\n Hostname: kind-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131767112Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131767112Ki\n pods: 110\nSystem Info:\n Machine ID: fb15ad632d6d4f17a6c81bd2460561b7\n System UUID: 3413a663-8564-42a4-9d35-4bc84ffe178b\n Boot ID: 3de0b5b8-8b8f-48d3-9705-cabccc881bdb\n Kernel Version: 4.4.0-142-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.17.0\n Kube-Proxy Version: v1.17.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (12 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-2ncc6 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 45m\n kube-system coredns-6955765f44-8lfgq 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 45m\n kube-system etcd-kind-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 45m\n kube-system kindnet-rznts 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 45m\n kube-system kube-apiserver-kind-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 45m\n kube-system kube-controller-manager-kind-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 45m\n kube-system kube-proxy-9qrbc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 45m\n kube-system kube-scheduler-kind-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 45m\n kubectl-2130 agnhost-master-5d6sh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3s\n local-path-storage local-path-provisioner-7745554f7f-5f2b8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 45m\n pods-7276 pod-logs-websocket-54933550-05e1-4d42-a45d-f54e12d08782 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12s\n 
sched-pred-6541 pod4 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m16s\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Starting 45m kubelet, kind-control-plane Starting kubelet.\n Normal NodeHasSufficientMemory 45m (x3 over 45m) kubelet, kind-control-plane Node kind-control-plane status is now: NodeHasSufficientMemory\n Normal NodeHasNoDiskPressure 45m (x3 over 45m) kubelet, kind-control-plane Node kind-control-plane status is now: NodeHasNoDiskPressure\n Normal NodeHasSufficientPID 45m (x2 over 45m) kubelet, kind-control-plane Node kind-control-plane status is now: NodeHasSufficientPID\n Normal NodeAllocatableEnforced 45m kubelet, kind-control-plane Updated Node Allocatable limit across pods\n Normal NodeAllocatableEnforced 45m kubelet, kind-control-plane Updated Node Allocatable limit across pods\n Normal Starting 45m kubelet, kind-control-plane Starting kubelet.\n Normal NodeHasSufficientMemory 45m kubelet, kind-control-plane Node kind-control-plane status is now: NodeHasSufficientMemory\n Normal NodeHasNoDiskPressure 45m kubelet, kind-control-plane Node kind-control-plane status is now: NodeHasNoDiskPressure\n Normal NodeHasSufficientPID 45m kubelet, kind-control-plane Node kind-control-plane status is now: NodeHasSufficientPID\n Warning readOnlySysFS 45m kube-proxy, kind-control-plane CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000)\n Normal Starting 45m kube-proxy, kind-control-plane Starting kube-proxy.\n Normal NodeReady 44m kubelet, kind-control-plane Node kind-control-plane status is now: NodeReady\n" Mar 8 11:02:48.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-2130' Mar 8 11:02:48.238: INFO: stderr: "" Mar 8 11:02:48.238: INFO: stdout: "Name: kubectl-2130\nLabels: e2e-framework=kubectl\n e2e-run=dbb24fc6-c14c-431f-93aa-3acce1801c6d\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:02:48.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2130" for this suite. 
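The test above drives kubectl describe across several kinds (pod, rc, service, node, namespace) and checks the output for the expected fields. The same inspection works against any workload; the names below are illustrative:

kubectl create namespace describe-demo
kubectl create deployment web --image=nginx -n describe-demo
kubectl describe deployment web -n describe-demo
kubectl describe pod -l app=web -n describe-demo   # describe accepts label selectors, not just names
kubectl describe node kind-control-plane           # the same node dump captured above
kubectl describe namespace describe-demo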
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":87,"skipped":1345,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:02:48.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 11:02:48.310: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:02:49.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5830" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":88,"skipped":1359,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:02:49.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 8 11:02:49.606: INFO: Waiting up to 5m0s for pod "downwardapi-volume-289d75df-bbfe-4af2-af25-a2e253fcd47a" in namespace "downward-api-8443" to be "success or failure" Mar 8 11:02:49.611: INFO: Pod "downwardapi-volume-289d75df-bbfe-4af2-af25-a2e253fcd47a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.82561ms Mar 8 11:02:51.615: INFO: Pod "downwardapi-volume-289d75df-bbfe-4af2-af25-a2e253fcd47a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.008640734s STEP: Saw pod success Mar 8 11:02:51.615: INFO: Pod "downwardapi-volume-289d75df-bbfe-4af2-af25-a2e253fcd47a" satisfied condition "success or failure" Mar 8 11:02:51.617: INFO: Trying to get logs from node kind-control-plane pod downwardapi-volume-289d75df-bbfe-4af2-af25-a2e253fcd47a container client-container: STEP: delete the pod Mar 8 11:02:51.641: INFO: Waiting for pod downwardapi-volume-289d75df-bbfe-4af2-af25-a2e253fcd47a to disappear Mar 8 11:02:51.661: INFO: Pod downwardapi-volume-289d75df-bbfe-4af2-af25-a2e253fcd47a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:02:51.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8443" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":89,"skipped":1366,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:02:51.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: Gathering metrics W0308 11:02:57.793241 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 8 11:02:57.793: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:02:57.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8510" for this suite. 
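"The deleteOptions say so" refers to propagationPolicy: Foreground. Under foreground deletion the owner is kept, finalized with foregroundDeletion, until the garbage collector has removed all of its dependents. A raw-API sketch (namespace and rc name are illustrative):

kubectl proxy --port=8001 &
curl -X DELETE \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
  http://127.0.0.1:8001/api/v1/namespaces/default/replicationcontrollers/my-rc
# While its pods are being deleted the rc is still readable, carrying a
# deletionTimestamp and the foregroundDeletion finalizer:
kubectl get rc my-rc -o jsonpath='{.metadata.finalizers}'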
• [SLOW TEST:6.093 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":90,"skipped":1380,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:02:57.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 8 11:02:57.875: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:03:03.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4972" for this suite. 
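The "invoke init containers" check boils down to the ordering contract: on a restartPolicy: Always pod, each init container runs to completion, one at a time and in spec order, before any app container starts. A minimal sketch (names are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Always
  initContainers:          # run sequentially, each to completion
  - name: init1
    image: busybox
    command: ["sh", "-c", "echo first"]
  - name: init2
    image: busybox
    command: ["sh", "-c", "echo second"]
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo app up; sleep 3600"]
EOF
kubectl get pod init-demo -w   # STATUS walks Init:0/2 -> Init:1/2 -> PodInitializing -> Running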
• [SLOW TEST:5.563 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":91,"skipped":1418,"failed":0} SSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:03:03.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod Mar 8 11:03:05.478: INFO: Pod pod-hostip-7f81013f-7e4a-4870-8fcc-42e112a6df9b has hostIP: 172.17.0.2 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:03:05.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4498" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1424,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:03:05.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all Mar 8 11:03:05.543: INFO: Waiting up to 5m0s for pod "client-containers-7d860a0a-1a8a-4f01-b762-b069b3dc5adb" in namespace "containers-1627" to be "success or failure" Mar 8 11:03:05.547: INFO: Pod "client-containers-7d860a0a-1a8a-4f01-b762-b069b3dc5adb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.004904ms Mar 8 11:03:07.549: INFO: Pod "client-containers-7d860a0a-1a8a-4f01-b762-b069b3dc5adb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.006172877s STEP: Saw pod success Mar 8 11:03:07.549: INFO: Pod "client-containers-7d860a0a-1a8a-4f01-b762-b069b3dc5adb" satisfied condition "success or failure" Mar 8 11:03:07.551: INFO: Trying to get logs from node kind-control-plane pod client-containers-7d860a0a-1a8a-4f01-b762-b069b3dc5adb container test-container: STEP: delete the pod Mar 8 11:03:07.563: INFO: Waiting for pod client-containers-7d860a0a-1a8a-4f01-b762-b069b3dc5adb to disappear Mar 8 11:03:07.568: INFO: Pod client-containers-7d860a0a-1a8a-4f01-b762-b069b3dc5adb no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:03:07.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1627" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1463,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:03:07.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command Mar 8 11:03:07.674: INFO: Waiting up to 5m0s for pod "client-containers-5e174835-7d7d-4aeb-8428-78f28b937720" in namespace "containers-679" to be "success or failure" Mar 8 11:03:07.694: INFO: Pod "client-containers-5e174835-7d7d-4aeb-8428-78f28b937720": Phase="Pending", Reason="", readiness=false. Elapsed: 19.710694ms Mar 8 11:03:09.698: INFO: Pod "client-containers-5e174835-7d7d-4aeb-8428-78f28b937720": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.02341275s STEP: Saw pod success Mar 8 11:03:09.698: INFO: Pod "client-containers-5e174835-7d7d-4aeb-8428-78f28b937720" satisfied condition "success or failure" Mar 8 11:03:09.700: INFO: Trying to get logs from node kind-control-plane pod client-containers-5e174835-7d7d-4aeb-8428-78f28b937720 container test-container: STEP: delete the pod Mar 8 11:03:09.748: INFO: Waiting for pod client-containers-5e174835-7d7d-4aeb-8428-78f28b937720 to disappear Mar 8 11:03:09.755: INFO: Pod client-containers-5e174835-7d7d-4aeb-8428-78f28b937720 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:03:09.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-679" for this suite. 
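The two Docker Containers tests above map directly onto the command and args fields of the container spec: command replaces the image ENTRYPOINT, args replaces its CMD, and either can be overridden independently. A sketch (pod name and output are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: entrypoint-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/echo"]             # overrides ENTRYPOINT
    args: ["overridden", "arguments"]  # overrides CMD
EOF
kubectl logs entrypoint-demo   # prints: overridden arguments

Setting only command, as in the "(docker entrypoint)" case above, drops the image CMD as well; setting only args keeps the image ENTRYPOINT and feeds it the new arguments.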
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":94,"skipped":1473,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:03:09.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 11:03:09.814: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:03:10.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1001" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":95,"skipped":1476,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:03:10.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-7612/secret-test-afd9897c-1c0c-4109-bd8a-b0f443fc6536 STEP: Creating a pod to test consume secrets Mar 8 11:03:10.907: INFO: Waiting up to 5m0s for pod "pod-configmaps-96f25e26-9f5d-41c4-9a3e-a2300ccffe4e" in namespace "secrets-7612" to be "success or failure" Mar 8 11:03:10.910: INFO: Pod "pod-configmaps-96f25e26-9f5d-41c4-9a3e-a2300ccffe4e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.697708ms Mar 8 11:03:12.913: INFO: Pod "pod-configmaps-96f25e26-9f5d-41c4-9a3e-a2300ccffe4e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.00679953s STEP: Saw pod success Mar 8 11:03:12.913: INFO: Pod "pod-configmaps-96f25e26-9f5d-41c4-9a3e-a2300ccffe4e" satisfied condition "success or failure" Mar 8 11:03:12.916: INFO: Trying to get logs from node kind-control-plane pod pod-configmaps-96f25e26-9f5d-41c4-9a3e-a2300ccffe4e container env-test: STEP: delete the pod Mar 8 11:03:12.947: INFO: Waiting for pod pod-configmaps-96f25e26-9f5d-41c4-9a3e-a2300ccffe4e to disappear Mar 8 11:03:12.954: INFO: Pod pod-configmaps-96f25e26-9f5d-41c4-9a3e-a2300ccffe4e no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:03:12.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7612" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":96,"skipped":1481,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:03:12.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 8 11:03:13.024: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c37cdbe8-12ce-4349-aa7f-7befd2f8891f" in namespace "downward-api-3969" to be "success or failure" Mar 8 11:03:13.032: INFO: Pod "downwardapi-volume-c37cdbe8-12ce-4349-aa7f-7befd2f8891f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.749383ms Mar 8 11:03:15.035: INFO: Pod "downwardapi-volume-c37cdbe8-12ce-4349-aa7f-7befd2f8891f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010993442s STEP: Saw pod success Mar 8 11:03:15.035: INFO: Pod "downwardapi-volume-c37cdbe8-12ce-4349-aa7f-7befd2f8891f" satisfied condition "success or failure" Mar 8 11:03:15.037: INFO: Trying to get logs from node kind-control-plane pod downwardapi-volume-c37cdbe8-12ce-4349-aa7f-7befd2f8891f container client-container: STEP: delete the pod Mar 8 11:03:15.069: INFO: Waiting for pod downwardapi-volume-c37cdbe8-12ce-4349-aa7f-7befd2f8891f to disappear Mar 8 11:03:15.074: INFO: Pod downwardapi-volume-c37cdbe8-12ce-4349-aa7f-7befd2f8891f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:03:15.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3969" for this suite. 
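The downward API volume items used above come from resourceFieldRef. A sketch exposing the container's cpu limit as a file (names and the 500m figure are illustrative; divisor controls the unit):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m   # file contains "500"; with the default divisor of 1 it would read "1"
EOF
kubectl logs downward-limit-demo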
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":97,"skipped":1501,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:03:15.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 11:03:17.189: INFO: Waiting up to 5m0s for pod "client-envvars-e382fa87-5ee2-421f-98eb-6b45f3a9b96f" in namespace "pods-159" to be "success or failure" Mar 8 11:03:17.194: INFO: Pod "client-envvars-e382fa87-5ee2-421f-98eb-6b45f3a9b96f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.51406ms Mar 8 11:03:19.198: INFO: Pod "client-envvars-e382fa87-5ee2-421f-98eb-6b45f3a9b96f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008836738s STEP: Saw pod success Mar 8 11:03:19.198: INFO: Pod "client-envvars-e382fa87-5ee2-421f-98eb-6b45f3a9b96f" satisfied condition "success or failure" Mar 8 11:03:19.200: INFO: Trying to get logs from node kind-control-plane pod client-envvars-e382fa87-5ee2-421f-98eb-6b45f3a9b96f container env3cont: STEP: delete the pod Mar 8 11:03:19.230: INFO: Waiting for pod client-envvars-e382fa87-5ee2-421f-98eb-6b45f3a9b96f to disappear Mar 8 11:03:19.234: INFO: Pod client-envvars-e382fa87-5ee2-421f-98eb-6b45f3a9b96f no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:03:19.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-159" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":98,"skipped":1522,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:03:19.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:03:23.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7500" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":99,"skipped":1533,"failed":0} SSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:03:23.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-3514 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-3514 I0308 11:03:23.468578 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-3514, replica count: 2 I0308 11:03:26.518963 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 8 11:03:26.518: INFO: Creating new exec pod Mar 8 11:03:29.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3514 execpodgh679 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 8 11:03:29.776: INFO: stderr: "I0308 11:03:29.711310 1725 log.go:172] (0xc000b38000) (0xc00064fc20) Create stream\nI0308 11:03:29.711374 1725 log.go:172] (0xc000b38000) (0xc00064fc20) Stream added, broadcasting: 1\nI0308 
11:03:29.713962 1725 log.go:172] (0xc000b38000) Reply frame received for 1\nI0308 11:03:29.714003 1725 log.go:172] (0xc000b38000) (0xc00064fe00) Create stream\nI0308 11:03:29.714021 1725 log.go:172] (0xc000b38000) (0xc00064fe00) Stream added, broadcasting: 3\nI0308 11:03:29.714976 1725 log.go:172] (0xc000b38000) Reply frame received for 3\nI0308 11:03:29.715013 1725 log.go:172] (0xc000b38000) (0xc000b74000) Create stream\nI0308 11:03:29.715026 1725 log.go:172] (0xc000b38000) (0xc000b74000) Stream added, broadcasting: 5\nI0308 11:03:29.715942 1725 log.go:172] (0xc000b38000) Reply frame received for 5\nI0308 11:03:29.769812 1725 log.go:172] (0xc000b38000) Data frame received for 5\nI0308 11:03:29.769838 1725 log.go:172] (0xc000b74000) (5) Data frame handling\nI0308 11:03:29.769856 1725 log.go:172] (0xc000b74000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0308 11:03:29.770933 1725 log.go:172] (0xc000b38000) Data frame received for 5\nI0308 11:03:29.770964 1725 log.go:172] (0xc000b74000) (5) Data frame handling\nI0308 11:03:29.770987 1725 log.go:172] (0xc000b74000) (5) Data frame sent\nI0308 11:03:29.771012 1725 log.go:172] (0xc000b38000) Data frame received for 5\nI0308 11:03:29.771025 1725 log.go:172] (0xc000b74000) (5) Data frame handling\nI0308 11:03:29.771043 1725 log.go:172] (0xc000b38000) Data frame received for 3\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0308 11:03:29.771056 1725 log.go:172] (0xc00064fe00) (3) Data frame handling\nI0308 11:03:29.773197 1725 log.go:172] (0xc000b38000) Data frame received for 1\nI0308 11:03:29.773268 1725 log.go:172] (0xc00064fc20) (1) Data frame handling\nI0308 11:03:29.773290 1725 log.go:172] (0xc00064fc20) (1) Data frame sent\nI0308 11:03:29.773306 1725 log.go:172] (0xc000b38000) (0xc00064fc20) Stream removed, broadcasting: 1\nI0308 11:03:29.773578 1725 log.go:172] (0xc000b38000) (0xc00064fc20) Stream removed, broadcasting: 1\nI0308 11:03:29.773596 1725 log.go:172] (0xc000b38000) (0xc00064fe00) Stream removed, broadcasting: 3\nI0308 11:03:29.773719 1725 log.go:172] (0xc000b38000) (0xc000b74000) Stream removed, broadcasting: 5\n" Mar 8 11:03:29.776: INFO: stdout: "" Mar 8 11:03:29.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3514 execpodgh679 -- /bin/sh -x -c nc -zv -t -w 2 10.96.213.159 80' Mar 8 11:03:30.017: INFO: stderr: "I0308 11:03:29.956546 1746 log.go:172] (0xc0000f42c0) (0xc00062fb80) Create stream\nI0308 11:03:29.956595 1746 log.go:172] (0xc0000f42c0) (0xc00062fb80) Stream added, broadcasting: 1\nI0308 11:03:29.960211 1746 log.go:172] (0xc0000f42c0) Reply frame received for 1\nI0308 11:03:29.960290 1746 log.go:172] (0xc0000f42c0) (0xc0006c4000) Create stream\nI0308 11:03:29.960313 1746 log.go:172] (0xc0000f42c0) (0xc0006c4000) Stream added, broadcasting: 3\nI0308 11:03:29.962981 1746 log.go:172] (0xc0000f42c0) Reply frame received for 3\nI0308 11:03:29.963007 1746 log.go:172] (0xc0000f42c0) (0xc00062fc20) Create stream\nI0308 11:03:29.963014 1746 log.go:172] (0xc0000f42c0) (0xc00062fc20) Stream added, broadcasting: 5\nI0308 11:03:29.963893 1746 log.go:172] (0xc0000f42c0) Reply frame received for 5\nI0308 11:03:30.014977 1746 log.go:172] (0xc0000f42c0) Data frame received for 3\nI0308 11:03:30.015003 1746 log.go:172] (0xc0006c4000) (3) Data frame handling\nI0308 11:03:30.015022 1746 log.go:172] (0xc0000f42c0) Data frame received for 5\nI0308 11:03:30.015028 1746 log.go:172] (0xc00062fc20) (5) Data frame handling\nI0308 11:03:30.015035 1746 
log.go:172] (0xc00062fc20) (5) Data frame sent\nI0308 11:03:30.015040 1746 log.go:172] (0xc0000f42c0) Data frame received for 5\nI0308 11:03:30.015045 1746 log.go:172] (0xc00062fc20) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.213.159 80\nConnection to 10.96.213.159 80 port [tcp/http] succeeded!\nI0308 11:03:30.015898 1746 log.go:172] (0xc0000f42c0) Data frame received for 1\nI0308 11:03:30.015915 1746 log.go:172] (0xc00062fb80) (1) Data frame handling\nI0308 11:03:30.015932 1746 log.go:172] (0xc00062fb80) (1) Data frame sent\nI0308 11:03:30.015943 1746 log.go:172] (0xc0000f42c0) (0xc00062fb80) Stream removed, broadcasting: 1\nI0308 11:03:30.015958 1746 log.go:172] (0xc0000f42c0) Go away received\nI0308 11:03:30.016231 1746 log.go:172] (0xc0000f42c0) (0xc00062fb80) Stream removed, broadcasting: 1\nI0308 11:03:30.016256 1746 log.go:172] (0xc0000f42c0) (0xc0006c4000) Stream removed, broadcasting: 3\nI0308 11:03:30.016264 1746 log.go:172] (0xc0000f42c0) (0xc00062fc20) Stream removed, broadcasting: 5\n" Mar 8 11:03:30.017: INFO: stdout: "" Mar 8 11:03:30.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3514 execpodgh679 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.2 30986' Mar 8 11:03:30.169: INFO: stderr: "I0308 11:03:30.110775 1765 log.go:172] (0xc0008e4790) (0xc000a4a000) Create stream\nI0308 11:03:30.110806 1765 log.go:172] (0xc0008e4790) (0xc000a4a000) Stream added, broadcasting: 1\nI0308 11:03:30.111987 1765 log.go:172] (0xc0008e4790) Reply frame received for 1\nI0308 11:03:30.112015 1765 log.go:172] (0xc0008e4790) (0xc0005f1ae0) Create stream\nI0308 11:03:30.112025 1765 log.go:172] (0xc0008e4790) (0xc0005f1ae0) Stream added, broadcasting: 3\nI0308 11:03:30.112491 1765 log.go:172] (0xc0008e4790) Reply frame received for 3\nI0308 11:03:30.112506 1765 log.go:172] (0xc0008e4790) (0xc0005f1cc0) Create stream\nI0308 11:03:30.112511 1765 log.go:172] (0xc0008e4790) (0xc0005f1cc0) Stream added, broadcasting: 5\nI0308 11:03:30.112958 1765 log.go:172] (0xc0008e4790) Reply frame received for 5\nI0308 11:03:30.166108 1765 log.go:172] (0xc0008e4790) Data frame received for 5\nI0308 11:03:30.166139 1765 log.go:172] (0xc0005f1cc0) (5) Data frame handling\nI0308 11:03:30.166157 1765 log.go:172] (0xc0005f1cc0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.2 30986\nI0308 11:03:30.166473 1765 log.go:172] (0xc0008e4790) Data frame received for 5\nI0308 11:03:30.166494 1765 log.go:172] (0xc0005f1cc0) (5) Data frame handling\nI0308 11:03:30.166503 1765 log.go:172] (0xc0005f1cc0) (5) Data frame sent\nI0308 11:03:30.166511 1765 log.go:172] (0xc0008e4790) Data frame received for 5\nI0308 11:03:30.166516 1765 log.go:172] (0xc0005f1cc0) (5) Data frame handling\nConnection to 172.17.0.2 30986 port [tcp/30986] succeeded!\nI0308 11:03:30.166526 1765 log.go:172] (0xc0008e4790) Data frame received for 3\nI0308 11:03:30.166531 1765 log.go:172] (0xc0005f1ae0) (3) Data frame handling\nI0308 11:03:30.167644 1765 log.go:172] (0xc0008e4790) Data frame received for 1\nI0308 11:03:30.167659 1765 log.go:172] (0xc000a4a000) (1) Data frame handling\nI0308 11:03:30.167673 1765 log.go:172] (0xc000a4a000) (1) Data frame sent\nI0308 11:03:30.167686 1765 log.go:172] (0xc0008e4790) (0xc000a4a000) Stream removed, broadcasting: 1\nI0308 11:03:30.167745 1765 log.go:172] (0xc0008e4790) Go away received\nI0308 11:03:30.167941 1765 log.go:172] (0xc0008e4790) (0xc000a4a000) Stream removed, broadcasting: 1\nI0308 11:03:30.167952 1765 log.go:172] (0xc0008e4790) (0xc0005f1ae0) Stream removed, 
broadcasting: 3\nI0308 11:03:30.167957 1765 log.go:172] (0xc0008e4790) (0xc0005f1cc0) Stream removed, broadcasting: 5\n" Mar 8 11:03:30.169: INFO: stdout: "" Mar 8 11:03:30.169: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:03:30.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3514" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:6.865 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":100,"skipped":1541,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:03:30.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 11:03:30.478: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"9c5db3f0-d13f-4e97-9fb6-c98e139e32d3", Controller:(*bool)(0xc00409b436), BlockOwnerDeletion:(*bool)(0xc00409b437)}} Mar 8 11:03:30.486: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"f7077153-e5f1-4d78-9224-4c7469e014f4", Controller:(*bool)(0xc002d1e2a6), BlockOwnerDeletion:(*bool)(0xc002d1e2a7)}} Mar 8 11:03:30.526: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"3c9cedd8-5238-48c6-ab77-7fffcc272ee8", Controller:(*bool)(0xc00409b5c6), BlockOwnerDeletion:(*bool)(0xc00409b5c7)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:03:35.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7506" for this suite. 
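The circle above is three pods whose ownerReferences point at each other (pod1 owned by pod3, pod2 by pod1, pod3 by pod2); the assertion is that such a cycle never wedges the collector. Because an ownerReference needs the owner's real UID, a by-hand version has to patch the references in after creation. A sketch with illustrative names:

for p in pod1 pod2 pod3; do
  kubectl run "$p" --image=busybox --restart=Never -- sleep 3600
done
uid() { kubectl get pod "$1" -o jsonpath='{.metadata.uid}'; }
ref() { printf '{"metadata":{"ownerReferences":[{"apiVersion":"v1","kind":"Pod","name":"%s","uid":"%s"}]}}' "$1" "$(uid "$1")"; }
kubectl patch pod pod1 --type=merge -p "$(ref pod3)"
kubectl patch pod pod2 --type=merge -p "$(ref pod1)"
kubectl patch pod pod3 --type=merge -p "$(ref pod2)"
kubectl delete pod pod1   # the GC must still drain all three despite the cycle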
• [SLOW TEST:5.347 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":101,"skipped":1581,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:03:35.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0308 11:04:15.686217 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 8 11:04:15.686: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:04:15.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4979" for this suite. 
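The thirty-second wait above is the test checking that the garbage collector does not cascade the delete: the ReplicationController is removed with an orphaning delete option, so its pods must remain. A hedged manual reproduction (the controller name, manifest file, and label are illustrative; the kubectl vintage in this run spells the flag --cascade=false, while newer releases accept --cascade=orphan):

kubectl create -f my-rc.yaml           # any ReplicationController manifest (illustrative)
# Delete only the controller; orphan propagation leaves its pods behind.
kubectl delete rc my-rc --cascade=false
kubectl get pods -l name=my-rc         # the pods survive their controller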
• [SLOW TEST:40.142 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":102,"skipped":1614,"failed":0} [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:04:15.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 8 11:04:15.744: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1d125ff6-d11d-4d9e-a904-3a60126d06ad" in namespace "projected-896" to be "success or failure" Mar 8 11:04:15.764: INFO: Pod "downwardapi-volume-1d125ff6-d11d-4d9e-a904-3a60126d06ad": Phase="Pending", Reason="", readiness=false. Elapsed: 20.21719ms Mar 8 11:04:17.844: INFO: Pod "downwardapi-volume-1d125ff6-d11d-4d9e-a904-3a60126d06ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.099979666s STEP: Saw pod success Mar 8 11:04:17.844: INFO: Pod "downwardapi-volume-1d125ff6-d11d-4d9e-a904-3a60126d06ad" satisfied condition "success or failure" Mar 8 11:04:17.847: INFO: Trying to get logs from node kind-control-plane pod downwardapi-volume-1d125ff6-d11d-4d9e-a904-3a60126d06ad container client-container: STEP: delete the pod Mar 8 11:04:17.876: INFO: Waiting for pod downwardapi-volume-1d125ff6-d11d-4d9e-a904-3a60126d06ad to disappear Mar 8 11:04:17.892: INFO: Pod downwardapi-volume-1d125ff6-d11d-4d9e-a904-3a60126d06ad no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:04:17.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-896" for this suite. 
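The downwardapi-volume pod above reads its own CPU request out of a projected volume, which is the mechanism this test exercises. A minimal manifest sketch under assumed values (the pod name, image, mount path, and the 250m request are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m   # render millicores; the default divisor of 1 rounds up to whole cores
EOF

kubectl logs downwardapi-cpu-demo then prints 250, the request expressed in the chosen divisor.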
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":103,"skipped":1614,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:04:17.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-1729/configmap-test-33ac3752-00f6-4516-b98c-290fcebd14f0 STEP: Creating a pod to test consume configMaps Mar 8 11:04:18.052: INFO: Waiting up to 5m0s for pod "pod-configmaps-172b05e3-362f-450e-b25c-de48b69ef263" in namespace "configmap-1729" to be "success or failure" Mar 8 11:04:18.056: INFO: Pod "pod-configmaps-172b05e3-362f-450e-b25c-de48b69ef263": Phase="Pending", Reason="", readiness=false. Elapsed: 4.310084ms Mar 8 11:04:20.060: INFO: Pod "pod-configmaps-172b05e3-362f-450e-b25c-de48b69ef263": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008386241s STEP: Saw pod success Mar 8 11:04:20.060: INFO: Pod "pod-configmaps-172b05e3-362f-450e-b25c-de48b69ef263" satisfied condition "success or failure" Mar 8 11:04:20.063: INFO: Trying to get logs from node kind-control-plane pod pod-configmaps-172b05e3-362f-450e-b25c-de48b69ef263 container env-test: STEP: delete the pod Mar 8 11:04:20.173: INFO: Waiting for pod pod-configmaps-172b05e3-362f-450e-b25c-de48b69ef263 to disappear Mar 8 11:04:20.182: INFO: Pod pod-configmaps-172b05e3-362f-450e-b25c-de48b69ef263 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:04:20.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1729" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":104,"skipped":1627,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:04:20.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1672 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 8 11:04:20.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-4433' Mar 8 11:04:20.364: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 8 11:04:20.364: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created Mar 8 11:04:20.370: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Mar 8 11:04:20.389: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Mar 8 11:04:20.400: INFO: scanned /root for discovery docs: Mar 8 11:04:20.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-4433' Mar 8 11:04:36.241: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 8 11:04:36.241: INFO: stdout: "Created e2e-test-httpd-rc-dd5a1b6fbc16ffa4114a1a6858e82fcb\nScaling up e2e-test-httpd-rc-dd5a1b6fbc16ffa4114a1a6858e82fcb from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-dd5a1b6fbc16ffa4114a1a6858e82fcb up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-dd5a1b6fbc16ffa4114a1a6858e82fcb to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" Mar 8 11:04:36.241: INFO: stdout: "Created e2e-test-httpd-rc-dd5a1b6fbc16ffa4114a1a6858e82fcb\nScaling up e2e-test-httpd-rc-dd5a1b6fbc16ffa4114a1a6858e82fcb from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-dd5a1b6fbc16ffa4114a1a6858e82fcb up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-dd5a1b6fbc16ffa4114a1a6858e82fcb to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. Mar 8 11:04:36.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-4433' Mar 8 11:04:36.377: INFO: stderr: "" Mar 8 11:04:36.377: INFO: stdout: "e2e-test-httpd-rc-dd5a1b6fbc16ffa4114a1a6858e82fcb-2clgb " Mar 8 11:04:36.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-dd5a1b6fbc16ffa4114a1a6858e82fcb-2clgb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4433' Mar 8 11:04:36.479: INFO: stderr: "" Mar 8 11:04:36.479: INFO: stdout: "true" Mar 8 11:04:36.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-dd5a1b6fbc16ffa4114a1a6858e82fcb-2clgb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4433' Mar 8 11:04:36.572: INFO: stderr: "" Mar 8 11:04:36.572: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" Mar 8 11:04:36.572: INFO: e2e-test-httpd-rc-dd5a1b6fbc16ffa4114a1a6858e82fcb-2clgb is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1678 Mar 8 11:04:36.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-4433' Mar 8 11:04:36.695: INFO: stderr: "" Mar 8 11:04:36.695: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:04:36.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4433" for this suite. 
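Both stderr warnings above matter: --generator=run/v1 and kubectl rolling-update were deprecated in this release and removed later. A hedged modern equivalent of a rollout to the same image uses a Deployment (the name is illustrative); re-setting an identical image is a no-op for a Deployment, so the restart verb is what forces the fresh rollout that rolling-update performed here:

kubectl create deployment e2e-httpd --image=docker.io/library/httpd:2.4.38-alpine
kubectl rollout restart deployment/e2e-httpd   # new ReplicaSet even though the image is unchanged
kubectl rollout status deployment/e2e-httpd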
• [SLOW TEST:16.554 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1667 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":105,"skipped":1657,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:04:36.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 8 11:04:36.799: INFO: Number of nodes with available pods: 0 Mar 8 11:04:36.799: INFO: Node kind-control-plane is running more than one daemon pod Mar 8 11:04:37.807: INFO: Number of nodes with available pods: 0 Mar 8 11:04:37.807: INFO: Node kind-control-plane is running more than one daemon pod Mar 8 11:04:38.813: INFO: Number of nodes with available pods: 1 Mar 8 11:04:38.813: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Mar 8 11:04:38.842: INFO: Number of nodes with available pods: 0 Mar 8 11:04:38.842: INFO: Node kind-control-plane is running more than one daemon pod Mar 8 11:04:39.850: INFO: Number of nodes with available pods: 0 Mar 8 11:04:39.850: INFO: Node kind-control-plane is running more than one daemon pod Mar 8 11:04:40.859: INFO: Number of nodes with available pods: 0 Mar 8 11:04:40.859: INFO: Node kind-control-plane is running more than one daemon pod Mar 8 11:04:41.870: INFO: Number of nodes with available pods: 0 Mar 8 11:04:41.870: INFO: Node kind-control-plane is running more than one daemon pod Mar 8 11:04:42.848: INFO: Number of nodes with available pods: 0 Mar 8 11:04:42.848: INFO: Node kind-control-plane is running more than one daemon pod Mar 8 11:04:43.848: INFO: Number of nodes with available pods: 0 Mar 8 11:04:43.848: INFO: Node kind-control-plane is running more than one daemon pod Mar 8 11:04:44.849: INFO: Number of nodes with available pods: 0 Mar 8 11:04:44.849: INFO: Node kind-control-plane is running more than one daemon pod Mar 8 11:04:45.848: INFO: Number of nodes with available pods: 0 Mar 8 11:04:45.848: INFO: Node kind-control-plane is running more than one daemon pod Mar 8 11:04:46.849: INFO: Number of nodes with available pods: 0 Mar 8 11:04:46.849: INFO: Node kind-control-plane is running more than one daemon pod Mar 8 11:04:47.848: INFO: Number of nodes with available pods: 0 Mar 8 11:04:47.848: INFO: Node kind-control-plane is running more than one daemon pod Mar 8 11:04:48.848: INFO: Number of nodes with available pods: 0 Mar 8 11:04:48.848: INFO: Node kind-control-plane is running more than one daemon pod Mar 8 11:04:49.849: INFO: Number of nodes with available pods: 0 Mar 8 11:04:49.849: INFO: Node kind-control-plane is running more than one daemon pod Mar 8 11:04:50.865: INFO: Number of nodes with available pods: 1 Mar 8 11:04:50.865: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9922, will wait for the garbage collector to delete the pods Mar 8 11:04:50.924: INFO: Deleting DaemonSet.extensions daemon-set took: 4.750928ms Mar 8 11:04:51.025: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.192561ms Mar 8 11:04:59.528: INFO: Number of nodes with available pods: 0 Mar 8 11:04:59.528: INFO: Number of running nodes: 0, number of available pods: 0 Mar 8 11:04:59.534: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9922/daemonsets","resourceVersion":"17254"},"items":null} Mar 8 11:04:59.537: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9922/pods","resourceVersion":"17254"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:04:59.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9922" for this suite. 
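The polling loop above is the test waiting on the DaemonSet controller twice: first for one daemon pod per schedulable node (a single node in this kind cluster), then for a deleted daemon pod to be revived, which takes roughly twelve seconds here. A sketch of the same cycle with illustrative names and image:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set-demo
spec:
  selector:
    matchLabels:
      app: daemon-set-demo
  template:
    metadata:
      labels:
        app: daemon-set-demo
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine
EOF
kubectl get pods -l app=daemon-set-demo -o wide   # expect one pod per schedulable node
kubectl delete pod -l app=daemon-set-demo         # the controller replaces the deleted pod
kubectl get pods -l app=daemon-set-demo -w        # watch the revival the test waits for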
• [SLOW TEST:22.807 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":106,"skipped":1671,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:04:59.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 11:04:59.619: INFO: Creating deployment "test-recreate-deployment" Mar 8 11:04:59.634: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1 Mar 8 11:04:59.642: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Mar 8 11:05:01.667: INFO: Waiting for deployment "test-recreate-deployment" to complete Mar 8 11:05:01.670: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Mar 8 11:05:01.677: INFO: Updating deployment test-recreate-deployment Mar 8 11:05:01.677: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 8 11:05:01.993: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-1059 /apis/apps/v1/namespaces/deployment-1059/deployments/test-recreate-deployment f0565dbb-c59b-4acd-b64e-d16812105528 17310 2 2020-03-08 11:04:59 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001166428 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[]
[]}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-08 11:05:01 +0000 UTC,LastTransitionTime:2020-03-08 11:05:01 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-03-08 11:05:01 +0000 UTC,LastTransitionTime:2020-03-08 11:04:59 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Mar 8 11:05:01.999: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-1059 /apis/apps/v1/namespaces/deployment-1059/replicasets/test-recreate-deployment-5f94c574ff 092683f0-9715-4272-95da-f50e30cf500d 17306 1 2020-03-08 11:05:01 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment f0565dbb-c59b-4acd-b64e-d16812105528 0xc0011667b7 0xc0011667b8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001166828 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 8 11:05:01.999: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Mar 8 11:05:01.999: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-1059 /apis/apps/v1/namespaces/deployment-1059/replicasets/test-recreate-deployment-799c574856 716da8df-9621-4b40-bfd0-a21c071e5c33 17295 2 2020-03-08 11:04:59 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment f0565dbb-c59b-4acd-b64e-d16812105528 0xc0011668a7 0xc0011668a8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost 
gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001166928 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 8 11:05:02.001: INFO: Pod "test-recreate-deployment-5f94c574ff-ntmr9" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-ntmr9 test-recreate-deployment-5f94c574ff- deployment-1059 /api/v1/namespaces/deployment-1059/pods/test-recreate-deployment-5f94c574ff-ntmr9 0f324867-3370-43b4-a00d-b5f9e6707439 17311 0 2020-03-08 11:05:01 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 092683f0-9715-4272-95da-f50e30cf500d 0xc001166d67 0xc001166d68}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ztsff,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ztsff,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ztsff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value
:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:05:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:05:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:05:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:05:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.2,PodIP:,StartTime:2020-03-08 11:05:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:05:02.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1059" for this suite. 
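The Deployment dump above carries Strategy{Type:Recreate,...}, and the final pod dump shows the replacement pod still Pending with reason ContainerCreating: under Recreate, every old pod is terminated before any new pod starts, which is exactly the "new pods will not run with old pods" condition being verified. A hedged manifest rendering of that strategy (name and image are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: recreate-demo
spec:
  replicas: 1
  strategy:
    type: Recreate          # no rolling overlap: scale old to 0, then create new
  selector:
    matchLabels:
      app: recreate-demo
  template:
    metadata:
      labels:
        app: recreate-demo
    spec:
      containers:
      - name: httpd
        image: docker.io/library/httpd:2.4.38-alpine
EOF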
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":107,"skipped":1682,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:05:02.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-9028 STEP: creating replication controller nodeport-test in namespace services-9028 I0308 11:05:02.191829 6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-9028, replica count: 2 I0308 11:05:05.242240 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 8 11:05:05.242: INFO: Creating new exec pod Mar 8 11:05:08.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9028 execpod76pkw -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Mar 8 11:05:08.593: INFO: stderr: "I0308 11:05:08.514021 1918 log.go:172] (0xc000a10580) (0xc0009f2140) Create stream\nI0308 11:05:08.514098 1918 log.go:172] (0xc000a10580) (0xc0009f2140) Stream added, broadcasting: 1\nI0308 11:05:08.516600 1918 log.go:172] (0xc000a10580) Reply frame received for 1\nI0308 11:05:08.516647 1918 log.go:172] (0xc000a10580) (0xc000607a40) Create stream\nI0308 11:05:08.516660 1918 log.go:172] (0xc000a10580) (0xc000607a40) Stream added, broadcasting: 3\nI0308 11:05:08.517611 1918 log.go:172] (0xc000a10580) Reply frame received for 3\nI0308 11:05:08.517642 1918 log.go:172] (0xc000a10580) (0xc0009f21e0) Create stream\nI0308 11:05:08.517655 1918 log.go:172] (0xc000a10580) (0xc0009f21e0) Stream added, broadcasting: 5\nI0308 11:05:08.518711 1918 log.go:172] (0xc000a10580) Reply frame received for 5\nI0308 11:05:08.586977 1918 log.go:172] (0xc000a10580) Data frame received for 5\nI0308 11:05:08.587002 1918 log.go:172] (0xc0009f21e0) (5) Data frame handling\nI0308 11:05:08.587020 1918 log.go:172] (0xc0009f21e0) (5) Data frame sent\nI0308 11:05:08.587031 1918 log.go:172] (0xc000a10580) Data frame received for 5\nI0308 11:05:08.587040 1918 log.go:172] (0xc0009f21e0) (5) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0308 11:05:08.587060 1918 log.go:172] (0xc0009f21e0) (5) Data frame sent\nI0308 11:05:08.587779 1918 log.go:172] (0xc000a10580) Data frame received for 5\nI0308 11:05:08.587815 1918 log.go:172] (0xc0009f21e0) (5) Data frame handling\nI0308 11:05:08.588505 1918 log.go:172] (0xc000a10580) Data frame received for 3\nI0308 11:05:08.588534 1918 log.go:172] (0xc000607a40) (3) Data frame handling\nI0308 11:05:08.590206 1918 log.go:172] (0xc000a10580) 
Data frame received for 1\nI0308 11:05:08.590238 1918 log.go:172] (0xc0009f2140) (1) Data frame handling\nI0308 11:05:08.590265 1918 log.go:172] (0xc0009f2140) (1) Data frame sent\nI0308 11:05:08.590287 1918 log.go:172] (0xc000a10580) (0xc0009f2140) Stream removed, broadcasting: 1\nI0308 11:05:08.590311 1918 log.go:172] (0xc000a10580) Go away received\nI0308 11:05:08.590777 1918 log.go:172] (0xc000a10580) (0xc0009f2140) Stream removed, broadcasting: 1\nI0308 11:05:08.590802 1918 log.go:172] (0xc000a10580) (0xc000607a40) Stream removed, broadcasting: 3\nI0308 11:05:08.590819 1918 log.go:172] (0xc000a10580) (0xc0009f21e0) Stream removed, broadcasting: 5\n" Mar 8 11:05:08.593: INFO: stdout: "" Mar 8 11:05:08.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9028 execpod76pkw -- /bin/sh -x -c nc -zv -t -w 2 10.96.130.177 80' Mar 8 11:05:08.797: INFO: stderr: "I0308 11:05:08.729586 1938 log.go:172] (0xc00009b6b0) (0xc0008d2280) Create stream\nI0308 11:05:08.729634 1938 log.go:172] (0xc00009b6b0) (0xc0008d2280) Stream added, broadcasting: 1\nI0308 11:05:08.736508 1938 log.go:172] (0xc00009b6b0) Reply frame received for 1\nI0308 11:05:08.736540 1938 log.go:172] (0xc00009b6b0) (0xc000695ae0) Create stream\nI0308 11:05:08.736548 1938 log.go:172] (0xc00009b6b0) (0xc000695ae0) Stream added, broadcasting: 3\nI0308 11:05:08.737319 1938 log.go:172] (0xc00009b6b0) Reply frame received for 3\nI0308 11:05:08.737351 1938 log.go:172] (0xc00009b6b0) (0xc00063c6e0) Create stream\nI0308 11:05:08.737363 1938 log.go:172] (0xc00009b6b0) (0xc00063c6e0) Stream added, broadcasting: 5\nI0308 11:05:08.737971 1938 log.go:172] (0xc00009b6b0) Reply frame received for 5\nI0308 11:05:08.793197 1938 log.go:172] (0xc00009b6b0) Data frame received for 5\nI0308 11:05:08.793224 1938 log.go:172] (0xc00063c6e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.130.177 80\nConnection to 10.96.130.177 80 port [tcp/http] succeeded!\nI0308 11:05:08.793248 1938 log.go:172] (0xc00009b6b0) Data frame received for 3\nI0308 11:05:08.793290 1938 log.go:172] (0xc000695ae0) (3) Data frame handling\nI0308 11:05:08.793320 1938 log.go:172] (0xc00063c6e0) (5) Data frame sent\nI0308 11:05:08.793345 1938 log.go:172] (0xc00009b6b0) Data frame received for 5\nI0308 11:05:08.793365 1938 log.go:172] (0xc00063c6e0) (5) Data frame handling\nI0308 11:05:08.794632 1938 log.go:172] (0xc00009b6b0) Data frame received for 1\nI0308 11:05:08.794659 1938 log.go:172] (0xc0008d2280) (1) Data frame handling\nI0308 11:05:08.794671 1938 log.go:172] (0xc0008d2280) (1) Data frame sent\nI0308 11:05:08.794683 1938 log.go:172] (0xc00009b6b0) (0xc0008d2280) Stream removed, broadcasting: 1\nI0308 11:05:08.794698 1938 log.go:172] (0xc00009b6b0) Go away received\nI0308 11:05:08.795044 1938 log.go:172] (0xc00009b6b0) (0xc0008d2280) Stream removed, broadcasting: 1\nI0308 11:05:08.795064 1938 log.go:172] (0xc00009b6b0) (0xc000695ae0) Stream removed, broadcasting: 3\nI0308 11:05:08.795074 1938 log.go:172] (0xc00009b6b0) (0xc00063c6e0) Stream removed, broadcasting: 5\n" Mar 8 11:05:08.797: INFO: stdout: "" Mar 8 11:05:08.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9028 execpod76pkw -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.2 32271' Mar 8 11:05:09.024: INFO: stderr: "I0308 11:05:08.947930 1957 log.go:172] (0xc000591080) (0xc0006279a0) Create stream\nI0308 11:05:08.947992 1957 log.go:172] (0xc000591080) (0xc0006279a0) Stream added, broadcasting: 1\nI0308 11:05:08.950545 1957 
log.go:172] (0xc000591080) Reply frame received for 1\nI0308 11:05:08.950585 1957 log.go:172] (0xc000591080) (0xc0008d4000) Create stream\nI0308 11:05:08.950597 1957 log.go:172] (0xc000591080) (0xc0008d4000) Stream added, broadcasting: 3\nI0308 11:05:08.951383 1957 log.go:172] (0xc000591080) Reply frame received for 3\nI0308 11:05:08.951414 1957 log.go:172] (0xc000591080) (0xc000627b80) Create stream\nI0308 11:05:08.951437 1957 log.go:172] (0xc000591080) (0xc000627b80) Stream added, broadcasting: 5\nI0308 11:05:08.952232 1957 log.go:172] (0xc000591080) Reply frame received for 5\nI0308 11:05:09.020551 1957 log.go:172] (0xc000591080) Data frame received for 5\nI0308 11:05:09.020590 1957 log.go:172] (0xc000627b80) (5) Data frame handling\nI0308 11:05:09.020605 1957 log.go:172] (0xc000627b80) (5) Data frame sent\nI0308 11:05:09.020617 1957 log.go:172] (0xc000591080) Data frame received for 5\nI0308 11:05:09.020627 1957 log.go:172] (0xc000627b80) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.2 32271\nConnection to 172.17.0.2 32271 port [tcp/32271] succeeded!\nI0308 11:05:09.020662 1957 log.go:172] (0xc000591080) Data frame received for 3\nI0308 11:05:09.020682 1957 log.go:172] (0xc0008d4000) (3) Data frame handling\nI0308 11:05:09.021619 1957 log.go:172] (0xc000591080) Data frame received for 1\nI0308 11:05:09.021636 1957 log.go:172] (0xc0006279a0) (1) Data frame handling\nI0308 11:05:09.021665 1957 log.go:172] (0xc0006279a0) (1) Data frame sent\nI0308 11:05:09.021690 1957 log.go:172] (0xc000591080) (0xc0006279a0) Stream removed, broadcasting: 1\nI0308 11:05:09.021710 1957 log.go:172] (0xc000591080) Go away received\nI0308 11:05:09.022010 1957 log.go:172] (0xc000591080) (0xc0006279a0) Stream removed, broadcasting: 1\nI0308 11:05:09.022032 1957 log.go:172] (0xc000591080) (0xc0008d4000) Stream removed, broadcasting: 3\nI0308 11:05:09.022039 1957 log.go:172] (0xc000591080) (0xc000627b80) Stream removed, broadcasting: 5\n" Mar 8 11:05:09.024: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:05:09.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9028" for this suite. 
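The three nc probes in this test hit the same backends by service name, by ClusterIP, and by node address plus NodePort. A rough in-cluster recreation of the last probe, with illustrative names (the port and address are read back from the cluster rather than hard-coded):

kubectl create deployment nodeport-demo --image=docker.io/library/httpd:2.4.38-alpine
kubectl expose deployment nodeport-demo --port=80 --type=NodePort
NODE_PORT=$(kubectl get svc nodeport-demo -o jsonpath='{.spec.ports[0].nodePort}')
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
# Probe from inside the cluster, as the test's exec pod does with nc.
kubectl run probe --image=busybox --restart=Never --rm -it -- wget -qO- "http://$NODE_IP:$NODE_PORT/"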
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:7.022 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":108,"skipped":1701,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:05:09.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 8 11:05:09.092: INFO: Waiting up to 5m0s for pod "pod-d9b53ae2-3b68-464e-9ed0-f88329e06914" in namespace "emptydir-9991" to be "success or failure" Mar 8 11:05:09.097: INFO: Pod "pod-d9b53ae2-3b68-464e-9ed0-f88329e06914": Phase="Pending", Reason="", readiness=false. Elapsed: 4.891341ms Mar 8 11:05:11.101: INFO: Pod "pod-d9b53ae2-3b68-464e-9ed0-f88329e06914": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008673806s Mar 8 11:05:13.105: INFO: Pod "pod-d9b53ae2-3b68-464e-9ed0-f88329e06914": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012811171s STEP: Saw pod success Mar 8 11:05:13.105: INFO: Pod "pod-d9b53ae2-3b68-464e-9ed0-f88329e06914" satisfied condition "success or failure" Mar 8 11:05:13.108: INFO: Trying to get logs from node kind-control-plane pod pod-d9b53ae2-3b68-464e-9ed0-f88329e06914 container test-container: STEP: delete the pod Mar 8 11:05:13.135: INFO: Waiting for pod pod-d9b53ae2-3b68-464e-9ed0-f88329e06914 to disappear Mar 8 11:05:13.151: INFO: Pod pod-d9b53ae2-3b68-464e-9ed0-f88329e06914 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:05:13.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9991" for this suite. 
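The (non-root,0644,tmpfs) triplet in the emptydir test name maps to three pod fields: a non-root runAsUser, a file written with 0644 permissions, and an emptyDir with medium Memory. A sketch combining them (the UID, path, and image are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                   # the non-root part
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "umask 022 && echo hi > /test-volume/f && ls -l /test-volume/f && mount | grep /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                  # tmpfs-backed, the tmpfs part
EOF

The container's output should show -rw-r--r-- (0644) owned by UID 1001 and a tmpfs mount at /test-volume.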
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":109,"skipped":1705,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:05:13.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:05:19.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8033" for this suite. STEP: Destroying namespace "nsdeletetest-6789" for this suite. Mar 8 11:05:19.406: INFO: Namespace nsdeletetest-6789 was already deleted STEP: Destroying namespace "nsdeletetest-808" for this suite. 
• [SLOW TEST:6.250 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":110,"skipped":1738,"failed":0} SSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:05:19.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 11:05:19.497: INFO: Pod name rollover-pod: Found 0 pods out of 1 Mar 8 11:05:24.500: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 8 11:05:24.500: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Mar 8 11:05:26.503: INFO: Creating deployment "test-rollover-deployment" Mar 8 11:05:26.511: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Mar 8 11:05:28.517: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Mar 8 11:05:28.523: INFO: Ensure that both replica sets have 1 created replica Mar 8 11:05:28.528: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Mar 8 11:05:28.534: INFO: Updating deployment test-rollover-deployment Mar 8 11:05:28.534: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Mar 8 11:05:30.683: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Mar 8 11:05:30.689: INFO: Make sure deployment "test-rollover-deployment" is complete Mar 8 11:05:30.694: INFO: all replica sets need to contain the pod-template-hash label Mar 8 11:05:30.694: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262326, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262326, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262329, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262326, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is 
progressing."}}, CollisionCount:(*int32)(nil)} Mar 8 11:05:32.704: INFO: all replica sets need to contain the pod-template-hash label Mar 8 11:05:32.705: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262326, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262326, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262329, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262326, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 8 11:05:34.702: INFO: all replica sets need to contain the pod-template-hash label Mar 8 11:05:34.702: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262326, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262326, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262329, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262326, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 8 11:05:36.701: INFO: all replica sets need to contain the pod-template-hash label Mar 8 11:05:36.701: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262326, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262326, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262329, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262326, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 8 11:05:38.702: INFO: all replica sets need to contain the pod-template-hash label Mar 8 11:05:38.702: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262326, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262326, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262329, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262326, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 8 11:05:40.702: INFO: Mar 8 11:05:40.702: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 8 11:05:40.710: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-6772 /apis/apps/v1/namespaces/deployment-6772/deployments/test-rollover-deployment 68137c3b-b687-41a5-ba97-9e775e1c8cee 17671 2 2020-03-08 11:05:26 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002ba0d28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-08 11:05:26 +0000 UTC,LastTransitionTime:2020-03-08 11:05:26 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-03-08 11:05:40 +0000 UTC,LastTransitionTime:2020-03-08 11:05:26 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 8 11:05:40.714: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-6772 /apis/apps/v1/namespaces/deployment-6772/replicasets/test-rollover-deployment-574d6dfbff 54600c41-2045-4470-a130-279026c3839b 17660 2 2020-03-08 11:05:28 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 
deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 68137c3b-b687-41a5-ba97-9e775e1c8cee 0xc00409ba37 0xc00409ba38}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00409baa8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 8 11:05:40.714: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Mar 8 11:05:40.714: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-6772 /apis/apps/v1/namespaces/deployment-6772/replicasets/test-rollover-controller b7f57784-9a05-469b-8b66-877466e014c3 17669 2 2020-03-08 11:05:19 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 68137c3b-b687-41a5-ba97-9e775e1c8cee 0xc00409b967 0xc00409b968}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00409b9c8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 8 11:05:40.714: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-6772 /apis/apps/v1/namespaces/deployment-6772/replicasets/test-rollover-deployment-f6c94f66c 5b852e89-83f3-4e25-8fba-3558ad496514 17619 2 2020-03-08 11:05:26 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 68137c3b-b687-41a5-ba97-9e775e1c8cee 0xc00409bb20 0xc00409bb21}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod 
pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00409bb98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 8 11:05:40.717: INFO: Pod "test-rollover-deployment-574d6dfbff-77snz" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-77snz test-rollover-deployment-574d6dfbff- deployment-6772 /api/v1/namespaces/deployment-6772/pods/test-rollover-deployment-574d6dfbff-77snz cd776324-ca46-4140-9baa-77b7806dd038 17629 0 2020-03-08 11:05:28 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 54600c41-2045-4470-a130-279026c3839b 0xc002c0a2d7 0xc002c0a2d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5lh64,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5lh64,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5lh64,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]T
oleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:05:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:05:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:05:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:05:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.2,PodIP:10.244.0.219,StartTime:2020-03-08 11:05:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-08 11:05:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://2159ab794ac5f222de3dd2e3f6a7d54bc239d2b2bd2da33666c67e45505a6ead,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.0.219,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:05:40.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6772" for this suite. 
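For reference, the rollover behaviour verified above maps onto a Deployment manifest like the following sketch. The replica count, selector, RollingUpdate parameters, minReadySeconds, labels, and container image are taken from the object dump above; the manifest as a whole is illustrative rather than the test's literal fixture:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
  labels:
    name: rollover-pod
spec:
  replicas: 1
  minReadySeconds: 10        # a new pod must stay Ready for 10s before it counts as available
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0      # never dip below the desired replica count
      maxSurge: 1            # roll over by surging one extra pod first
  selector:
    matchLabels:
      name: rollover-pod
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: agnhost
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8

With maxUnavailable: 0 and maxSurge: 1, the controller can only retire the old ReplicaSets after the surged pod has been available for the full minReadySeconds window, which is exactly the "both old replica sets have no replicas" condition checked above.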
• [SLOW TEST:21.314 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":111,"skipped":1743,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:05:40.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 11:05:41.395: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 11:05:44.432: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:05:44.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8682" for this suite. STEP: Destroying namespace "webhook-8682-markers" for this suite. 
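The "fail closed" behaviour above comes from failurePolicy: Fail: when the API server cannot reach the webhook backend at admission time, it rejects the request instead of letting it through. A minimal sketch of such a registration, assuming hypothetical webhook, rule, and path details (only the service name and namespace come from the log):

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: fail-closed-example            # hypothetical name
webhooks:
- name: fail-closed.example.com        # hypothetical name
  failurePolicy: Fail                  # reject when the backend is unreachable
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]          # the test creates a configmap to trigger it
  clientConfig:
    service:
      namespace: webhook-8682          # from the log above
      name: e2e-test-webhook
      path: /unreachable               # hypothetical path the server cannot talk to
  sideEffects: None
  admissionReviewVersions: ["v1"]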
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":112,"skipped":1745,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:05:44.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-b80614de-33e1-4496-8a1b-0cc76cf8f709 STEP: Creating a pod to test consume secrets Mar 8 11:05:44.676: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-de57d919-3e76-46e7-b502-8079d70751df" in namespace "projected-3421" to be "success or failure" Mar 8 11:05:44.680: INFO: Pod "pod-projected-secrets-de57d919-3e76-46e7-b502-8079d70751df": Phase="Pending", Reason="", readiness=false. Elapsed: 3.909877ms Mar 8 11:05:46.683: INFO: Pod "pod-projected-secrets-de57d919-3e76-46e7-b502-8079d70751df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007604911s STEP: Saw pod success Mar 8 11:05:46.683: INFO: Pod "pod-projected-secrets-de57d919-3e76-46e7-b502-8079d70751df" satisfied condition "success or failure" Mar 8 11:05:46.686: INFO: Trying to get logs from node kind-control-plane pod pod-projected-secrets-de57d919-3e76-46e7-b502-8079d70751df container projected-secret-volume-test: STEP: delete the pod Mar 8 11:05:46.702: INFO: Waiting for pod pod-projected-secrets-de57d919-3e76-46e7-b502-8079d70751df to disappear Mar 8 11:05:46.706: INFO: Pod pod-projected-secrets-de57d919-3e76-46e7-b502-8079d70751df no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:05:46.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3421" for this suite. 
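The "Item Mode set" variant above projects a single secret key at an explicit path with an explicit file mode, then has the pod read the file back and check its permissions (hence the "Trying to get logs ... container projected-secret-volume-test" line). A sketch of the pod half, assuming the key name, mapped path, mode value, and image (the secret and container names come from the log):

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test      # container name from the log
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # assumed image
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map-b80614de-33e1-4496-8a1b-0cc76cf8f709
          items:
          - key: data-1                      # assumed key
            path: new-path-data-1            # assumed mapped path
            mode: 0400                       # assumed per-item mode under test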
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":113,"skipped":1770,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:05:46.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 11:05:47.755: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 11:05:50.795: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Mar 8 11:05:52.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-6121 to-be-attached-pod -i -c=container1' Mar 8 11:05:53.009: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:05:53.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6121" for this suite. STEP: Destroying namespace "webhook-6121-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.399 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":114,"skipped":1772,"failed":0} SSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:05:53.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Mar 8 11:05:55.282: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Mar 8 11:06:10.425: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:06:10.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-97" for this suite. 
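Graceful deletion works in two phases: the DELETE request stamps the pod with deletionTimestamp and a deletionGracePeriodSeconds window, the kubelet sends the containers SIGTERM, and only after the kubelet confirms termination (or the window lapses and it escalates to SIGKILL) does the object actually disappear, which is the "no pod exists with the name we were looking for" check above. A sketch of the pod-side knob, with an assumed value and image:

apiVersion: v1
kind: Pod
metadata:
  name: graceful-delete-example        # hypothetical name
spec:
  terminationGracePeriodSeconds: 30    # assumed; the default grace window honored on delete
  containers:
  - name: agnhost
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # assumed image

A caller can shrink or extend the window per request, e.g. kubectl delete pod graceful-delete-example --grace-period=5.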
• [SLOW TEST:17.326 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":115,"skipped":1777,"failed":0} SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:06:10.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-102fce72-a808-4719-9c87-9aceeba8a2b4 STEP: Creating a pod to test consume secrets Mar 8 11:06:10.512: INFO: Waiting up to 5m0s for pod "pod-secrets-09c92f55-4789-4da8-817d-13e3f41f61ba" in namespace "secrets-47" to be "success or failure" Mar 8 11:06:10.534: INFO: Pod "pod-secrets-09c92f55-4789-4da8-817d-13e3f41f61ba": Phase="Pending", Reason="", readiness=false. Elapsed: 22.341195ms Mar 8 11:06:12.539: INFO: Pod "pod-secrets-09c92f55-4789-4da8-817d-13e3f41f61ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.026736781s STEP: Saw pod success Mar 8 11:06:12.539: INFO: Pod "pod-secrets-09c92f55-4789-4da8-817d-13e3f41f61ba" satisfied condition "success or failure" Mar 8 11:06:12.542: INFO: Trying to get logs from node kind-control-plane pod pod-secrets-09c92f55-4789-4da8-817d-13e3f41f61ba container secret-volume-test: STEP: delete the pod Mar 8 11:06:12.559: INFO: Waiting for pod pod-secrets-09c92f55-4789-4da8-817d-13e3f41f61ba to disappear Mar 8 11:06:12.563: INFO: Pod pod-secrets-09c92f55-4789-4da8-817d-13e3f41f61ba no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:06:12.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-47" for this suite. 
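The "with mappings" variant mounts the secret through an items list, so a key lands at a caller-chosen relative path instead of under its own name. A sketch with assumed key, value, mapped path, and image (the secret and container names come from the log):

apiVersion: v1
kind: Secret
metadata:
  name: secret-test-map-102fce72-a808-4719-9c87-9aceeba8a2b4
stringData:
  data-1: value-1                      # assumed payload
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test           # container name from the log
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # assumed image
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-102fce72-a808-4719-9c87-9aceeba8a2b4
      items:
      - key: data-1                    # assumed key
        path: new-path-data-1          # assumed mapping: file appears here, not at data-1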
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":116,"skipped":1782,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:06:12.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-5849 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-5849 Mar 8 11:06:12.675: INFO: Found 0 stateful pods, waiting for 1 Mar 8 11:06:22.679: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 8 11:06:22.693: INFO: Deleting all statefulset in ns statefulset-5849 Mar 8 11:06:22.696: INFO: Scaling statefulset ss to 0 Mar 8 11:06:42.777: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 11:06:42.780: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:06:42.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5849" for this suite. 
• [SLOW TEST:30.231 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":117,"skipped":1802,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:06:42.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 8 11:06:42.860: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bea3d718-acbf-4d02-9cbc-a74e99ac0474" in namespace "projected-3647" to be "success or failure" Mar 8 11:06:42.864: INFO: Pod "downwardapi-volume-bea3d718-acbf-4d02-9cbc-a74e99ac0474": Phase="Pending", Reason="", readiness=false. Elapsed: 3.18985ms Mar 8 11:06:44.868: INFO: Pod "downwardapi-volume-bea3d718-acbf-4d02-9cbc-a74e99ac0474": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007988476s STEP: Saw pod success Mar 8 11:06:44.868: INFO: Pod "downwardapi-volume-bea3d718-acbf-4d02-9cbc-a74e99ac0474" satisfied condition "success or failure" Mar 8 11:06:44.872: INFO: Trying to get logs from node kind-control-plane pod downwardapi-volume-bea3d718-acbf-4d02-9cbc-a74e99ac0474 container client-container: STEP: delete the pod Mar 8 11:06:44.910: INFO: Waiting for pod downwardapi-volume-bea3d718-acbf-4d02-9cbc-a74e99ac0474 to disappear Mar 8 11:06:44.918: INFO: Pod downwardapi-volume-bea3d718-acbf-4d02-9cbc-a74e99ac0474 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:06:44.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3647" for this suite. 
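The downward API can project a container's own resource request into a file it can read at runtime. A sketch, assuming the image, command, and request value (the container name client-container comes from the log; the projected volume matches the Projected suite this test belongs to):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container             # container name from the log
    image: busybox:1.29                # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi                   # assumed value; this is what gets projected
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory   # rendered in bytes, e.g. 33554432 for 32Mi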
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":118,"skipped":1811,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:06:44.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-8665/configmap-test-dc11a465-84d0-45c9-8c2b-04e9e650bbf1 STEP: Creating a pod to test consume configMaps Mar 8 11:06:44.998: INFO: Waiting up to 5m0s for pod "pod-configmaps-744a9daf-f060-47dd-adb9-1d0cb7760af5" in namespace "configmap-8665" to be "success or failure" Mar 8 11:06:45.002: INFO: Pod "pod-configmaps-744a9daf-f060-47dd-adb9-1d0cb7760af5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067021ms Mar 8 11:06:47.006: INFO: Pod "pod-configmaps-744a9daf-f060-47dd-adb9-1d0cb7760af5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007834044s STEP: Saw pod success Mar 8 11:06:47.006: INFO: Pod "pod-configmaps-744a9daf-f060-47dd-adb9-1d0cb7760af5" satisfied condition "success or failure" Mar 8 11:06:47.014: INFO: Trying to get logs from node kind-control-plane pod pod-configmaps-744a9daf-f060-47dd-adb9-1d0cb7760af5 container env-test: STEP: delete the pod Mar 8 11:06:47.046: INFO: Waiting for pod pod-configmaps-744a9daf-f060-47dd-adb9-1d0cb7760af5 to disappear Mar 8 11:06:47.085: INFO: Pod pod-configmaps-744a9daf-f060-47dd-adb9-1d0cb7760af5 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:06:47.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8665" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":119,"skipped":1832,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:06:47.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:06:47.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9441" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":120,"skipped":1850,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:06:47.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Mar 8 11:06:47.298: INFO: Pod name pod-release: Found 0 pods out of 1 Mar 8 11:06:52.305: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:06:52.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2043" for this suite. 
• [SLOW TEST:5.293 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":121,"skipped":1893,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:06:52.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 11:06:53.383: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 8 11:06:55.393: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262413, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262413, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262413, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262413, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 11:06:58.423: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:06:58.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9453" for this suite. STEP: Destroying namespace "webhook-9453-markers" for this suite. 
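A mutating webhook differs from the validating ones above in that its AdmissionReview response may carry a JSONPatch, which the API server applies before persisting the object; that is how the configmap created by the test comes back already "updated by the webhook". A minimal sketch (webhook name and path are assumptions, the service coordinates come from the log):

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mutate-configmap-example       # hypothetical name
webhooks:
- name: add-to-configmap.example.com   # hypothetical name
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  clientConfig:
    service:
      namespace: webhook-9453          # from the log above
      name: e2e-test-webhook
      path: /mutating-configmaps       # hypothetical handler path
  sideEffects: None
  admissionReviewVersions: ["v1"]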
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.090 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":122,"skipped":1895,"failed":0} [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:06:58.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:07:02.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4337" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":123,"skipped":1895,"failed":0} ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:07:02.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 8 11:07:06.984: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 8 11:07:06.991: INFO: Pod pod-with-poststart-exec-hook still exists Mar 8 11:07:08.991: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 8 11:07:08.995: INFO: Pod pod-with-poststart-exec-hook still exists Mar 8 11:07:10.991: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 8 11:07:10.995: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:07:10.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4903" for this suite. • [SLOW TEST:8.120 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":124,"skipped":1895,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:07:11.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-711483d6-8cde-4375-9217-7fc2ca9d6a7f STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:07:15.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-807" for this suite. 
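binaryData is the ConfigMap field under test above: unlike data, its values are base64-encoded raw bytes, and the test waits until both the text key and the binary key are visible, byte-for-byte, through an ordinary configMap volume. A sketch with an assumed payload (the ConfigMap name comes from the log):

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-upd-711483d6-8cde-4375-9217-7fc2ca9d6a7f
data:
  data-1: value-1                      # assumed text key
binaryData:
  dump.bin: AQID                       # base64 of the raw bytes 0x01 0x02 0x03 (assumed)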
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":125,"skipped":1924,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:07:15.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 8 11:07:15.857: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 11:07:18.894: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 11:07:18.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:07:20.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-7723" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:5.225 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":126,"skipped":1925,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:07:20.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 8 11:07:20.433: INFO: Waiting up to 5m0s for pod "pod-45fb4170-b688-4cfd-94af-64c29481dff7" in namespace "emptydir-8524" to be "success or failure" Mar 8 11:07:20.464: INFO: Pod "pod-45fb4170-b688-4cfd-94af-64c29481dff7": Phase="Pending", Reason="", readiness=false. Elapsed: 31.411237ms Mar 8 11:07:22.467: INFO: Pod "pod-45fb4170-b688-4cfd-94af-64c29481dff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.034525254s STEP: Saw pod success Mar 8 11:07:22.467: INFO: Pod "pod-45fb4170-b688-4cfd-94af-64c29481dff7" satisfied condition "success or failure" Mar 8 11:07:22.471: INFO: Trying to get logs from node kind-control-plane pod pod-45fb4170-b688-4cfd-94af-64c29481dff7 container test-container: STEP: delete the pod Mar 8 11:07:22.483: INFO: Waiting for pod pod-45fb4170-b688-4cfd-94af-64c29481dff7 to disappear Mar 8 11:07:22.488: INFO: Pod pod-45fb4170-b688-4cfd-94af-64c29481dff7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:07:22.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8524" for this suite. 
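The (non-root,0644,default) tuple decodes as: run the container as a non-root UID, expect a newly created file to carry mode 0644, on the default (disk-backed) emptyDir medium. A sketch under those assumptions, with an assumed UID, image, and command (test-container is the container name from the log):

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example           # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                    # assumed non-root UID
  containers:
  - name: test-container               # container name from the log
    image: busybox:1.29                # assumed image
    command: ["sh", "-c", "umask 0022 && echo hello > /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                       # default medium: node disk, not memory

With umask 0022, the freshly written file comes out 0644 (rw-r--r--), which is the permission the test asserts on.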
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":127,"skipped":1943,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:07:22.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1444 STEP: creating an pod Mar 8 11:07:22.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-5523 -- logs-generator --log-lines-total 100 --run-duration 20s' Mar 8 11:07:24.310: INFO: stderr: "" Mar 8 11:07:24.310: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. Mar 8 11:07:24.310: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Mar 8 11:07:24.310: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-5523" to be "running and ready, or succeeded" Mar 8 11:07:24.333: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 22.524309ms Mar 8 11:07:26.337: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026335479s Mar 8 11:07:28.349: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.038969123s Mar 8 11:07:28.349: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Mar 8 11:07:28.349: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for a matching strings Mar 8 11:07:28.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5523' Mar 8 11:07:28.498: INFO: stderr: "" Mar 8 11:07:28.498: INFO: stdout: "I0308 11:07:25.400529 1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/xdx 294\nI0308 11:07:25.600637 1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/wk6 585\nI0308 11:07:25.800637 1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/t7l7 292\nI0308 11:07:26.000688 1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/llg 372\nI0308 11:07:26.200765 1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/b5wq 225\nI0308 11:07:26.400676 1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/ggbv 453\nI0308 11:07:26.600758 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/994x 390\nI0308 11:07:26.800733 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/rw9 574\nI0308 11:07:27.000703 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/hf6 431\nI0308 11:07:27.200719 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/pgmr 359\nI0308 11:07:27.400705 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/djw 252\nI0308 11:07:27.600718 1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/4qr9 483\nI0308 11:07:27.800739 1 logs_generator.go:76] 12 GET /api/v1/namespaces/ns/pods/mj8g 438\nI0308 11:07:28.000698 1 logs_generator.go:76] 13 GET /api/v1/namespaces/kube-system/pods/lplb 508\nI0308 11:07:28.200720 1 logs_generator.go:76] 14 POST /api/v1/namespaces/ns/pods/nxr4 580\nI0308 11:07:28.400681 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/pz9 344\n" STEP: limiting log lines Mar 8 11:07:28.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5523 --tail=1' Mar 8 11:07:28.619: INFO: stderr: "" Mar 8 11:07:28.619: INFO: stdout: "I0308 11:07:28.600666 1 logs_generator.go:76] 16 POST /api/v1/namespaces/default/pods/n9wc 216\n" Mar 8 11:07:28.619: INFO: got output "I0308 11:07:28.600666 1 logs_generator.go:76] 16 POST /api/v1/namespaces/default/pods/n9wc 216\n" STEP: limiting log bytes Mar 8 11:07:28.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5523 --limit-bytes=1' Mar 8 11:07:28.750: INFO: stderr: "" Mar 8 11:07:28.750: INFO: stdout: "I" Mar 8 11:07:28.750: INFO: got output "I" STEP: exposing timestamps Mar 8 11:07:28.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5523 --tail=1 --timestamps' Mar 8 11:07:28.867: INFO: stderr: "" Mar 8 11:07:28.867: INFO: stdout: "2020-03-08T11:07:28.800874985Z I0308 11:07:28.800715 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/wtd 589\n" Mar 8 11:07:28.867: INFO: got output "2020-03-08T11:07:28.800874985Z I0308 11:07:28.800715 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/wtd 589\n" STEP: restricting to a time range Mar 8 11:07:31.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5523 --since=1s' Mar 8 11:07:31.510: INFO: stderr: "" Mar 8 11:07:31.510: INFO: stdout: "I0308 11:07:30.600723 1 logs_generator.go:76] 26 PUT /api/v1/namespaces/default/pods/pbt 524\nI0308 11:07:30.800678 1 
logs_generator.go:76] 27 POST /api/v1/namespaces/default/pods/hbg 245\nI0308 11:07:31.000686 1 logs_generator.go:76] 28 PUT /api/v1/namespaces/ns/pods/w7zj 539\nI0308 11:07:31.200694 1 logs_generator.go:76] 29 PUT /api/v1/namespaces/ns/pods/8fg 428\nI0308 11:07:31.400673 1 logs_generator.go:76] 30 GET /api/v1/namespaces/default/pods/nhz 589\n" Mar 8 11:07:31.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5523 --since=24h' Mar 8 11:07:31.653: INFO: stderr: "" Mar 8 11:07:31.653: INFO: stdout: "I0308 11:07:25.400529 1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/xdx 294\nI0308 11:07:25.600637 1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/wk6 585\nI0308 11:07:25.800637 1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/t7l7 292\nI0308 11:07:26.000688 1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/llg 372\nI0308 11:07:26.200765 1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/b5wq 225\nI0308 11:07:26.400676 1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/ggbv 453\nI0308 11:07:26.600758 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/994x 390\nI0308 11:07:26.800733 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/rw9 574\nI0308 11:07:27.000703 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/hf6 431\nI0308 11:07:27.200719 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/pgmr 359\nI0308 11:07:27.400705 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/djw 252\nI0308 11:07:27.600718 1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/4qr9 483\nI0308 11:07:27.800739 1 logs_generator.go:76] 12 GET /api/v1/namespaces/ns/pods/mj8g 438\nI0308 11:07:28.000698 1 logs_generator.go:76] 13 GET /api/v1/namespaces/kube-system/pods/lplb 508\nI0308 11:07:28.200720 1 logs_generator.go:76] 14 POST /api/v1/namespaces/ns/pods/nxr4 580\nI0308 11:07:28.400681 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/pz9 344\nI0308 11:07:28.600666 1 logs_generator.go:76] 16 POST /api/v1/namespaces/default/pods/n9wc 216\nI0308 11:07:28.800715 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/wtd 589\nI0308 11:07:29.000725 1 logs_generator.go:76] 18 GET /api/v1/namespaces/kube-system/pods/k74l 373\nI0308 11:07:29.200725 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/kube-system/pods/cr2 298\nI0308 11:07:29.400767 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/pfm 583\nI0308 11:07:29.600736 1 logs_generator.go:76] 21 POST /api/v1/namespaces/kube-system/pods/89h 515\nI0308 11:07:29.800699 1 logs_generator.go:76] 22 GET /api/v1/namespaces/default/pods/rkm 564\nI0308 11:07:30.000701 1 logs_generator.go:76] 23 GET /api/v1/namespaces/kube-system/pods/675 437\nI0308 11:07:30.200678 1 logs_generator.go:76] 24 POST /api/v1/namespaces/kube-system/pods/cfgx 266\nI0308 11:07:30.400625 1 logs_generator.go:76] 25 PUT /api/v1/namespaces/kube-system/pods/cz9 488\nI0308 11:07:30.600723 1 logs_generator.go:76] 26 PUT /api/v1/namespaces/default/pods/pbt 524\nI0308 11:07:30.800678 1 logs_generator.go:76] 27 POST /api/v1/namespaces/default/pods/hbg 245\nI0308 11:07:31.000686 1 logs_generator.go:76] 28 PUT /api/v1/namespaces/ns/pods/w7zj 539\nI0308 11:07:31.200694 1 logs_generator.go:76] 29 PUT /api/v1/namespaces/ns/pods/8fg 428\nI0308 11:07:31.400673 1 logs_generator.go:76] 30 GET /api/v1/namespaces/default/pods/nhz 589\nI0308 11:07:31.600735 1 
logs_generator.go:76] 31 GET /api/v1/namespaces/kube-system/pods/8862 473\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450 Mar 8 11:07:31.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-5523' Mar 8 11:07:39.484: INFO: stderr: "" Mar 8 11:07:39.484: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:07:39.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5523" for this suite. • [SLOW TEST:16.998 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1440 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":128,"skipped":1955,"failed":0} SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:07:39.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 8 11:07:39.551: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 8 11:07:39.559: INFO: Waiting for terminating namespaces to be deleted... 
Mar 8 11:07:39.561: INFO: Logging pods the kubelet thinks is on node kind-control-plane before test Mar 8 11:07:39.570: INFO: etcd-kind-control-plane from kube-system started at 2020-03-08 10:17:29 +0000 UTC (1 container statuses recorded) Mar 8 11:07:39.570: INFO: Container etcd ready: true, restart count 0 Mar 8 11:07:39.570: INFO: kube-controller-manager-kind-control-plane from kube-system started at 2020-03-08 10:17:29 +0000 UTC (1 container statuses recorded) Mar 8 11:07:39.570: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 8 11:07:39.570: INFO: kube-proxy-9qrbc from kube-system started at 2020-03-08 10:17:45 +0000 UTC (1 container statuses recorded) Mar 8 11:07:39.570: INFO: Container kube-proxy ready: true, restart count 0 Mar 8 11:07:39.570: INFO: kindnet-rznts from kube-system started at 2020-03-08 10:17:45 +0000 UTC (1 container statuses recorded) Mar 8 11:07:39.570: INFO: Container kindnet-cni ready: true, restart count 0 Mar 8 11:07:39.570: INFO: kube-apiserver-kind-control-plane from kube-system started at 2020-03-08 10:17:29 +0000 UTC (1 container statuses recorded) Mar 8 11:07:39.570: INFO: Container kube-apiserver ready: true, restart count 0 Mar 8 11:07:39.570: INFO: local-path-provisioner-7745554f7f-5f2b8 from local-path-storage started at 2020-03-08 10:17:49 +0000 UTC (1 container statuses recorded) Mar 8 11:07:39.570: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 8 11:07:39.570: INFO: coredns-6955765f44-8lfgq from kube-system started at 2020-03-08 10:17:52 +0000 UTC (1 container statuses recorded) Mar 8 11:07:39.570: INFO: Container coredns ready: true, restart count 0 Mar 8 11:07:39.570: INFO: kube-scheduler-kind-control-plane from kube-system started at 2020-03-08 10:17:29 +0000 UTC (1 container statuses recorded) Mar 8 11:07:39.570: INFO: Container kube-scheduler ready: true, restart count 0 Mar 8 11:07:39.570: INFO: coredns-6955765f44-2ncc6 from kube-system started at 2020-03-08 10:17:49 +0000 UTC (1 container statuses recorded) Mar 8 11:07:39.570: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15fa4f86da92bc0e], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:07:40.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9523" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":129,"skipped":1960,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:07:40.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:07:56.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5982" for this suite. • [SLOW TEST:16.175 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":278,"completed":130,"skipped":1990,"failed":0} S ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:07:56.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-6041 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-6041 I0308 11:07:56.883174 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-6041, replica count: 2 I0308 11:07:59.933676 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 8 11:07:59.933: INFO: Creating new exec pod Mar 8 11:08:02.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6041 execpod5bbfx -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 8 11:08:03.231: INFO: stderr: "I0308 11:08:03.170549 2189 log.go:172] (0xc0000f5290) (0xc000695ae0) Create stream\nI0308 11:08:03.170602 2189 log.go:172] (0xc0000f5290) (0xc000695ae0) Stream added, broadcasting: 1\nI0308 11:08:03.173033 2189 log.go:172] (0xc0000f5290) Reply frame received for 1\nI0308 11:08:03.173076 2189 log.go:172] (0xc0000f5290) (0xc000a9c000) Create stream\nI0308 11:08:03.173093 2189 log.go:172] (0xc0000f5290) (0xc000a9c000) Stream added, broadcasting: 3\nI0308 11:08:03.174088 2189 log.go:172] (0xc0000f5290) Reply frame received for 3\nI0308 11:08:03.174154 2189 log.go:172] (0xc0000f5290) (0xc0001fc000) Create stream\nI0308 11:08:03.174167 2189 log.go:172] (0xc0000f5290) (0xc0001fc000) Stream added, broadcasting: 5\nI0308 11:08:03.175121 2189 log.go:172] (0xc0000f5290) Reply frame received for 5\nI0308 11:08:03.224543 2189 log.go:172] (0xc0000f5290) Data frame received for 5\nI0308 11:08:03.224566 2189 log.go:172] (0xc0001fc000) (5) Data frame handling\nI0308 11:08:03.224580 2189 log.go:172] (0xc0001fc000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0308 11:08:03.225700 2189 log.go:172] (0xc0000f5290) Data frame received for 5\nI0308 11:08:03.225722 2189 log.go:172] (0xc0001fc000) (5) Data frame handling\nI0308 11:08:03.225734 2189 log.go:172] (0xc0001fc000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0308 11:08:03.225991 2189 log.go:172] (0xc0000f5290) Data frame received for 5\nI0308 11:08:03.226005 2189 log.go:172] (0xc0001fc000) (5) Data frame handling\nI0308 11:08:03.226055 2189 log.go:172] (0xc0000f5290) Data frame received for 3\nI0308 11:08:03.226082 2189 log.go:172] (0xc000a9c000) (3) Data frame handling\nI0308 11:08:03.227705 2189 
log.go:172] (0xc0000f5290) Data frame received for 1\nI0308 11:08:03.227730 2189 log.go:172] (0xc000695ae0) (1) Data frame handling\nI0308 11:08:03.227742 2189 log.go:172] (0xc000695ae0) (1) Data frame sent\nI0308 11:08:03.227756 2189 log.go:172] (0xc0000f5290) (0xc000695ae0) Stream removed, broadcasting: 1\nI0308 11:08:03.227771 2189 log.go:172] (0xc0000f5290) Go away received\nI0308 11:08:03.228101 2189 log.go:172] (0xc0000f5290) (0xc000695ae0) Stream removed, broadcasting: 1\nI0308 11:08:03.228116 2189 log.go:172] (0xc0000f5290) (0xc000a9c000) Stream removed, broadcasting: 3\nI0308 11:08:03.228123 2189 log.go:172] (0xc0000f5290) (0xc0001fc000) Stream removed, broadcasting: 5\n" Mar 8 11:08:03.231: INFO: stdout: "" Mar 8 11:08:03.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6041 execpod5bbfx -- /bin/sh -x -c nc -zv -t -w 2 10.96.52.34 80' Mar 8 11:08:03.434: INFO: stderr: "I0308 11:08:03.363193 2209 log.go:172] (0xc0005182c0) (0xc0007455e0) Create stream\nI0308 11:08:03.363263 2209 log.go:172] (0xc0005182c0) (0xc0007455e0) Stream added, broadcasting: 1\nI0308 11:08:03.365421 2209 log.go:172] (0xc0005182c0) Reply frame received for 1\nI0308 11:08:03.365459 2209 log.go:172] (0xc0005182c0) (0xc000a3a000) Create stream\nI0308 11:08:03.365470 2209 log.go:172] (0xc0005182c0) (0xc000a3a000) Stream added, broadcasting: 3\nI0308 11:08:03.366316 2209 log.go:172] (0xc0005182c0) Reply frame received for 3\nI0308 11:08:03.366352 2209 log.go:172] (0xc0005182c0) (0xc000a3a140) Create stream\nI0308 11:08:03.366362 2209 log.go:172] (0xc0005182c0) (0xc000a3a140) Stream added, broadcasting: 5\nI0308 11:08:03.367216 2209 log.go:172] (0xc0005182c0) Reply frame received for 5\nI0308 11:08:03.430105 2209 log.go:172] (0xc0005182c0) Data frame received for 3\nI0308 11:08:03.430188 2209 log.go:172] (0xc000a3a000) (3) Data frame handling\nI0308 11:08:03.430227 2209 log.go:172] (0xc0005182c0) Data frame received for 5\nI0308 11:08:03.430249 2209 log.go:172] (0xc000a3a140) (5) Data frame handling\nI0308 11:08:03.430274 2209 log.go:172] (0xc000a3a140) (5) Data frame sent\nI0308 11:08:03.430291 2209 log.go:172] (0xc0005182c0) Data frame received for 5\nI0308 11:08:03.430306 2209 log.go:172] (0xc000a3a140) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.52.34 80\nConnection to 10.96.52.34 80 port [tcp/http] succeeded!\nI0308 11:08:03.431658 2209 log.go:172] (0xc0005182c0) Data frame received for 1\nI0308 11:08:03.431696 2209 log.go:172] (0xc0007455e0) (1) Data frame handling\nI0308 11:08:03.431723 2209 log.go:172] (0xc0007455e0) (1) Data frame sent\nI0308 11:08:03.431748 2209 log.go:172] (0xc0005182c0) (0xc0007455e0) Stream removed, broadcasting: 1\nI0308 11:08:03.431773 2209 log.go:172] (0xc0005182c0) Go away received\nI0308 11:08:03.432244 2209 log.go:172] (0xc0005182c0) (0xc0007455e0) Stream removed, broadcasting: 1\nI0308 11:08:03.432274 2209 log.go:172] (0xc0005182c0) (0xc000a3a000) Stream removed, broadcasting: 3\nI0308 11:08:03.432289 2209 log.go:172] (0xc0005182c0) (0xc000a3a140) Stream removed, broadcasting: 5\n" Mar 8 11:08:03.434: INFO: stdout: "" Mar 8 11:08:03.434: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:08:03.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6041" for this suite. 
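The Services case above starts from an ExternalName record and mutates it into a ClusterIP service backed by the externalname-service replication controller, then probes it with nc from an exec pod. A minimal sketch of the same flow via kubectl, assuming an illustrative external hostname and port 80 as seen in the log (the conformance test itself drives this through the Go client):

$ kubectl create -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
  namespace: services-6041
spec:
  type: ExternalName
  externalName: example.com
EOF
$ kubectl -n services-6041 patch service externalname-service --type=merge \
    -p '{"spec":{"type":"ClusterIP","externalName":null,"ports":[{"port":80}]}}'
$ kubectl -n services-6041 exec execpod5bbfx -- /bin/sh -x -c 'nc -zv -t -w 2 externalname-service 80'

Setting externalName to null in the merge patch removes the field, which is required when the type changes to ClusterIP.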
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:6.739 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":131,"skipped":1991,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:08:03.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs Mar 8 11:08:03.555: INFO: Waiting up to 5m0s for pod "pod-c78cac55-8b45-401e-9bec-d16160161898" in namespace "emptydir-2459" to be "success or failure" Mar 8 11:08:03.559: INFO: Pod "pod-c78cac55-8b45-401e-9bec-d16160161898": Phase="Pending", Reason="", readiness=false. Elapsed: 4.238359ms Mar 8 11:08:05.563: INFO: Pod "pod-c78cac55-8b45-401e-9bec-d16160161898": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008051561s STEP: Saw pod success Mar 8 11:08:05.563: INFO: Pod "pod-c78cac55-8b45-401e-9bec-d16160161898" satisfied condition "success or failure" Mar 8 11:08:05.566: INFO: Trying to get logs from node kind-control-plane pod pod-c78cac55-8b45-401e-9bec-d16160161898 container test-container: STEP: delete the pod Mar 8 11:08:05.586: INFO: Waiting for pod pod-c78cac55-8b45-401e-9bec-d16160161898 to disappear Mar 8 11:08:05.591: INFO: Pod pod-c78cac55-8b45-401e-9bec-d16160161898 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:08:05.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2459" for this suite. 
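The EmptyDir case above mounts a memory-backed (tmpfs) volume and asserts its default mode. A minimal reproduction, assuming a busybox container stands in for the e2e mounttest image (image and command are illustrative; the suite's check expects the default world-writable mode):

$ kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    # print the mount flags (should show tmpfs) and the directory mode
    command: ["sh", "-c", "mount | grep /test-volume && stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # tmpfs on Linux, hence the [LinuxOnly] tag
EOF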
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":132,"skipped":2019,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:08:05.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 8 11:08:05.663: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 8 11:08:05.673: INFO: Waiting for terminating namespaces to be deleted... Mar 8 11:08:05.675: INFO: Logging pods the kubelet thinks is on node kind-control-plane before test Mar 8 11:08:05.682: INFO: kube-scheduler-kind-control-plane from kube-system started at 2020-03-08 10:17:29 +0000 UTC (1 container statuses recorded) Mar 8 11:08:05.682: INFO: Container kube-scheduler ready: true, restart count 0 Mar 8 11:08:05.682: INFO: coredns-6955765f44-2ncc6 from kube-system started at 2020-03-08 10:17:49 +0000 UTC (1 container statuses recorded) Mar 8 11:08:05.682: INFO: Container coredns ready: true, restart count 0 Mar 8 11:08:05.682: INFO: externalname-service-257zm from services-6041 started at 2020-03-08 11:07:56 +0000 UTC (1 container statuses recorded) Mar 8 11:08:05.682: INFO: Container externalname-service ready: true, restart count 0 Mar 8 11:08:05.682: INFO: etcd-kind-control-plane from kube-system started at 2020-03-08 10:17:29 +0000 UTC (1 container statuses recorded) Mar 8 11:08:05.682: INFO: Container etcd ready: true, restart count 0 Mar 8 11:08:05.682: INFO: kube-controller-manager-kind-control-plane from kube-system started at 2020-03-08 10:17:29 +0000 UTC (1 container statuses recorded) Mar 8 11:08:05.682: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 8 11:08:05.682: INFO: kube-proxy-9qrbc from kube-system started at 2020-03-08 10:17:45 +0000 UTC (1 container statuses recorded) Mar 8 11:08:05.682: INFO: Container kube-proxy ready: true, restart count 0 Mar 8 11:08:05.682: INFO: kindnet-rznts from kube-system started at 2020-03-08 10:17:45 +0000 UTC (1 container statuses recorded) Mar 8 11:08:05.682: INFO: Container kindnet-cni ready: true, restart count 0 Mar 8 11:08:05.682: INFO: externalname-service-wjjbd from services-6041 started at 2020-03-08 11:07:56 +0000 UTC (1 container statuses recorded) Mar 8 11:08:05.682: INFO: Container externalname-service ready: true, restart count 0 Mar 8 11:08:05.682: INFO: execpod5bbfx from services-6041 started at 2020-03-08 11:08:00 +0000 UTC (1 container statuses recorded) Mar 8 11:08:05.682: INFO: Container agnhost-pause ready: true, restart count 0 Mar 8 11:08:05.682: INFO: kube-apiserver-kind-control-plane from kube-system started at 2020-03-08 10:17:29 +0000 UTC (1 container statuses recorded) Mar 8 11:08:05.682: INFO: Container kube-apiserver ready: true, restart count 0 Mar 8 11:08:05.682: INFO: local-path-provisioner-7745554f7f-5f2b8 from 
local-path-storage started at 2020-03-08 10:17:49 +0000 UTC (1 container statuses recorded) Mar 8 11:08:05.682: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 8 11:08:05.682: INFO: coredns-6955765f44-8lfgq from kube-system started at 2020-03-08 10:17:52 +0000 UTC (1 container statuses recorded) Mar 8 11:08:05.682: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-58e48af2-7d89-4134-8c37-3c3aba0ba3e6 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-58e48af2-7d89-4134-8c37-3c3aba0ba3e6 off the node kind-control-plane STEP: verifying the node doesn't have the label kubernetes.io/e2e-58e48af2-7d89-4134-8c37-3c3aba0ba3e6 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:08:09.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1495" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":133,"skipped":2035,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:08:09.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Mar 8 11:08:09.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4005' Mar 8 11:08:10.262: INFO: stderr: "" Mar 8 11:08:10.262: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
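Both SchedulerPredicates cases above turn on spec.nodeSelector. A minimal sketch of the non-matching case (the selector key is illustrative): the pod stays Pending with a FailedScheduling event like the one recorded earlier, while the matching case simply labels the node first, exactly as the test did with its random kubernetes.io/e2e-* key:

$ kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    example.com/nonexistent: "true"   # no node carries this label
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF
$ kubectl label nodes kind-control-plane kubernetes.io/e2e-58e48af2-7d89-4134-8c37-3c3aba0ba3e6=42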
Mar 8 11:08:10.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4005' Mar 8 11:08:10.401: INFO: stderr: "" Mar 8 11:08:10.401: INFO: stdout: "update-demo-nautilus-jb28j update-demo-nautilus-v78xh " Mar 8 11:08:10.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jb28j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4005' Mar 8 11:08:10.520: INFO: stderr: "" Mar 8 11:08:10.520: INFO: stdout: "" Mar 8 11:08:10.520: INFO: update-demo-nautilus-jb28j is created but not running Mar 8 11:08:15.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4005' Mar 8 11:08:15.588: INFO: stderr: "" Mar 8 11:08:15.588: INFO: stdout: "update-demo-nautilus-jb28j update-demo-nautilus-v78xh " Mar 8 11:08:15.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jb28j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4005' Mar 8 11:08:15.665: INFO: stderr: "" Mar 8 11:08:15.665: INFO: stdout: "true" Mar 8 11:08:15.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jb28j -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4005' Mar 8 11:08:15.728: INFO: stderr: "" Mar 8 11:08:15.728: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 8 11:08:15.728: INFO: validating pod update-demo-nautilus-jb28j Mar 8 11:08:15.731: INFO: got data: { "image": "nautilus.jpg" } Mar 8 11:08:15.731: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 8 11:08:15.732: INFO: update-demo-nautilus-jb28j is verified up and running Mar 8 11:08:15.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v78xh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4005' Mar 8 11:08:15.796: INFO: stderr: "" Mar 8 11:08:15.796: INFO: stdout: "true" Mar 8 11:08:15.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v78xh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4005' Mar 8 11:08:15.875: INFO: stderr: "" Mar 8 11:08:15.875: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 8 11:08:15.875: INFO: validating pod update-demo-nautilus-v78xh Mar 8 11:08:15.877: INFO: got data: { "image": "nautilus.jpg" } Mar 8 11:08:15.877: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Mar 8 11:08:15.877: INFO: update-demo-nautilus-v78xh is verified up and running STEP: using delete to clean up resources Mar 8 11:08:15.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4005' Mar 8 11:08:15.961: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 8 11:08:15.961: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 8 11:08:15.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4005' Mar 8 11:08:16.034: INFO: stderr: "No resources found in kubectl-4005 namespace.\n" Mar 8 11:08:16.034: INFO: stdout: "" Mar 8 11:08:16.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4005 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 8 11:08:16.109: INFO: stderr: "" Mar 8 11:08:16.109: INFO: stdout: "update-demo-nautilus-jb28j\nupdate-demo-nautilus-v78xh\n" Mar 8 11:08:16.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4005' Mar 8 11:08:16.741: INFO: stderr: "No resources found in kubectl-4005 namespace.\n" Mar 8 11:08:16.741: INFO: stdout: "" Mar 8 11:08:16.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4005 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 8 11:08:16.841: INFO: stderr: "" Mar 8 11:08:16.841: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:08:16.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4005" for this suite. 
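The Update Demo sequence above creates a two-replica nautilus replication controller, polls each pod until the update-demo container is running and serving its nautilus.jpg payload, then force-deletes the controller. A sketch of the manifest implied by the log (the containerPort is assumed from the upstream update-demo example, not visible in the output):

$ kubectl create -f - <<EOF
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
        ports:
        - containerPort: 8080   # assumed from the demo manifest
EOF
$ kubectl delete rc update-demo-nautilus --grace-period=0 --force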
• [SLOW TEST:6.976 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":134,"skipped":2061,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:08:16.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 8 11:08:16.919: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9943613a-1945-4e7a-b551-1abd43d1c0ea" in namespace "downward-api-7519" to be "success or failure" Mar 8 11:08:16.925: INFO: Pod "downwardapi-volume-9943613a-1945-4e7a-b551-1abd43d1c0ea": Phase="Pending", Reason="", readiness=false. Elapsed: 5.915574ms Mar 8 11:08:18.929: INFO: Pod "downwardapi-volume-9943613a-1945-4e7a-b551-1abd43d1c0ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009372018s STEP: Saw pod success Mar 8 11:08:18.929: INFO: Pod "downwardapi-volume-9943613a-1945-4e7a-b551-1abd43d1c0ea" satisfied condition "success or failure" Mar 8 11:08:18.931: INFO: Trying to get logs from node kind-control-plane pod downwardapi-volume-9943613a-1945-4e7a-b551-1abd43d1c0ea container client-container: STEP: delete the pod Mar 8 11:08:18.967: INFO: Waiting for pod downwardapi-volume-9943613a-1945-4e7a-b551-1abd43d1c0ea to disappear Mar 8 11:08:18.981: INFO: Pod downwardapi-volume-9943613a-1945-4e7a-b551-1abd43d1c0ea no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:08:18.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7519" for this suite. 
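The Downward API case above checks the fallback behavior of resourceFieldRef: when the container sets no CPU limit, the projected file reports the node's allocatable CPU instead. A minimal sketch (pod, volume, and path names are illustrative):

$ kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # intentionally no resources.limits.cpu, so the projected file
    # falls back to node allocatable CPU
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m
EOF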
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":135,"skipped":2075,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:08:18.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin Mar 8 11:08:19.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5103 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Mar 8 11:08:20.629: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0308 11:08:20.571965 2493 log.go:172] (0xc000a54e70) (0xc0006dfae0) Create stream\nI0308 11:08:20.572118 2493 log.go:172] (0xc000a54e70) (0xc0006dfae0) Stream added, broadcasting: 1\nI0308 11:08:20.574799 2493 log.go:172] (0xc000a54e70) Reply frame received for 1\nI0308 11:08:20.574857 2493 log.go:172] (0xc000a54e70) (0xc0006dfb80) Create stream\nI0308 11:08:20.574913 2493 log.go:172] (0xc000a54e70) (0xc0006dfb80) Stream added, broadcasting: 3\nI0308 11:08:20.575934 2493 log.go:172] (0xc000a54e70) Reply frame received for 3\nI0308 11:08:20.576022 2493 log.go:172] (0xc000a54e70) (0xc00067c000) Create stream\nI0308 11:08:20.576057 2493 log.go:172] (0xc000a54e70) (0xc00067c000) Stream added, broadcasting: 5\nI0308 11:08:20.577316 2493 log.go:172] (0xc000a54e70) Reply frame received for 5\nI0308 11:08:20.577365 2493 log.go:172] (0xc000a54e70) (0xc0006dfc20) Create stream\nI0308 11:08:20.577390 2493 log.go:172] (0xc000a54e70) (0xc0006dfc20) Stream added, broadcasting: 7\nI0308 11:08:20.578385 2493 log.go:172] (0xc000a54e70) Reply frame received for 7\nI0308 11:08:20.578631 2493 log.go:172] (0xc0006dfb80) (3) Writing data frame\nI0308 11:08:20.578751 2493 log.go:172] (0xc0006dfb80) (3) Writing data frame\nI0308 11:08:20.579639 2493 log.go:172] (0xc000a54e70) Data frame received for 5\nI0308 11:08:20.579665 2493 log.go:172] (0xc00067c000) (5) Data frame handling\nI0308 11:08:20.579682 2493 log.go:172] (0xc00067c000) (5) Data frame sent\nI0308 11:08:20.580183 2493 log.go:172] (0xc000a54e70) Data frame received for 5\nI0308 11:08:20.580201 2493 log.go:172] (0xc00067c000) (5) Data frame handling\nI0308 11:08:20.580217 2493 log.go:172] (0xc00067c000) (5) Data frame sent\nI0308 11:08:20.599204 2493 log.go:172] (0xc000a54e70) Data frame received for 
5\nI0308 11:08:20.599234 2493 log.go:172] (0xc00067c000) (5) Data frame handling\nI0308 11:08:20.599361 2493 log.go:172] (0xc000a54e70) Data frame received for 7\nI0308 11:08:20.599381 2493 log.go:172] (0xc0006dfc20) (7) Data frame handling\nI0308 11:08:20.600464 2493 log.go:172] (0xc000a54e70) Data frame received for 1\nI0308 11:08:20.600488 2493 log.go:172] (0xc0006dfae0) (1) Data frame handling\nI0308 11:08:20.600505 2493 log.go:172] (0xc0006dfae0) (1) Data frame sent\nI0308 11:08:20.600528 2493 log.go:172] (0xc000a54e70) (0xc0006dfb80) Stream removed, broadcasting: 3\nI0308 11:08:20.600563 2493 log.go:172] (0xc000a54e70) (0xc0006dfae0) Stream removed, broadcasting: 1\nI0308 11:08:20.600600 2493 log.go:172] (0xc000a54e70) Go away received\nI0308 11:08:20.600883 2493 log.go:172] (0xc000a54e70) (0xc0006dfae0) Stream removed, broadcasting: 1\nI0308 11:08:20.600905 2493 log.go:172] (0xc000a54e70) (0xc0006dfb80) Stream removed, broadcasting: 3\nI0308 11:08:20.600916 2493 log.go:172] (0xc000a54e70) (0xc00067c000) Stream removed, broadcasting: 5\nI0308 11:08:20.600925 2493 log.go:172] (0xc000a54e70) (0xc0006dfc20) Stream removed, broadcasting: 7\n" Mar 8 11:08:20.629: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:08:22.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5103" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":136,"skipped":2133,"failed":0} S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have a terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:08:22.641: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have a terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:08:26.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5337" for this suite.
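The kubectl run --rm job case above (the 136th pass) attaches stdin to a one-shot job and removes it on exit; the "abcd1234" echoed back in stdout is whatever was piped in. Reconstructed from the command line in the log (note the stderr warning: --generator=job/v1 was already deprecated in v1.17):

$ printf 'abcd1234' | kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5103 \
    run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 \
    --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin \
    -- sh -c 'cat && echo stdin closed'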
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":137,"skipped":2134,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:08:26.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-6gkt STEP: Creating a pod to test atomic-volume-subpath Mar 8 11:08:26.829: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-6gkt" in namespace "subpath-1458" to be "success or failure" Mar 8 11:08:26.848: INFO: Pod "pod-subpath-test-projected-6gkt": Phase="Pending", Reason="", readiness=false. Elapsed: 19.436667ms Mar 8 11:08:28.851: INFO: Pod "pod-subpath-test-projected-6gkt": Phase="Running", Reason="", readiness=true. Elapsed: 2.022543911s Mar 8 11:08:30.856: INFO: Pod "pod-subpath-test-projected-6gkt": Phase="Running", Reason="", readiness=true. Elapsed: 4.026665326s Mar 8 11:08:32.859: INFO: Pod "pod-subpath-test-projected-6gkt": Phase="Running", Reason="", readiness=true. Elapsed: 6.030170613s Mar 8 11:08:34.863: INFO: Pod "pod-subpath-test-projected-6gkt": Phase="Running", Reason="", readiness=true. Elapsed: 8.034341911s Mar 8 11:08:36.867: INFO: Pod "pod-subpath-test-projected-6gkt": Phase="Running", Reason="", readiness=true. Elapsed: 10.038298709s Mar 8 11:08:38.871: INFO: Pod "pod-subpath-test-projected-6gkt": Phase="Running", Reason="", readiness=true. Elapsed: 12.04187863s Mar 8 11:08:40.874: INFO: Pod "pod-subpath-test-projected-6gkt": Phase="Running", Reason="", readiness=true. Elapsed: 14.045368904s Mar 8 11:08:42.878: INFO: Pod "pod-subpath-test-projected-6gkt": Phase="Running", Reason="", readiness=true. Elapsed: 16.048941025s Mar 8 11:08:44.882: INFO: Pod "pod-subpath-test-projected-6gkt": Phase="Running", Reason="", readiness=true. Elapsed: 18.052666348s Mar 8 11:08:46.885: INFO: Pod "pod-subpath-test-projected-6gkt": Phase="Running", Reason="", readiness=true. Elapsed: 20.055908486s Mar 8 11:08:48.888: INFO: Pod "pod-subpath-test-projected-6gkt": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.059641783s STEP: Saw pod success Mar 8 11:08:48.889: INFO: Pod "pod-subpath-test-projected-6gkt" satisfied condition "success or failure" Mar 8 11:08:48.891: INFO: Trying to get logs from node kind-control-plane pod pod-subpath-test-projected-6gkt container test-container-subpath-projected-6gkt: STEP: delete the pod Mar 8 11:08:48.911: INFO: Waiting for pod pod-subpath-test-projected-6gkt to disappear Mar 8 11:08:48.916: INFO: Pod pod-subpath-test-projected-6gkt no longer exists STEP: Deleting pod pod-subpath-test-projected-6gkt Mar 8 11:08:48.916: INFO: Deleting pod "pod-subpath-test-projected-6gkt" in namespace "subpath-1458" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:08:48.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1458" for this suite. • [SLOW TEST:22.189 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":138,"skipped":2142,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:08:48.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-90c048cc-0ecf-4960-a426-50fcd03085d7 Mar 8 11:08:49.039: INFO: Pod name my-hostname-basic-90c048cc-0ecf-4960-a426-50fcd03085d7: Found 0 pods out of 1 Mar 8 11:08:54.042: INFO: Pod name my-hostname-basic-90c048cc-0ecf-4960-a426-50fcd03085d7: Found 1 pods out of 1 Mar 8 11:08:54.042: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-90c048cc-0ecf-4960-a426-50fcd03085d7" are running Mar 8 11:08:54.048: INFO: Pod "my-hostname-basic-90c048cc-0ecf-4960-a426-50fcd03085d7-dcwh6" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-08 11:08:49 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-08 11:08:50 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-08 11:08:50 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC 
LastTransitionTime:2020-03-08 11:08:49 +0000 UTC Reason: Message:}]) Mar 8 11:08:54.048: INFO: Trying to dial the pod Mar 8 11:08:59.059: INFO: Controller my-hostname-basic-90c048cc-0ecf-4960-a426-50fcd03085d7: Got expected result from replica 1 [my-hostname-basic-90c048cc-0ecf-4960-a426-50fcd03085d7-dcwh6]: "my-hostname-basic-90c048cc-0ecf-4960-a426-50fcd03085d7-dcwh6", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:08:59.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-997" for this suite. • [SLOW TEST:10.136 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":139,"skipped":2224,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:08:59.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1362 STEP: creating the pod Mar 8 11:08:59.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8451' Mar 8 11:08:59.411: INFO: stderr: "" Mar 8 11:08:59.411: INFO: stdout: "pod/pause created\n" Mar 8 11:08:59.411: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Mar 8 11:08:59.411: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-8451" to be "running and ready" Mar 8 11:08:59.419: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.243258ms Mar 8 11:09:01.422: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 2.011702666s Mar 8 11:09:01.422: INFO: Pod "pause" satisfied condition "running and ready" Mar 8 11:09:01.423: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod Mar 8 11:09:01.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-8451' Mar 8 11:09:01.624: INFO: stderr: "" Mar 8 11:09:01.624: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Mar 8 11:09:01.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8451' Mar 8 11:09:02.178: INFO: stderr: "" Mar 8 11:09:02.178: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 3s testing-label-value\n" STEP: removing the label testing-label of a pod Mar 8 11:09:02.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-8451' Mar 8 11:09:02.301: INFO: stderr: "" Mar 8 11:09:02.301: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Mar 8 11:09:02.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8451' Mar 8 11:09:02.387: INFO: stderr: "" Mar 8 11:09:02.387: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 3s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1369 STEP: using delete to clean up resources Mar 8 11:09:02.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8451' Mar 8 11:09:02.480: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 8 11:09:02.480: INFO: stdout: "pod \"pause\" force deleted\n" Mar 8 11:09:02.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-8451' Mar 8 11:09:02.590: INFO: stderr: "No resources found in kubectl-8451 namespace.\n" Mar 8 11:09:02.590: INFO: stdout: "" Mar 8 11:09:02.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-8451 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 8 11:09:02.679: INFO: stderr: "" Mar 8 11:09:02.679: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:09:02.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8451" for this suite. 
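The label round-trip above is plain kubectl syntax: key=value adds or updates a label, a trailing dash removes it, and -L surfaces the value as an extra column. Condensed from the commands in the log:

$ kubectl label pods pause testing-label=testing-label-value --namespace=kubectl-8451
$ kubectl get pod pause -L testing-label --namespace=kubectl-8451
$ kubectl label pods pause testing-label- --namespace=kubectl-8451   # trailing '-' deletes the label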
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":140,"skipped":2227,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:09:02.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 8 11:09:02.759: INFO: PodSpec: initContainers in spec.initContainers Mar 8 11:09:49.782: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-f8180833-eef7-4cc7-b1c7-291d2ff0c08c", GenerateName:"", Namespace:"init-container-2497", SelfLink:"/api/v1/namespaces/init-container-2497/pods/pod-init-f8180833-eef7-4cc7-b1c7-291d2ff0c08c", UID:"7b2018a5-8e51-40ea-af95-b8a8bca44ece", ResourceVersion:"19638", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63719262542, loc:(*time.Location)(0x7d100a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"759148215"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-66h4z", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc004fba000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", 
Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-66h4z", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-66h4z", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-66h4z", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0029d20d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kind-control-plane", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002a84000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), 
SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0029d24e0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0029d25b0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0029d25b8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0029d25bc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262542, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262542, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262542, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262542, loc:(*time.Location)(0x7d100a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.2", PodIP:"10.244.0.7", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.0.7"}}, StartTime:(*v1.Time)(0xc002c74080), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0008d80e0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0008d8150)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://04d79b093a041b317259f9b4470e6a284a8c4870665a8078725bbf447820c817", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002c740c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002c740a0), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc0029d294f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:09:49.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2497" for this suite. • [SLOW TEST:47.104 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":141,"skipped":2242,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:09:49.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:09:52.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8902" for this suite. 
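The adoption flow exercised above hinges on the ReplicationController manager stamping a controller ownerReference onto any bare pod that matches the controller's selector. A minimal client-go sketch of the same flow outside the e2e framework (namespace, names and image are illustrative; the context-taking Create/Get signatures are the client-go 0.18+ form, slightly newer than the v1.17 tree this log came from):

package main

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    ns := "default" // the suite uses a generated namespace; "default" keeps the sketch short
    ctx := context.TODO()

    // 1. An orphan pod carrying only a 'name' label, as in the test.
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption", Labels: map[string]string{"name": "pod-adoption"}},
        Spec: corev1.PodSpec{Containers: []corev1.Container{{
            Name:  "pod-adoption",
            Image: "docker.io/library/httpd:2.4.38-alpine",
        }}},
    }
    if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }

    // 2. An RC whose selector matches that label; its controller should adopt the pod.
    one := int32(1)
    rc := &corev1.ReplicationController{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
        Spec: corev1.ReplicationControllerSpec{
            Replicas: &one,
            Selector: map[string]string{"name": "pod-adoption"},
            Template: &corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "pod-adoption"}},
                Spec:       pod.Spec,
            },
        },
    }
    if _, err := cs.CoreV1().ReplicationControllers(ns).Create(ctx, rc, metav1.CreateOptions{}); err != nil {
        panic(err)
    }

    // 3. Adoption shows up as a controller ownerReference on the pod.
    for i := 0; i < 30; i++ {
        p, err := cs.CoreV1().Pods(ns).Get(ctx, "pod-adoption", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        if ref := metav1.GetControllerOf(p); ref != nil {
            fmt.Printf("adopted by %s/%s\n", ref.Kind, ref.Name)
            return
        }
        time.Sleep(time.Second)
    }
}

The ReplicaSet test that follows exercises the reverse path as well: once a pod's labels stop matching the selector, the controller drops the ownerReference and releases the pod.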
•{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":142,"skipped":2251,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:09:52.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Mar 8 11:09:56.056: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:09:57.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-589" for this suite. •{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":143,"skipped":2260,"failed":0} ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:09:57.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 11:09:57.224: INFO: (0) /api/v1/nodes/kind-control-plane:10250/proxy/logs/:
containers/ pods/ (200; 4.6397ms)
Mar 8 11:09:57.227: INFO: (1) /api/v1/nodes/kind-control-plane:10250/proxy/logs/: containers/ pods/ (200; 2.704645ms)
Mar 8 11:09:57.229: INFO: (2) /api/v1/nodes/kind-control-plane:10250/proxy/logs/: containers/ pods/ (200; 2.845096ms)
Mar 8 11:09:57.232: INFO: (3) /api/v1/nodes/kind-control-plane:10250/proxy/logs/: containers/ pods/ (200; 2.80894ms)
Mar 8 11:09:57.235: INFO: (4) /api/v1/nodes/kind-control-plane:10250/proxy/logs/: containers/ pods/ (200; 2.615187ms)
Mar 8 11:09:57.237: INFO: (5) /api/v1/nodes/kind-control-plane:10250/proxy/logs/: containers/ pods/ (200; 2.202099ms)
Mar 8 11:09:57.240: INFO: (6) /api/v1/nodes/kind-control-plane:10250/proxy/logs/: containers/ pods/ (200; 2.54223ms)
Mar 8 11:09:57.242: INFO: (7) /api/v1/nodes/kind-control-plane:10250/proxy/logs/: containers/ pods/ (200; 2.301694ms)
Mar 8 11:09:57.244: INFO: (8) /api/v1/nodes/kind-control-plane:10250/proxy/logs/: containers/ pods/ (200; 2.415918ms)
Mar 8 11:09:57.247: INFO: (9) /api/v1/nodes/kind-control-plane:10250/proxy/logs/: containers/ pods/ (200; 2.259461ms)
Mar 8 11:09:57.249: INFO: (10) /api/v1/nodes/kind-control-plane:10250/proxy/logs/: containers/ pods/ (200; 2.315032ms)
Mar 8 11:09:57.251: INFO: (11) /api/v1/nodes/kind-control-plane:10250/proxy/logs/: containers/ pods/ (200; 2.377555ms)
Mar 8 11:09:57.254: INFO: (12) /api/v1/nodes/kind-control-plane:10250/proxy/logs/: containers/ pods/ (200; 2.575849ms)
Mar 8 11:09:57.257: INFO: (13) /api/v1/nodes/kind-control-plane:10250/proxy/logs/: containers/ pods/ (200; 2.68655ms)
Mar 8 11:09:57.259: INFO: (14) /api/v1/nodes/kind-control-plane:10250/proxy/logs/: containers/ pods/ (200; 2.597803ms)
Mar 8 11:09:57.262: INFO: (15) /api/v1/nodes/kind-control-plane:10250/proxy/logs/: containers/ pods/ (200; 2.58158ms)
Mar 8 11:09:57.265: INFO: (16) /api/v1/nodes/kind-control-plane:10250/proxy/logs/: containers/ pods/ (200; 2.556033ms)
Mar 8 11:09:57.267: INFO: (17) /api/v1/nodes/kind-control-plane:10250/proxy/logs/: containers/ pods/ (200; 2.370312ms)
Mar 8 11:09:57.269: INFO: (18) /api/v1/nodes/kind-control-plane:10250/proxy/logs/: containers/ pods/ (200; 2.082703ms)
Mar 8 11:09:57.271: INFO: (19) /api/v1/nodes/kind-control-plane:10250/proxy/logs/: containers/ pods/
(200; 2.375555ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:09:57.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-5661" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":144,"skipped":2260,"failed":0} SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:09:57.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-390 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Mar 8 11:09:57.403: INFO: Found 0 stateful pods, waiting for 3 Mar 8 11:10:07.407: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 8 11:10:07.407: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 8 11:10:07.407: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Mar 8 11:10:07.432: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Mar 8 11:10:17.475: INFO: Updating stateful set ss2 Mar 8 11:10:17.513: INFO: Waiting for Pod statefulset-390/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Mar 8 11:10:27.644: INFO: Found 2 stateful pods, waiting for 3 Mar 8 11:10:37.648: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 8 11:10:37.648: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 8 11:10:37.648: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Mar 8 11:10:37.675: INFO: Updating stateful set ss2 Mar 8 11:10:37.682: INFO: Waiting for Pod statefulset-390/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 8 11:10:47.706: INFO: Updating stateful set ss2 Mar 8 11:10:47.751: INFO: Waiting for StatefulSet statefulset-390/ss2 to complete update Mar 8 11:10:47.751: INFO: Waiting for Pod statefulset-390/ss2-0 to have revision 
ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 8 11:10:57.757: INFO: Deleting all statefulset in ns statefulset-390 Mar 8 11:10:57.760: INFO: Scaling statefulset ss2 to 0 Mar 8 11:11:07.774: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 11:11:07.776: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:11:07.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-390" for this suite. • [SLOW TEST:70.538 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":145,"skipped":2264,"failed":0} [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:11:07.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-b2315158-e889-4d95-bd85-baf8e69b569e STEP: Creating a pod to test consume configMaps Mar 8 11:11:07.901: INFO: Waiting up to 5m0s for pod "pod-configmaps-20b47f4a-6527-4388-97cc-ff4acb0f82b5" in namespace "configmap-4563" to be "success or failure" Mar 8 11:11:07.927: INFO: Pod "pod-configmaps-20b47f4a-6527-4388-97cc-ff4acb0f82b5": Phase="Pending", Reason="", readiness=false. Elapsed: 25.568308ms Mar 8 11:11:09.931: INFO: Pod "pod-configmaps-20b47f4a-6527-4388-97cc-ff4acb0f82b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029909307s Mar 8 11:11:11.935: INFO: Pod "pod-configmaps-20b47f4a-6527-4388-97cc-ff4acb0f82b5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.033817091s STEP: Saw pod success Mar 8 11:11:11.935: INFO: Pod "pod-configmaps-20b47f4a-6527-4388-97cc-ff4acb0f82b5" satisfied condition "success or failure" Mar 8 11:11:11.938: INFO: Trying to get logs from node kind-control-plane pod pod-configmaps-20b47f4a-6527-4388-97cc-ff4acb0f82b5 container configmap-volume-test: STEP: delete the pod Mar 8 11:11:11.969: INFO: Waiting for pod pod-configmaps-20b47f4a-6527-4388-97cc-ff4acb0f82b5 to disappear Mar 8 11:11:11.992: INFO: Pod pod-configmaps-20b47f4a-6527-4388-97cc-ff4acb0f82b5 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:11:11.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4563" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":146,"skipped":2264,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:11:12.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-2e0bb169-eaf5-4c1d-9fe1-3783f03c0367 STEP: Creating a pod to test consume configMaps Mar 8 11:11:12.059: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-56802ed2-969e-4974-b9d9-e0fddfdadffb" in namespace "projected-9126" to be "success or failure" Mar 8 11:11:12.070: INFO: Pod "pod-projected-configmaps-56802ed2-969e-4974-b9d9-e0fddfdadffb": Phase="Pending", Reason="", readiness=false. Elapsed: 11.252304ms Mar 8 11:11:14.083: INFO: Pod "pod-projected-configmaps-56802ed2-969e-4974-b9d9-e0fddfdadffb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.023911148s STEP: Saw pod success Mar 8 11:11:14.083: INFO: Pod "pod-projected-configmaps-56802ed2-969e-4974-b9d9-e0fddfdadffb" satisfied condition "success or failure" Mar 8 11:11:14.086: INFO: Trying to get logs from node kind-control-plane pod pod-projected-configmaps-56802ed2-969e-4974-b9d9-e0fddfdadffb container projected-configmap-volume-test: STEP: delete the pod Mar 8 11:11:14.103: INFO: Waiting for pod pod-projected-configmaps-56802ed2-969e-4974-b9d9-e0fddfdadffb to disappear Mar 8 11:11:14.122: INFO: Pod pod-projected-configmaps-56802ed2-969e-4974-b9d9-e0fddfdadffb no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:11:14.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9126" for this suite. 
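The non-root variant above differs from the plain projected-configMap consumption tests only in the pod's security context. A sketch of the pod shape (names are illustrative; the program only prints the object as JSON so it can be inspected without a cluster):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    uid := int64(1000) // any non-root UID
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps"},
        Spec: corev1.PodSpec{
            // Pod-level runAsUser makes the volume consumer a non-root process.
            SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
            RestartPolicy:   corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "projected-configmap-volume-test",
                Image:   "docker.io/library/busybox:1.29",
                Command: []string{"cat", "/etc/projected-configmap-volume/data-1"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "projected-configmap-volume",
                    MountPath: "/etc/projected-configmap-volume",
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "projected-configmap-volume",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            ConfigMap: &corev1.ConfigMapProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume"},
                            },
                        }},
                    },
                },
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}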
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":147,"skipped":2293,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:11:14.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 11:11:14.161: INFO: Creating ReplicaSet my-hostname-basic-5f3a5aa8-5a55-4948-a584-d757d130fb1c Mar 8 11:11:14.211: INFO: Pod name my-hostname-basic-5f3a5aa8-5a55-4948-a584-d757d130fb1c: Found 0 pods out of 1 Mar 8 11:11:19.226: INFO: Pod name my-hostname-basic-5f3a5aa8-5a55-4948-a584-d757d130fb1c: Found 1 pods out of 1 Mar 8 11:11:19.226: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-5f3a5aa8-5a55-4948-a584-d757d130fb1c" is running Mar 8 11:11:19.229: INFO: Pod "my-hostname-basic-5f3a5aa8-5a55-4948-a584-d757d130fb1c-8z7kx" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-08 11:11:14 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-08 11:11:16 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-08 11:11:16 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-08 11:11:14 +0000 UTC Reason: Message:}]) Mar 8 11:11:19.229: INFO: Trying to dial the pod Mar 8 11:11:24.240: INFO: Controller my-hostname-basic-5f3a5aa8-5a55-4948-a584-d757d130fb1c: Got expected result from replica 1 [my-hostname-basic-5f3a5aa8-5a55-4948-a584-d757d130fb1c-8z7kx]: "my-hostname-basic-5f3a5aa8-5a55-4948-a584-d757d130fb1c-8z7kx", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:11:24.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-7485" for this suite. 
• [SLOW TEST:10.117 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":148,"skipped":2357,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:11:24.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:11:32.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8226" for this suite. • [SLOW TEST:8.060 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":149,"skipped":2365,"failed":0} SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:11:32.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-f40a4823-ad68-4a71-8c56-c4cdf6706199 STEP: Creating a pod to test consume configMaps Mar 8 11:11:32.403: INFO: Waiting up to 5m0s for pod "pod-configmaps-9d982c39-7d54-4d8e-bf75-d2367259b73f" in namespace "configmap-9736" to be "success or failure" Mar 8 11:11:32.411: INFO: Pod "pod-configmaps-9d982c39-7d54-4d8e-bf75-d2367259b73f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.432789ms Mar 8 11:11:34.416: INFO: Pod "pod-configmaps-9d982c39-7d54-4d8e-bf75-d2367259b73f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012927208s STEP: Saw pod success Mar 8 11:11:34.416: INFO: Pod "pod-configmaps-9d982c39-7d54-4d8e-bf75-d2367259b73f" satisfied condition "success or failure" Mar 8 11:11:34.419: INFO: Trying to get logs from node kind-control-plane pod pod-configmaps-9d982c39-7d54-4d8e-bf75-d2367259b73f container configmap-volume-test: STEP: delete the pod Mar 8 11:11:34.437: INFO: Waiting for pod pod-configmaps-9d982c39-7d54-4d8e-bf75-d2367259b73f to disappear Mar 8 11:11:34.441: INFO: Pod pod-configmaps-9d982c39-7d54-4d8e-bf75-d2367259b73f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:11:34.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9736" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":150,"skipped":2369,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:11:34.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 8 11:11:34.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-9774' Mar 8 11:11:34.629: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 8 11:11:34.629: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc Mar 8 11:11:34.645: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-j8tsd] Mar 8 11:11:34.645: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-j8tsd" in namespace "kubectl-9774" to be "running and ready" Mar 8 11:11:34.651: INFO: Pod "e2e-test-httpd-rc-j8tsd": Phase="Pending", Reason="", readiness=false. Elapsed: 5.885567ms Mar 8 11:11:36.694: INFO: Pod "e2e-test-httpd-rc-j8tsd": Phase="Running", Reason="", readiness=true. Elapsed: 2.048628783s Mar 8 11:11:36.694: INFO: Pod "e2e-test-httpd-rc-j8tsd" satisfied condition "running and ready" Mar 8 11:11:36.694: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-httpd-rc-j8tsd] Mar 8 11:11:36.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-9774' Mar 8 11:11:36.836: INFO: stderr: "" Mar 8 11:11:36.836: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.0.31. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.0.31. Set the 'ServerName' directive globally to suppress this message\n[Sun Mar 08 11:11:35.739923 2020] [mpm_event:notice] [pid 1:tid 139912716315496] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Sun Mar 08 11:11:35.739994 2020] [core:notice] [pid 1:tid 139912716315496] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 Mar 8 11:11:36.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-9774' Mar 8 11:11:36.946: INFO: stderr: "" Mar 8 11:11:36.946: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:11:36.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9774" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":151,"skipped":2376,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:11:36.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-1a113432-79af-4fd4-82ab-43ab848917ef STEP: Creating a pod to test consume secrets Mar 8 11:11:37.028: INFO: Waiting up to 5m0s for pod "pod-secrets-85172c1d-e3d9-4a10-a3fb-bcf5dad97fd6" in namespace "secrets-3416" to be "success or failure" Mar 8 11:11:37.047: INFO: Pod "pod-secrets-85172c1d-e3d9-4a10-a3fb-bcf5dad97fd6": Phase="Pending", Reason="", readiness=false. Elapsed: 19.162384ms Mar 8 11:11:39.050: INFO: Pod "pod-secrets-85172c1d-e3d9-4a10-a3fb-bcf5dad97fd6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022354897s Mar 8 11:11:41.065: INFO: Pod "pod-secrets-85172c1d-e3d9-4a10-a3fb-bcf5dad97fd6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.036452455s STEP: Saw pod success Mar 8 11:11:41.065: INFO: Pod "pod-secrets-85172c1d-e3d9-4a10-a3fb-bcf5dad97fd6" satisfied condition "success or failure" Mar 8 11:11:41.070: INFO: Trying to get logs from node kind-control-plane pod pod-secrets-85172c1d-e3d9-4a10-a3fb-bcf5dad97fd6 container secret-volume-test: STEP: delete the pod Mar 8 11:11:41.097: INFO: Waiting for pod pod-secrets-85172c1d-e3d9-4a10-a3fb-bcf5dad97fd6 to disappear Mar 8 11:11:41.118: INFO: Pod pod-secrets-85172c1d-e3d9-4a10-a3fb-bcf5dad97fd6 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:11:41.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3416" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":152,"skipped":2414,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:11:41.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 8 11:11:43.705: INFO: Successfully updated pod "labelsupdate41848420-dc4b-4087-b328-1670696741b8" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:11:45.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8747" for this suite. 
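The labels-update test works because a downwardAPI volume is kept in sync by the kubelet: when the pod's labels change, the projected file is rewritten, and the test simply waits to observe the new content. A print-only sketch of the relevant pod pieces (names, label values and mount path are illustrative):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            Name:   "labelsupdate",
            Labels: map[string]string{"key": "value1"}, // updating this later rewrites the projected file
        },
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:         "client-container",
                Image:        "docker.io/library/busybox:1.29",
                Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path:     "labels",
                            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
                        }},
                    },
                },
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}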
•{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":153,"skipped":2416,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:11:45.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-d76a926f-44e5-4c52-87b9-a9156c0e5cf2 STEP: Creating a pod to test consume configMaps Mar 8 11:11:45.821: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f64bbee5-caf6-4f01-9d19-9ea2dbc48537" in namespace "projected-320" to be "success or failure" Mar 8 11:11:45.855: INFO: Pod "pod-projected-configmaps-f64bbee5-caf6-4f01-9d19-9ea2dbc48537": Phase="Pending", Reason="", readiness=false. Elapsed: 34.285444ms Mar 8 11:11:47.880: INFO: Pod "pod-projected-configmaps-f64bbee5-caf6-4f01-9d19-9ea2dbc48537": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058622659s STEP: Saw pod success Mar 8 11:11:47.880: INFO: Pod "pod-projected-configmaps-f64bbee5-caf6-4f01-9d19-9ea2dbc48537" satisfied condition "success or failure" Mar 8 11:11:47.882: INFO: Trying to get logs from node kind-control-plane pod pod-projected-configmaps-f64bbee5-caf6-4f01-9d19-9ea2dbc48537 container projected-configmap-volume-test: STEP: delete the pod Mar 8 11:11:47.899: INFO: Waiting for pod pod-projected-configmaps-f64bbee5-caf6-4f01-9d19-9ea2dbc48537 to disappear Mar 8 11:11:47.903: INFO: Pod pod-projected-configmaps-f64bbee5-caf6-4f01-9d19-9ea2dbc48537 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:11:47.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-320" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":154,"skipped":2433,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:11:47.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-3744e48b-3978-4d8f-9bd8-001591e47b4f STEP: Creating a pod to test consume secrets Mar 8 11:11:47.977: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fdfc1138-fe9a-4e0a-afbb-d5a866c474e4" in namespace "projected-6413" to be "success or failure" Mar 8 11:11:48.019: INFO: Pod "pod-projected-secrets-fdfc1138-fe9a-4e0a-afbb-d5a866c474e4": Phase="Pending", Reason="", readiness=false. Elapsed: 42.597119ms Mar 8 11:11:50.023: INFO: Pod "pod-projected-secrets-fdfc1138-fe9a-4e0a-afbb-d5a866c474e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.045894772s STEP: Saw pod success Mar 8 11:11:50.023: INFO: Pod "pod-projected-secrets-fdfc1138-fe9a-4e0a-afbb-d5a866c474e4" satisfied condition "success or failure" Mar 8 11:11:50.025: INFO: Trying to get logs from node kind-control-plane pod pod-projected-secrets-fdfc1138-fe9a-4e0a-afbb-d5a866c474e4 container projected-secret-volume-test: STEP: delete the pod Mar 8 11:11:50.049: INFO: Waiting for pod pod-projected-secrets-fdfc1138-fe9a-4e0a-afbb-d5a866c474e4 to disappear Mar 8 11:11:50.053: INFO: Pod pod-projected-secrets-fdfc1138-fe9a-4e0a-afbb-d5a866c474e4 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:11:50.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6413" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":155,"skipped":2469,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:11:50.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 8 11:11:50.142: INFO: Waiting up to 5m0s for pod "downwardapi-volume-497d232d-0576-4c87-87d6-136f5163ca8d" in namespace "projected-7031" to be "success or failure" Mar 8 11:11:50.144: INFO: Pod "downwardapi-volume-497d232d-0576-4c87-87d6-136f5163ca8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.36705ms Mar 8 11:11:52.148: INFO: Pod "downwardapi-volume-497d232d-0576-4c87-87d6-136f5163ca8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006035398s Mar 8 11:11:54.151: INFO: Pod "downwardapi-volume-497d232d-0576-4c87-87d6-136f5163ca8d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009551572s STEP: Saw pod success Mar 8 11:11:54.151: INFO: Pod "downwardapi-volume-497d232d-0576-4c87-87d6-136f5163ca8d" satisfied condition "success or failure" Mar 8 11:11:54.154: INFO: Trying to get logs from node kind-control-plane pod downwardapi-volume-497d232d-0576-4c87-87d6-136f5163ca8d container client-container: STEP: delete the pod Mar 8 11:11:54.189: INFO: Waiting for pod downwardapi-volume-497d232d-0576-4c87-87d6-136f5163ca8d to disappear Mar 8 11:11:54.197: INFO: Pod downwardapi-volume-497d232d-0576-4c87-87d6-136f5163ca8d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:11:54.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7031" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":156,"skipped":2474,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:11:54.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments Mar 8 11:11:54.264: INFO: Waiting up to 5m0s for pod "client-containers-a88b47c9-202e-41f5-883f-88ab64c2d6a2" in namespace "containers-6420" to be "success or failure" Mar 8 11:11:54.269: INFO: Pod "client-containers-a88b47c9-202e-41f5-883f-88ab64c2d6a2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097048ms Mar 8 11:11:56.275: INFO: Pod "client-containers-a88b47c9-202e-41f5-883f-88ab64c2d6a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010223623s STEP: Saw pod success Mar 8 11:11:56.275: INFO: Pod "client-containers-a88b47c9-202e-41f5-883f-88ab64c2d6a2" satisfied condition "success or failure" Mar 8 11:11:56.278: INFO: Trying to get logs from node kind-control-plane pod client-containers-a88b47c9-202e-41f5-883f-88ab64c2d6a2 container test-container: STEP: delete the pod Mar 8 11:11:56.295: INFO: Waiting for pod client-containers-a88b47c9-202e-41f5-883f-88ab64c2d6a2 to disappear Mar 8 11:11:56.322: INFO: Pod client-containers-a88b47c9-202e-41f5-883f-88ab64c2d6a2 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:11:56.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6420" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":157,"skipped":2491,"failed":0} SS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:11:56.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-be1dce93-9018-44d8-940e-ee283a6ed4e9 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-be1dce93-9018-44d8-940e-ee283a6ed4e9 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:12:00.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1178" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":158,"skipped":2493,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:12:00.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 11:12:01.034: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 11:12:04.073: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:12:04.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2766" for this suite. STEP: Destroying namespace "webhook-2766-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":159,"skipped":2496,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:12:04.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 8 11:12:04.354: INFO: Waiting up to 5m0s for pod "pod-1484aca1-df18-46da-ac24-bd14f602ecf5" in namespace "emptydir-9148" to be "success or failure" Mar 8 11:12:04.359: INFO: Pod "pod-1484aca1-df18-46da-ac24-bd14f602ecf5": Phase="Pending", Reason="", readiness=false. Elapsed: 5.261359ms Mar 8 11:12:06.363: INFO: Pod "pod-1484aca1-df18-46da-ac24-bd14f602ecf5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008855196s STEP: Saw pod success Mar 8 11:12:06.363: INFO: Pod "pod-1484aca1-df18-46da-ac24-bd14f602ecf5" satisfied condition "success or failure" Mar 8 11:12:06.365: INFO: Trying to get logs from node kind-control-plane pod pod-1484aca1-df18-46da-ac24-bd14f602ecf5 container test-container: STEP: delete the pod Mar 8 11:12:06.400: INFO: Waiting for pod pod-1484aca1-df18-46da-ac24-bd14f602ecf5 to disappear Mar 8 11:12:06.407: INFO: Pod pod-1484aca1-df18-46da-ac24-bd14f602ecf5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:12:06.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9148" for this suite. 
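The (root,0644,tmpfs) case mounts an emptyDir backed by memory (tmpfs) and verifies a file created with mode 0644 by root. A print-only sketch of such a pod (names and the shell probe are illustrative):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "docker.io/library/busybox:1.29",
                // Write a 0644 file as root and show the mount is tmpfs-backed.
                Command: []string{"sh", "-c",
                    "echo content > /test-volume/file && chmod 0644 /test-volume/file && ls -l /test-volume/file && mount | grep /test-volume"},
                VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    // Medium "Memory" puts the emptyDir on tmpfs instead of node disk.
                    EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                },
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}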
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":160,"skipped":2498,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:12:06.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:12:29.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8702" for this suite. 
• [SLOW TEST:23.424 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2509,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:12:29.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 11:12:30.403: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 8 11:12:32.413: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262750, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262750, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262750, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262750, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 11:12:35.452: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 11:12:35.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9749-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:12:36.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-385" for this suite. STEP: Destroying namespace "webhook-385-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.883 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":162,"skipped":2546,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:12:36.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0308 11:13:07.325921 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 8 11:13:07.325: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:13:07.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2111" for this suite. • [SLOW TEST:30.610 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":163,"skipped":2575,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:13:07.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0308 11:13:19.357512 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 8 11:13:19.357: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:13:19.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5116" for this suite. • [SLOW TEST:12.031 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":164,"skipped":2589,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:13:19.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-4788 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 8 11:13:19.476: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 8 11:13:43.561: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.0.49:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4788 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 11:13:43.561: INFO: >>> kubeConfig: /root/.kube/config I0308 11:13:43.599686 6 log.go:172] (0xc002a902c0) (0xc0008e1180) Create stream I0308 11:13:43.599712 6 log.go:172] (0xc002a902c0) (0xc0008e1180) Stream added, broadcasting: 1 I0308 11:13:43.602057 6 log.go:172] (0xc002a902c0) Reply frame received for 1 I0308 11:13:43.602098 6 log.go:172] (0xc002a902c0) 
(0xc00129c1e0) Create stream I0308 11:13:43.602113 6 log.go:172] (0xc002a902c0) (0xc00129c1e0) Stream added, broadcasting: 3 I0308 11:13:43.603495 6 log.go:172] (0xc002a902c0) Reply frame received for 3 I0308 11:13:43.603543 6 log.go:172] (0xc002a902c0) (0xc00123ac80) Create stream I0308 11:13:43.603558 6 log.go:172] (0xc002a902c0) (0xc00123ac80) Stream added, broadcasting: 5 I0308 11:13:43.604630 6 log.go:172] (0xc002a902c0) Reply frame received for 5 I0308 11:13:43.676284 6 log.go:172] (0xc002a902c0) Data frame received for 3 I0308 11:13:43.676330 6 log.go:172] (0xc00129c1e0) (3) Data frame handling I0308 11:13:43.676360 6 log.go:172] (0xc002a902c0) Data frame received for 5 I0308 11:13:43.676386 6 log.go:172] (0xc00123ac80) (5) Data frame handling I0308 11:13:43.676411 6 log.go:172] (0xc00129c1e0) (3) Data frame sent I0308 11:13:43.676427 6 log.go:172] (0xc002a902c0) Data frame received for 3 I0308 11:13:43.676439 6 log.go:172] (0xc00129c1e0) (3) Data frame handling I0308 11:13:43.678529 6 log.go:172] (0xc002a902c0) Data frame received for 1 I0308 11:13:43.678555 6 log.go:172] (0xc0008e1180) (1) Data frame handling I0308 11:13:43.678567 6 log.go:172] (0xc0008e1180) (1) Data frame sent I0308 11:13:43.678579 6 log.go:172] (0xc002a902c0) (0xc0008e1180) Stream removed, broadcasting: 1 I0308 11:13:43.678596 6 log.go:172] (0xc002a902c0) Go away received I0308 11:13:43.678755 6 log.go:172] (0xc002a902c0) (0xc0008e1180) Stream removed, broadcasting: 1 I0308 11:13:43.678775 6 log.go:172] (0xc002a902c0) (0xc00129c1e0) Stream removed, broadcasting: 3 I0308 11:13:43.678788 6 log.go:172] (0xc002a902c0) (0xc00123ac80) Stream removed, broadcasting: 5 Mar 8 11:13:43.678: INFO: Found all expected endpoints: [netserver-0] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:13:43.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4788" for this suite. 
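The exec block above reduces to a single HTTP request: the framework curls the netserver pod's /hostName endpoint from a second test pod and checks the answer against the expected endpoint list. A minimal standalone sketch of the same probe (pod IP and port copied from the log; it assumes the pod network is reachable from wherever it runs):

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// 10.244.0.49:8080 is the netserver pod address from the log above;
	// /hostName is the agnhost endpoint that answers with the pod's hostname.
	client := &http.Client{Timeout: 15 * time.Second}
	resp, err := client.Get("http://10.244.0.49:8080/hostName")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	name, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Printf("hostName -> %s\n", name) // the test expects "netserver-0"
}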
• [SLOW TEST:24.321 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":165,"skipped":2618,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:13:43.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 8 11:13:46.282: INFO: Successfully updated pod "pod-update-activedeadlineseconds-d91c0eea-6945-4ee2-97dc-5e0820f2259c" Mar 8 11:13:46.282: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-d91c0eea-6945-4ee2-97dc-5e0820f2259c" in namespace "pods-9921" to be "terminated due to deadline exceeded" Mar 8 11:13:46.290: INFO: Pod "pod-update-activedeadlineseconds-d91c0eea-6945-4ee2-97dc-5e0820f2259c": Phase="Running", Reason="", readiness=true. Elapsed: 7.345314ms Mar 8 11:13:48.294: INFO: Pod "pod-update-activedeadlineseconds-d91c0eea-6945-4ee2-97dc-5e0820f2259c": Phase="Running", Reason="", readiness=true. Elapsed: 2.011274009s Mar 8 11:13:50.298: INFO: Pod "pod-update-activedeadlineseconds-d91c0eea-6945-4ee2-97dc-5e0820f2259c": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.015184915s Mar 8 11:13:50.298: INFO: Pod "pod-update-activedeadlineseconds-d91c0eea-6945-4ee2-97dc-5e0820f2259c" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:13:50.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9921" for this suite. 
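The update step above sets spec.activeDeadlineSeconds on a running pod, after which the kubelet fails the pod with Reason=DeadlineExceeded. A rough client-go sketch of that step, with illustrative pod and namespace names; note the API only allows this field to be added or decreased, never raised:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods := cs.CoreV1().Pods("default") // namespace and name are illustrative
	pod, err := pods.Get(context.TODO(), "pod-update-activedeadlineseconds", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	deadline := int64(5) // seconds; the kubelet kills the pod once this elapses
	pod.Spec.ActiveDeadlineSeconds = &deadline
	if _, err := pods.Update(context.TODO(), pod, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}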
• [SLOW TEST:6.619 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":166,"skipped":2626,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:13:50.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Mar 8 11:13:50.364: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. Mar 8 11:13:50.989: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Mar 8 11:13:53.175: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262830, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262830, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262831, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262830, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 8 11:13:55.177: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262830, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262830, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262831, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262830, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 8 11:13:57.178: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262830, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262830, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262831, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262830, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 8 11:14:00.212: INFO: Waited 1.026184151s for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:14:00.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-4310" for this suite. • [SLOW TEST:10.480 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":167,"skipped":2632,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:14:00.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 8 11:14:00.855: INFO: Waiting up to 5m0s for pod "pod-ade13b82-69d6-4724-a38e-e7c045026ffe" in namespace "emptydir-407" to be "success or failure" Mar 8 11:14:00.875: INFO: Pod "pod-ade13b82-69d6-4724-a38e-e7c045026ffe": Phase="Pending", Reason="", readiness=false. Elapsed: 19.938999ms Mar 8 11:14:02.879: INFO: Pod "pod-ade13b82-69d6-4724-a38e-e7c045026ffe": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.024434684s STEP: Saw pod success Mar 8 11:14:02.879: INFO: Pod "pod-ade13b82-69d6-4724-a38e-e7c045026ffe" satisfied condition "success or failure" Mar 8 11:14:02.883: INFO: Trying to get logs from node kind-control-plane pod pod-ade13b82-69d6-4724-a38e-e7c045026ffe container test-container: STEP: delete the pod Mar 8 11:14:02.922: INFO: Waiting for pod pod-ade13b82-69d6-4724-a38e-e7c045026ffe to disappear Mar 8 11:14:02.925: INFO: Pod pod-ade13b82-69d6-4724-a38e-e7c045026ffe no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:14:02.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-407" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":168,"skipped":2644,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:14:02.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server Mar 8 11:14:03.000: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:14:03.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7178" for this suite. 
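With -p 0, kubectl proxy binds an ephemeral port, so the chosen port can only be learned by parsing the first line the proxy prints before curling /api/ through it. A small sketch of what the test automates (kubectl must be on PATH; --disable-filter matches the invocation logged above):

package main

import (
	"bufio"
	"fmt"
	"os/exec"
)

func main() {
	// -p 0 asks the proxy for an ephemeral port; the port is only
	// discoverable from the first stdout line.
	cmd := exec.Command("kubectl", "proxy", "-p", "0", "--disable-filter")
	out, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	line, err := bufio.NewReader(out).ReadString('\n')
	if err != nil {
		panic(err)
	}
	fmt.Print(line) // e.g. "Starting to serve on 127.0.0.1:38383"
	_ = cmd.Process.Kill()
}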
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":169,"skipped":2670,"failed":0} SSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:14:03.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 8 11:14:03.178: INFO: Waiting up to 5m0s for pod "downward-api-88d69b98-3416-4b17-9504-fd59a9f53ad3" in namespace "downward-api-2885" to be "success or failure" Mar 8 11:14:03.189: INFO: Pod "downward-api-88d69b98-3416-4b17-9504-fd59a9f53ad3": Phase="Pending", Reason="", readiness=false. Elapsed: 11.563492ms Mar 8 11:14:05.192: INFO: Pod "downward-api-88d69b98-3416-4b17-9504-fd59a9f53ad3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014766874s STEP: Saw pod success Mar 8 11:14:05.192: INFO: Pod "downward-api-88d69b98-3416-4b17-9504-fd59a9f53ad3" satisfied condition "success or failure" Mar 8 11:14:05.195: INFO: Trying to get logs from node kind-control-plane pod downward-api-88d69b98-3416-4b17-9504-fd59a9f53ad3 container dapi-container: STEP: delete the pod Mar 8 11:14:05.215: INFO: Waiting for pod downward-api-88d69b98-3416-4b17-9504-fd59a9f53ad3 to disappear Mar 8 11:14:05.219: INFO: Pod downward-api-88d69b98-3416-4b17-9504-fd59a9f53ad3 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:14:05.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2885" for this suite. 
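The dapi-container above has its own limits and requests injected as environment variables through the downward API's resourceFieldRef. A minimal pod that exercises the same path (names and values are illustrative; with only limits set, requests default to the same values):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("250m"),
						corev1.ResourceMemory: resource.MustParse("64Mi"),
					},
				},
				Env: []corev1.EnvVar{
					// resourceFieldRef resolves against this container's own
					// resources at pod start.
					{Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
					}},
					{Name: "MEMORY_REQUEST", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "requests.memory"},
					}},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}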
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":170,"skipped":2677,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:14:05.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Mar 8 11:14:05.349: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-8504 /api/v1/namespaces/watch-8504/configmaps/e2e-watch-test-resource-version c535e836-ae83-441c-ab91-5820251e9b75 21662 0 2020-03-08 11:14:05 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 8 11:14:05.349: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-8504 /api/v1/namespaces/watch-8504/configmaps/e2e-watch-test-resource-version c535e836-ae83-441c-ab91-5820251e9b75 21663 0 2020-03-08 11:14:05 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:14:05.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8504" for this suite. 
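Starting a watch at a known resourceVersion is what lets the test observe only the second modification and the delete. A sketch using the namespace and configmap name from the log; "21661" is a placeholder for the resourceVersion returned by the first update:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Placeholder resourceVersion: pass the value returned by the update you
	// want to start after; the watch replays every event newer than it.
	w, err := cs.CoreV1().ConfigMaps("watch-8504").Watch(context.TODO(), metav1.ListOptions{
		FieldSelector:   "metadata.name=e2e-watch-test-resource-version",
		ResourceVersion: "21661",
	})
	if err != nil {
		panic(err)
	}
	for ev := range w.ResultChan() {
		fmt.Println(ev.Type) // the log above sees MODIFIED, then DELETED
	}
}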
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":171,"skipped":2682,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:14:05.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 11:14:05.977: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 8 11:14:08.007: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262845, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262845, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262846, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719262845, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 11:14:11.035: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:14:11.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9042" for this suite. STEP: Destroying namespace "webhook-9042-markers" for this suite. 
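Registering the mutating pod webhook amounts to creating a single admissionregistration.k8s.io/v1 object that points at the service deployed above. A sketch using the service and namespace names from the log; the path, port, and CA bundle are placeholders, and in v1 the sideEffects and admissionReviewVersions fields are mandatory:

package main

import (
	"context"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	path := "/mutating-pods" // placeholder: must match the webhook server's handler
	port := int32(8443)      // placeholder service port
	sideEffects := admissionregistrationv1.SideEffectClassNone
	failurePolicy := admissionregistrationv1.Fail
	hook := &admissionregistrationv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-mutator.example.com"},
		Webhooks: []admissionregistrationv1.MutatingWebhook{{
			Name: "pod-mutator.example.com",
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-9042",
					Name:      "e2e-test-webhook",
					Path:      &path,
					Port:      &port,
				},
				CABundle: nil, // placeholder: PEM CA that signed the serving cert
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"pods"},
				},
			}},
			SideEffects:             &sideEffects,
			FailurePolicy:           &failurePolicy,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
	if _, err := cs.AdmissionregistrationV1().MutatingWebhookConfigurations().Create(context.TODO(), hook, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}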
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.899 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":172,"skipped":2692,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:14:11.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-1898, will wait for the garbage collector to delete the pods Mar 8 11:14:15.451: INFO: Deleting Job.batch foo took: 5.250233ms Mar 8 11:14:15.551: INFO: Terminating Job.batch foo pods took: 100.279479ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:14:47.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1898" for this suite. 
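Deleting the Job and letting the garbage collector take the pods comes down to the propagation policy on the delete call. The log does not show which policy the framework passes here; the sketch below uses Background, under which the GC removes dependents after the owner is gone:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Background propagation deletes the Job object first and leaves the
	// pods to the garbage collector, which is what the log's wait observes.
	policy := metav1.DeletePropagationBackground
	if err := cs.BatchV1().Jobs("job-1898").Delete(context.TODO(), "foo",
		metav1.DeleteOptions{PropagationPolicy: &policy}); err != nil {
		panic(err)
	}
}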
• [SLOW TEST:36.591 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":173,"skipped":2708,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:14:47.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:14:47.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9681" for this suite. 
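The discovery assertions above are two reads: the root /apis document must list the group, and the group/version document must list the resource. A sketch of both via client-go's discovery client:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Step 1: the group must appear in the root /apis document.
	groups, err := cs.Discovery().ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		if g.Name == "apiextensions.k8s.io" {
			fmt.Println("found group, preferred version:", g.PreferredVersion.GroupVersion)
		}
	}
	// Step 2: the resource must appear in the group/version document.
	res, err := cs.Discovery().ServerResourcesForGroupVersion("apiextensions.k8s.io/v1")
	if err != nil {
		panic(err)
	}
	for _, r := range res.APIResources {
		if r.Name == "customresourcedefinitions" {
			fmt.Println("found resource:", r.Name)
		}
	}
}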
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":174,"skipped":2717,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:14:47.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 11:15:08.071: INFO: Container started at 2020-03-08 11:14:49 +0000 UTC, pod became ready at 2020-03-08 11:15:07 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:15:08.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3719" for this suite. • [SLOW TEST:20.089 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":175,"skipped":2751,"failed":0} [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:15:08.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 8 11:15:10.687: INFO: Successfully updated pod "annotationupdate39d854a9-095d-46a1-a7f9-61eea92ecb13" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:15:12.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "projected-7405" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":176,"skipped":2751,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:15:12.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition Mar 8 11:15:12.809: INFO: Waiting up to 5m0s for pod "var-expansion-79594de0-02a4-434b-9bc6-8e3855d1dfbc" in namespace "var-expansion-5858" to be "success or failure" Mar 8 11:15:12.826: INFO: Pod "var-expansion-79594de0-02a4-434b-9bc6-8e3855d1dfbc": Phase="Pending", Reason="", readiness=false. Elapsed: 17.603852ms Mar 8 11:15:14.840: INFO: Pod "var-expansion-79594de0-02a4-434b-9bc6-8e3855d1dfbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.031527229s STEP: Saw pod success Mar 8 11:15:14.840: INFO: Pod "var-expansion-79594de0-02a4-434b-9bc6-8e3855d1dfbc" satisfied condition "success or failure" Mar 8 11:15:14.844: INFO: Trying to get logs from node kind-control-plane pod var-expansion-79594de0-02a4-434b-9bc6-8e3855d1dfbc container dapi-container: STEP: delete the pod Mar 8 11:15:14.877: INFO: Waiting for pod var-expansion-79594de0-02a4-434b-9bc6-8e3855d1dfbc to disappear Mar 8 11:15:14.885: INFO: Pod var-expansion-79594de0-02a4-434b-9bc6-8e3855d1dfbc no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:15:14.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5858" for this suite. 
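Env composition works because the kubelet expands $(VAR) references against variables defined earlier in the same env list; undefined references are left verbatim. A minimal pod spec exercising it (names and values are illustrative):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					{Name: "FOO", Value: "foo-value"},
					{Name: "BAR", Value: "bar-value"},
					// $(FOO) and $(BAR) expand because both are defined
					// earlier in this list.
					{Name: "FOOBAR", Value: "$(FOO);;$(BAR)"},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}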
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":177,"skipped":2771,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:15:14.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 8 11:15:14.962: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1b4f99b5-9f54-4da6-914e-78f44472a378" in namespace "projected-4014" to be "success or failure" Mar 8 11:15:14.965: INFO: Pod "downwardapi-volume-1b4f99b5-9f54-4da6-914e-78f44472a378": Phase="Pending", Reason="", readiness=false. Elapsed: 2.395425ms Mar 8 11:15:16.969: INFO: Pod "downwardapi-volume-1b4f99b5-9f54-4da6-914e-78f44472a378": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00682167s STEP: Saw pod success Mar 8 11:15:16.969: INFO: Pod "downwardapi-volume-1b4f99b5-9f54-4da6-914e-78f44472a378" satisfied condition "success or failure" Mar 8 11:15:16.972: INFO: Trying to get logs from node kind-control-plane pod downwardapi-volume-1b4f99b5-9f54-4da6-914e-78f44472a378 container client-container: STEP: delete the pod Mar 8 11:15:17.003: INFO: Waiting for pod downwardapi-volume-1b4f99b5-9f54-4da6-914e-78f44472a378 to disappear Mar 8 11:15:17.011: INFO: Pod downwardapi-volume-1b4f99b5-9f54-4da6-914e-78f44472a378 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:15:17.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4014" for this suite. 
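The projected downwardAPI volume above surfaces the container's CPU limit as a file rather than an env var. A minimal equivalent pod (names are illustrative; with the default divisor of 1 the file holds the limit rounded up to whole cores):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("250m")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "cpu_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.cpu",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}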
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":178,"skipped":2772,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:15:17.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4517.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-4517.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4517.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-4517.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4517.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4517.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-4517.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4517.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-4517.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4517.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 8 11:15:21.154: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:21.157: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:21.160: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:21.163: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:21.172: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:21.175: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:21.177: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:21.180: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:21.186: INFO: Lookups using dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4517.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4517.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local jessie_udp@dns-test-service-2.dns-4517.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4517.svc.cluster.local] Mar 8 11:15:26.190: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods 
dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:26.199: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:26.202: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:26.205: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:26.215: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:26.218: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:26.221: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:26.225: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:26.231: INFO: Lookups using dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4517.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4517.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local jessie_udp@dns-test-service-2.dns-4517.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4517.svc.cluster.local] Mar 8 11:15:31.190: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:31.193: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:31.196: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:31.200: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4517.svc.cluster.local from pod 
dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:31.210: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:31.213: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:31.216: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:31.219: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:31.225: INFO: Lookups using dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4517.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4517.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local jessie_udp@dns-test-service-2.dns-4517.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4517.svc.cluster.local] Mar 8 11:15:36.190: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:36.193: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:36.197: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:36.200: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:36.209: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:36.217: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods 
dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:36.221: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:36.224: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:36.230: INFO: Lookups using dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4517.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4517.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local jessie_udp@dns-test-service-2.dns-4517.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4517.svc.cluster.local] Mar 8 11:15:41.190: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:41.193: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:41.196: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:41.199: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:41.208: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:41.210: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:41.213: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:41.216: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:41.221: INFO: Lookups using dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b failed for: 
[wheezy_udp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4517.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4517.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local jessie_udp@dns-test-service-2.dns-4517.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4517.svc.cluster.local] Mar 8 11:15:46.191: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:46.193: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:46.196: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:46.199: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:46.208: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:46.210: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:46.213: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:46.215: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4517.svc.cluster.local from pod dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b: the server could not find the requested resource (get pods dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b) Mar 8 11:15:46.221: INFO: Lookups using dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4517.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4517.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4517.svc.cluster.local jessie_udp@dns-test-service-2.dns-4517.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4517.svc.cluster.local] Mar 8 11:15:51.228: INFO: DNS probes using dns-4517/dns-test-7a14a77b-ff6e-437d-86fa-7f8ab8848c2b succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:15:51.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4517" for this suite. • [SLOW TEST:34.365 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":179,"skipped":2802,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:15:51.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6990.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6990.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6990.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6990.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 8 11:15:55.475: INFO: DNS probes using dns-test-02cecbc7-6ad6-4212-b193-8de448add4ec succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6990.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6990.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6990.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6990.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 8 11:15:59.568: INFO: File wheezy_udp@dns-test-service-3.dns-6990.svc.cluster.local from pod dns-6990/dns-test-e0a6e720-8a7e-4c77-955d-c1f02f38af1d contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 8 11:15:59.571: INFO: File jessie_udp@dns-test-service-3.dns-6990.svc.cluster.local from pod dns-6990/dns-test-e0a6e720-8a7e-4c77-955d-c1f02f38af1d contains 'foo.example.com. ' instead of 'bar.example.com.' 
Mar 8 11:15:59.571: INFO: Lookups using dns-6990/dns-test-e0a6e720-8a7e-4c77-955d-c1f02f38af1d failed for: [wheezy_udp@dns-test-service-3.dns-6990.svc.cluster.local jessie_udp@dns-test-service-3.dns-6990.svc.cluster.local] Mar 8 11:16:04.577: INFO: File wheezy_udp@dns-test-service-3.dns-6990.svc.cluster.local from pod dns-6990/dns-test-e0a6e720-8a7e-4c77-955d-c1f02f38af1d contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 8 11:16:04.581: INFO: File jessie_udp@dns-test-service-3.dns-6990.svc.cluster.local from pod dns-6990/dns-test-e0a6e720-8a7e-4c77-955d-c1f02f38af1d contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 8 11:16:04.581: INFO: Lookups using dns-6990/dns-test-e0a6e720-8a7e-4c77-955d-c1f02f38af1d failed for: [wheezy_udp@dns-test-service-3.dns-6990.svc.cluster.local jessie_udp@dns-test-service-3.dns-6990.svc.cluster.local] Mar 8 11:16:09.576: INFO: File wheezy_udp@dns-test-service-3.dns-6990.svc.cluster.local from pod dns-6990/dns-test-e0a6e720-8a7e-4c77-955d-c1f02f38af1d contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 8 11:16:09.580: INFO: File jessie_udp@dns-test-service-3.dns-6990.svc.cluster.local from pod dns-6990/dns-test-e0a6e720-8a7e-4c77-955d-c1f02f38af1d contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 8 11:16:09.580: INFO: Lookups using dns-6990/dns-test-e0a6e720-8a7e-4c77-955d-c1f02f38af1d failed for: [wheezy_udp@dns-test-service-3.dns-6990.svc.cluster.local jessie_udp@dns-test-service-3.dns-6990.svc.cluster.local] Mar 8 11:16:14.578: INFO: File wheezy_udp@dns-test-service-3.dns-6990.svc.cluster.local from pod dns-6990/dns-test-e0a6e720-8a7e-4c77-955d-c1f02f38af1d contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 8 11:16:14.581: INFO: File jessie_udp@dns-test-service-3.dns-6990.svc.cluster.local from pod dns-6990/dns-test-e0a6e720-8a7e-4c77-955d-c1f02f38af1d contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 8 11:16:14.581: INFO: Lookups using dns-6990/dns-test-e0a6e720-8a7e-4c77-955d-c1f02f38af1d failed for: [wheezy_udp@dns-test-service-3.dns-6990.svc.cluster.local jessie_udp@dns-test-service-3.dns-6990.svc.cluster.local] Mar 8 11:16:19.576: INFO: File wheezy_udp@dns-test-service-3.dns-6990.svc.cluster.local from pod dns-6990/dns-test-e0a6e720-8a7e-4c77-955d-c1f02f38af1d contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 8 11:16:19.579: INFO: File jessie_udp@dns-test-service-3.dns-6990.svc.cluster.local from pod dns-6990/dns-test-e0a6e720-8a7e-4c77-955d-c1f02f38af1d contains 'foo.example.com. ' instead of 'bar.example.com.' 
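(For reference: the probe rounds above and below are polling for a CNAME change on an ExternalName service. A minimal manifest of the kind this test exercises might look like the following sketch; the name and namespace are taken from the log, the rest is an assumption based on the behavior shown.)

apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3        # name as it appears in the queries above
  namespace: dns-6990
spec:
  type: ExternalName
  externalName: foo.example.com   # later changed to bar.example.com by the test

Changing spec.externalName is what the dig loops are waiting to observe; the repeated "contains 'foo.example.com.' instead of 'bar.example.com.'" entries simply show the window before the cluster DNS starts serving the updated CNAME.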
Mar 8 11:16:19.579: INFO: Lookups using dns-6990/dns-test-e0a6e720-8a7e-4c77-955d-c1f02f38af1d failed for: [wheezy_udp@dns-test-service-3.dns-6990.svc.cluster.local jessie_udp@dns-test-service-3.dns-6990.svc.cluster.local] Mar 8 11:16:24.586: INFO: DNS probes using dns-test-e0a6e720-8a7e-4c77-955d-c1f02f38af1d succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6990.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-6990.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6990.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-6990.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 8 11:16:28.791: INFO: DNS probes using dns-test-95b1867c-7ae9-4a67-94c5-937c193d93db succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:16:28.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6990" for this suite. • [SLOW TEST:37.496 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":180,"skipped":2820,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:16:28.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-af352443-d277-45b3-bd1f-2b15ddd570e3 in namespace container-probe-8189 Mar 8 11:16:32.939: INFO: Started pod liveness-af352443-d277-45b3-bd1f-2b15ddd570e3 in namespace container-probe-8189 STEP: checking the pod's current state and verifying that restartCount is present Mar 8 11:16:32.942: INFO: Initial restart count of pod liveness-af352443-d277-45b3-bd1f-2b15ddd570e3 is 0 Mar 8 11:16:51.018: INFO: Restart count of pod container-probe-8189/liveness-af352443-d277-45b3-bd1f-2b15ddd570e3 is now 1 (18.075734239s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 
Mar 8 11:16:51.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8189" for this suite.
• [SLOW TEST:22.185 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":181,"skipped":2834,"failed":0}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 11:16:51.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating all guestbook components
Mar 8 11:16:51.122: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend
Mar 8 11:16:51.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8597'
Mar 8 11:16:51.526: INFO: stderr: ""
Mar 8 11:16:51.526: INFO: stdout: "service/agnhost-slave created\n"
Mar 8 11:16:51.526: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend
Mar 8 11:16:51.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8597'
Mar 8 11:16:51.774: INFO: stderr: ""
Mar 8 11:16:51.774: INFO: stdout: "service/agnhost-master created\n"
Mar 8 11:16:51.774: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Mar 8 11:16:51.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8597'
Mar 8 11:16:52.102: INFO: stderr: ""
Mar 8 11:16:52.102: INFO: stdout: "service/frontend created\n"
Mar 8 11:16:52.102: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
Mar 8 11:16:52.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8597'
Mar 8 11:16:52.375: INFO: stderr: ""
Mar 8 11:16:52.375: INFO: stdout: "deployment.apps/frontend created\n"
Mar 8 11:16:52.375: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Mar 8 11:16:52.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8597'
Mar 8 11:16:52.667: INFO: stderr: ""
Mar 8 11:16:52.667: INFO: stdout: "deployment.apps/agnhost-master created\n"
Mar 8 11:16:52.667: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Mar 8 11:16:52.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8597'
Mar 8 11:16:52.930: INFO: stderr: ""
Mar 8 11:16:52.930: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Mar 8 11:16:52.930: INFO: Waiting for all frontend pods to be Running.
Mar 8 11:16:57.981: INFO: Waiting for frontend to serve content.
Mar 8 11:16:57.990: INFO: Trying to add a new entry to the guestbook.
Mar 8 11:16:58.001: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Mar 8 11:16:58.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8597'
Mar 8 11:16:58.187: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 8 11:16:58.187: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Mar 8 11:16:58.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8597'
Mar 8 11:16:58.367: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" Mar 8 11:16:58.367: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Mar 8 11:16:58.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8597' Mar 8 11:16:58.501: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 8 11:16:58.501: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 8 11:16:58.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8597' Mar 8 11:16:58.597: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 8 11:16:58.597: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 8 11:16:58.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8597' Mar 8 11:16:58.710: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 8 11:16:58.710: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Mar 8 11:16:58.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8597' Mar 8 11:16:58.860: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 8 11:16:58.860: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:16:58.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8597" for this suite. • [SLOW TEST:7.824 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:385 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":182,"skipped":2842,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:16:58.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:17:10.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3849" for this suite. • [SLOW TEST:11.146 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":183,"skipped":2865,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:17:10.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 8 11:17:10.090: INFO: Waiting up to 5m0s for pod "downward-api-97683c8c-6bda-4776-94e0-0360a98abc2d" in namespace "downward-api-4203" to be "success or failure" Mar 8 11:17:10.123: INFO: Pod "downward-api-97683c8c-6bda-4776-94e0-0360a98abc2d": Phase="Pending", Reason="", readiness=false. Elapsed: 32.840688ms Mar 8 11:17:12.127: INFO: Pod "downward-api-97683c8c-6bda-4776-94e0-0360a98abc2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036699912s Mar 8 11:17:14.130: INFO: Pod "downward-api-97683c8c-6bda-4776-94e0-0360a98abc2d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.040142615s STEP: Saw pod success Mar 8 11:17:14.131: INFO: Pod "downward-api-97683c8c-6bda-4776-94e0-0360a98abc2d" satisfied condition "success or failure" Mar 8 11:17:14.133: INFO: Trying to get logs from node kind-control-plane pod downward-api-97683c8c-6bda-4776-94e0-0360a98abc2d container dapi-container: STEP: delete the pod Mar 8 11:17:14.198: INFO: Waiting for pod downward-api-97683c8c-6bda-4776-94e0-0360a98abc2d to disappear Mar 8 11:17:14.202: INFO: Pod downward-api-97683c8c-6bda-4776-94e0-0360a98abc2d no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:17:14.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4203" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":184,"skipped":2883,"failed":0} SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:17:14.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-q7vx STEP: Creating a pod to test atomic-volume-subpath Mar 8 11:17:14.276: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-q7vx" in namespace "subpath-8681" to be "success or failure" Mar 8 11:17:14.280: INFO: Pod "pod-subpath-test-configmap-q7vx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.001011ms Mar 8 11:17:16.291: INFO: Pod "pod-subpath-test-configmap-q7vx": Phase="Running", Reason="", readiness=true. Elapsed: 2.015068023s Mar 8 11:17:18.294: INFO: Pod "pod-subpath-test-configmap-q7vx": Phase="Running", Reason="", readiness=true. Elapsed: 4.018536789s Mar 8 11:17:20.301: INFO: Pod "pod-subpath-test-configmap-q7vx": Phase="Running", Reason="", readiness=true. Elapsed: 6.024953172s Mar 8 11:17:22.304: INFO: Pod "pod-subpath-test-configmap-q7vx": Phase="Running", Reason="", readiness=true. Elapsed: 8.028386491s Mar 8 11:17:24.308: INFO: Pod "pod-subpath-test-configmap-q7vx": Phase="Running", Reason="", readiness=true. Elapsed: 10.032090663s Mar 8 11:17:26.311: INFO: Pod "pod-subpath-test-configmap-q7vx": Phase="Running", Reason="", readiness=true. Elapsed: 12.035497259s Mar 8 11:17:28.315: INFO: Pod "pod-subpath-test-configmap-q7vx": Phase="Running", Reason="", readiness=true. Elapsed: 14.039688589s Mar 8 11:17:30.319: INFO: Pod "pod-subpath-test-configmap-q7vx": Phase="Running", Reason="", readiness=true. Elapsed: 16.042907197s Mar 8 11:17:32.325: INFO: Pod "pod-subpath-test-configmap-q7vx": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.049633789s Mar 8 11:17:34.329: INFO: Pod "pod-subpath-test-configmap-q7vx": Phase="Running", Reason="", readiness=true. Elapsed: 20.053185098s Mar 8 11:17:36.332: INFO: Pod "pod-subpath-test-configmap-q7vx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.056560815s STEP: Saw pod success Mar 8 11:17:36.332: INFO: Pod "pod-subpath-test-configmap-q7vx" satisfied condition "success or failure" Mar 8 11:17:36.335: INFO: Trying to get logs from node kind-control-plane pod pod-subpath-test-configmap-q7vx container test-container-subpath-configmap-q7vx: STEP: delete the pod Mar 8 11:17:36.389: INFO: Waiting for pod pod-subpath-test-configmap-q7vx to disappear Mar 8 11:17:36.404: INFO: Pod pod-subpath-test-configmap-q7vx no longer exists STEP: Deleting pod pod-subpath-test-configmap-q7vx Mar 8 11:17:36.404: INFO: Deleting pod "pod-subpath-test-configmap-q7vx" in namespace "subpath-8681" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:17:36.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8681" for this suite. • [SLOW TEST:22.206 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":185,"skipped":2887,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:17:36.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Mar 8 11:17:36.459: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Mar 8 11:17:43.502: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:17:43.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4775" for this suite. 
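(The "deleting the pod gracefully" step above exercises the pod termination grace period and the watch-based DELETED event. Roughly the same sequence can be reproduced by hand; the pod name here is illustrative, only the namespace comes from the log.)

kubectl delete pod example-pod --namespace=pods-4775 --grace-period=30
kubectl get pods --namespace=pods-4775 --watch    # the DELETED event is what the test waits for

Thirty seconds is also the default grace period, so the flag is shown only to make the semantics explicit.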
• [SLOW TEST:7.094 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":186,"skipped":2920,"failed":0} SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:17:43.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 8 11:17:43.580: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:17:47.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7125" for this suite. •{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":187,"skipped":2923,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:17:47.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args Mar 8 11:17:47.553: INFO: Waiting up to 5m0s for pod "var-expansion-de635d33-444d-4946-8528-4b6e36e6b2bb" in namespace "var-expansion-6825" to be "success or failure" Mar 8 11:17:47.557: INFO: Pod "var-expansion-de635d33-444d-4946-8528-4b6e36e6b2bb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.719006ms Mar 8 11:17:49.561: INFO: Pod "var-expansion-de635d33-444d-4946-8528-4b6e36e6b2bb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.008538652s STEP: Saw pod success Mar 8 11:17:49.561: INFO: Pod "var-expansion-de635d33-444d-4946-8528-4b6e36e6b2bb" satisfied condition "success or failure" Mar 8 11:17:49.563: INFO: Trying to get logs from node kind-control-plane pod var-expansion-de635d33-444d-4946-8528-4b6e36e6b2bb container dapi-container: STEP: delete the pod Mar 8 11:17:49.597: INFO: Waiting for pod var-expansion-de635d33-444d-4946-8528-4b6e36e6b2bb to disappear Mar 8 11:17:49.608: INFO: Pod var-expansion-de635d33-444d-4946-8528-4b6e36e6b2bb no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:17:49.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6825" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":188,"skipped":2979,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:17:49.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-zdfn STEP: Creating a pod to test atomic-volume-subpath Mar 8 11:17:49.693: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-zdfn" in namespace "subpath-1022" to be "success or failure" Mar 8 11:17:49.698: INFO: Pod "pod-subpath-test-secret-zdfn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.256954ms Mar 8 11:17:51.701: INFO: Pod "pod-subpath-test-secret-zdfn": Phase="Running", Reason="", readiness=true. Elapsed: 2.007685314s Mar 8 11:17:53.704: INFO: Pod "pod-subpath-test-secret-zdfn": Phase="Running", Reason="", readiness=true. Elapsed: 4.011156507s Mar 8 11:17:55.708: INFO: Pod "pod-subpath-test-secret-zdfn": Phase="Running", Reason="", readiness=true. Elapsed: 6.014712825s Mar 8 11:17:57.711: INFO: Pod "pod-subpath-test-secret-zdfn": Phase="Running", Reason="", readiness=true. Elapsed: 8.018225401s Mar 8 11:17:59.715: INFO: Pod "pod-subpath-test-secret-zdfn": Phase="Running", Reason="", readiness=true. Elapsed: 10.021703784s Mar 8 11:18:01.719: INFO: Pod "pod-subpath-test-secret-zdfn": Phase="Running", Reason="", readiness=true. Elapsed: 12.02545709s Mar 8 11:18:03.722: INFO: Pod "pod-subpath-test-secret-zdfn": Phase="Running", Reason="", readiness=true. Elapsed: 14.029007861s Mar 8 11:18:05.726: INFO: Pod "pod-subpath-test-secret-zdfn": Phase="Running", Reason="", readiness=true. Elapsed: 16.03260269s Mar 8 11:18:07.729: INFO: Pod "pod-subpath-test-secret-zdfn": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.03607943s Mar 8 11:18:09.734: INFO: Pod "pod-subpath-test-secret-zdfn": Phase="Running", Reason="", readiness=true. Elapsed: 20.040564022s Mar 8 11:18:11.737: INFO: Pod "pod-subpath-test-secret-zdfn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.043978249s STEP: Saw pod success Mar 8 11:18:11.737: INFO: Pod "pod-subpath-test-secret-zdfn" satisfied condition "success or failure" Mar 8 11:18:11.740: INFO: Trying to get logs from node kind-control-plane pod pod-subpath-test-secret-zdfn container test-container-subpath-secret-zdfn: STEP: delete the pod Mar 8 11:18:11.777: INFO: Waiting for pod pod-subpath-test-secret-zdfn to disappear Mar 8 11:18:11.784: INFO: Pod pod-subpath-test-secret-zdfn no longer exists STEP: Deleting pod pod-subpath-test-secret-zdfn Mar 8 11:18:11.784: INFO: Deleting pod "pod-subpath-test-secret-zdfn" in namespace "subpath-1022" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:18:11.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1022" for this suite. • [SLOW TEST:22.200 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":189,"skipped":2994,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:18:11.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Mar 8 11:18:11.860: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:18:29.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8102" for this suite. 
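(The "rename a version" step above edits the versions list of a multi-version CRD and then re-checks the published OpenAPI document. A minimal sketch of such a CRD follows; the group, kind, and version names are illustrative, not the randomized ones the test generates.)

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-foos.crd-publish-openapi-test.example.com
spec:
  group: crd-publish-openapi-test.example.com
  scope: Namespaced
  names:
    plural: e2e-test-foos
    kind: E2eTestFoo
  versions:
  - name: v2                # renamed from v1; the old name must disappear from the spec
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
  - name: v3                # "the other version", which must stay unchanged
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object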
• [SLOW TEST:17.776 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":190,"skipped":2999,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:18:29.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Mar 8 11:18:29.788: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Mar 8 11:18:41.051: INFO: >>> kubeConfig: /root/.kube/config Mar 8 11:18:43.079: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:18:55.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4985" for this suite. 
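(Both CRD variants above assert against the aggregated OpenAPI document. That document is not test-internal; on a cluster of this vintage it can be fetched directly, for example:)

kubectl get --raw /openapi/v2 | jq '.definitions | keys'    # jq is optional, for readability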
• [SLOW TEST:26.097 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":191,"skipped":3011,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:18:55.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Mar 8 11:18:55.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9695' Mar 8 11:18:57.858: INFO: stderr: "" Mar 8 11:18:57.858: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 8 11:18:58.878: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 11:18:58.878: INFO: Found 0 / 1 Mar 8 11:18:59.861: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 11:18:59.861: INFO: Found 1 / 1 Mar 8 11:18:59.861: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Mar 8 11:18:59.864: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 11:18:59.864: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 8 11:18:59.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-pv6st --namespace=kubectl-9695 -p {"metadata":{"annotations":{"x":"y"}}}' Mar 8 11:18:59.988: INFO: stderr: "" Mar 8 11:18:59.988: INFO: stdout: "pod/agnhost-master-pv6st patched\n" STEP: checking annotations Mar 8 11:18:59.991: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 11:18:59.991: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:18:59.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9695" for this suite. 
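(The patch issued above is a strategic-merge patch; annotation maps merge rather than replace, so the single key x=y is added without disturbing existing metadata. The equivalent manual invocation, using the pod name from the log:)

kubectl patch pod agnhost-master-pv6st --namespace=kubectl-9695 -p '{"metadata":{"annotations":{"x":"y"}}}'
kubectl get pod agnhost-master-pv6st --namespace=kubectl-9695 -o jsonpath='{.metadata.annotations.x}'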
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":192,"skipped":3039,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:18:59.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1576 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 8 11:19:00.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-3698' Mar 8 11:19:00.171: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 8 11:19:00.171: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1582 Mar 8 11:19:02.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-3698' Mar 8 11:19:02.326: INFO: stderr: "" Mar 8 11:19:02.326: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:19:02.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3698" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":193,"skipped":3052,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:19:02.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-8df726e3-8ab9-4990-8b2a-aa9e37813aaf STEP: Creating a pod to test consume configMaps Mar 8 11:19:02.409: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bab2adf1-2058-494b-9b04-1e55147bf62f" in namespace "projected-9140" to be "success or failure" Mar 8 11:19:02.412: INFO: Pod "pod-projected-configmaps-bab2adf1-2058-494b-9b04-1e55147bf62f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.727981ms Mar 8 11:19:04.416: INFO: Pod "pod-projected-configmaps-bab2adf1-2058-494b-9b04-1e55147bf62f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006902946s STEP: Saw pod success Mar 8 11:19:04.416: INFO: Pod "pod-projected-configmaps-bab2adf1-2058-494b-9b04-1e55147bf62f" satisfied condition "success or failure" Mar 8 11:19:04.419: INFO: Trying to get logs from node kind-control-plane pod pod-projected-configmaps-bab2adf1-2058-494b-9b04-1e55147bf62f container projected-configmap-volume-test: STEP: delete the pod Mar 8 11:19:04.437: INFO: Waiting for pod pod-projected-configmaps-bab2adf1-2058-494b-9b04-1e55147bf62f to disappear Mar 8 11:19:04.466: INFO: Pod pod-projected-configmaps-bab2adf1-2058-494b-9b04-1e55147bf62f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:19:04.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9140" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":194,"skipped":3058,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:19:04.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 8 11:19:04.546: INFO: Waiting up to 5m0s for pod "pod-10b49112-5014-4ca3-a751-945f5124c848" in namespace "emptydir-4785" to be "success or failure" Mar 8 11:19:04.552: INFO: Pod "pod-10b49112-5014-4ca3-a751-945f5124c848": Phase="Pending", Reason="", readiness=false. Elapsed: 6.647953ms Mar 8 11:19:06.556: INFO: Pod "pod-10b49112-5014-4ca3-a751-945f5124c848": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010650357s Mar 8 11:19:08.560: INFO: Pod "pod-10b49112-5014-4ca3-a751-945f5124c848": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014640515s STEP: Saw pod success Mar 8 11:19:08.560: INFO: Pod "pod-10b49112-5014-4ca3-a751-945f5124c848" satisfied condition "success or failure" Mar 8 11:19:08.563: INFO: Trying to get logs from node kind-control-plane pod pod-10b49112-5014-4ca3-a751-945f5124c848 container test-container: STEP: delete the pod Mar 8 11:19:08.601: INFO: Waiting for pod pod-10b49112-5014-4ca3-a751-945f5124c848 to disappear Mar 8 11:19:08.633: INFO: Pod pod-10b49112-5014-4ca3-a751-945f5124c848 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:19:08.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4785" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":195,"skipped":3080,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:19:08.641: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 8 11:19:09.631: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Mar 8 11:19:11.641: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719263149, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719263149, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719263149, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719263149, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 11:19:14.670: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 11:19:14.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:19:15.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-3886" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:7.349 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":196,"skipped":3099,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:19:15.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-2706b7fd-0080-463d-b57a-c9bd9acc3130 STEP: Creating a pod to test consume secrets Mar 8 11:19:16.082: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2e830d76-e93b-4822-b0c8-4a41b2bf49a6" in namespace "projected-8034" to be "success or failure" Mar 8 11:19:16.086: INFO: Pod "pod-projected-secrets-2e830d76-e93b-4822-b0c8-4a41b2bf49a6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.909581ms Mar 8 11:19:18.090: INFO: Pod "pod-projected-secrets-2e830d76-e93b-4822-b0c8-4a41b2bf49a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008187568s STEP: Saw pod success Mar 8 11:19:18.090: INFO: Pod "pod-projected-secrets-2e830d76-e93b-4822-b0c8-4a41b2bf49a6" satisfied condition "success or failure" Mar 8 11:19:18.093: INFO: Trying to get logs from node kind-control-plane pod pod-projected-secrets-2e830d76-e93b-4822-b0c8-4a41b2bf49a6 container projected-secret-volume-test: STEP: delete the pod Mar 8 11:19:18.126: INFO: Waiting for pod pod-projected-secrets-2e830d76-e93b-4822-b0c8-4a41b2bf49a6 to disappear Mar 8 11:19:18.134: INFO: Pod pod-projected-secrets-2e830d76-e93b-4822-b0c8-4a41b2bf49a6 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:19:18.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8034" for this suite. 
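The "with mappings" variant above differs from the plain secret-volume test only in that individual keys are remapped to new file paths inside the mount. The projection stanza looks roughly like this; the secret name and the key/path values are illustrative, not taken from this run.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo            # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/projected/new-path-data-1"]
    volumeMounts:
    - name: projected-secret
      mountPath: /etc/projected
      readOnly: true
  volumes:
  - name: projected-secret
    projected:
      sources:
      - secret:
          name: mysecret                 # must already exist in the namespace
          items:
          - key: data-1                  # original key in the Secret
            path: new-path-data-1        # remapped file name inside the volume
EOF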
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":197,"skipped":3125,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:19:18.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Mar 8 11:19:22.223: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-5396 PodName:pod-sharedvolume-4d007aa4-aba6-4a8f-b907-559fea0c6fc0 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 11:19:22.223: INFO: >>> kubeConfig: /root/.kube/config I0308 11:19:22.262153 6 log.go:172] (0xc001763130) (0xc0010f6320) Create stream I0308 11:19:22.262196 6 log.go:172] (0xc001763130) (0xc0010f6320) Stream added, broadcasting: 1 I0308 11:19:22.263938 6 log.go:172] (0xc001763130) Reply frame received for 1 I0308 11:19:22.263977 6 log.go:172] (0xc001763130) (0xc001a6e1e0) Create stream I0308 11:19:22.263991 6 log.go:172] (0xc001763130) (0xc001a6e1e0) Stream added, broadcasting: 3 I0308 11:19:22.264960 6 log.go:172] (0xc001763130) Reply frame received for 3 I0308 11:19:22.265000 6 log.go:172] (0xc001763130) (0xc000b16b40) Create stream I0308 11:19:22.265015 6 log.go:172] (0xc001763130) (0xc000b16b40) Stream added, broadcasting: 5 I0308 11:19:22.265887 6 log.go:172] (0xc001763130) Reply frame received for 5 I0308 11:19:22.337846 6 log.go:172] (0xc001763130) Data frame received for 5 I0308 11:19:22.337903 6 log.go:172] (0xc000b16b40) (5) Data frame handling I0308 11:19:22.337934 6 log.go:172] (0xc001763130) Data frame received for 3 I0308 11:19:22.337949 6 log.go:172] (0xc001a6e1e0) (3) Data frame handling I0308 11:19:22.337970 6 log.go:172] (0xc001a6e1e0) (3) Data frame sent I0308 11:19:22.337986 6 log.go:172] (0xc001763130) Data frame received for 3 I0308 11:19:22.337998 6 log.go:172] (0xc001a6e1e0) (3) Data frame handling I0308 11:19:22.339613 6 log.go:172] (0xc001763130) Data frame received for 1 I0308 11:19:22.339638 6 log.go:172] (0xc0010f6320) (1) Data frame handling I0308 11:19:22.339656 6 log.go:172] (0xc0010f6320) (1) Data frame sent I0308 11:19:22.339676 6 log.go:172] (0xc001763130) (0xc0010f6320) Stream removed, broadcasting: 1 I0308 11:19:22.339698 6 log.go:172] (0xc001763130) Go away received I0308 11:19:22.339834 6 log.go:172] (0xc001763130) (0xc0010f6320) Stream removed, broadcasting: 1 I0308 11:19:22.339862 6 log.go:172] (0xc001763130) (0xc001a6e1e0) Stream removed, broadcasting: 3 I0308 11:19:22.339875 6 log.go:172] (0xc001763130) (0xc000b16b40) Stream removed, broadcasting: 5 Mar 8 11:19:22.339: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:19:22.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5396" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":198,"skipped":3126,"failed":0} SSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:19:22.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 8 11:19:22.393: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:19:25.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8009" for this suite. •{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":199,"skipped":3131,"failed":0} SSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:19:25.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 11:19:25.820: FAIL: Conformance test suite needs a cluster with at least 2 nodes. 
Expected : 1 to be > : 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 Mar 8 11:19:25.826: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3069/daemonsets","resourceVersion":"23698"},"items":null} Mar 8 11:19:25.829: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3069/pods","resourceVersion":"23698"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 STEP: Collecting events from namespace "daemonsets-3069". STEP: Found 0 events. Mar 8 11:19:25.839: INFO: POD NODE PHASE GRACE CONDITIONS Mar 8 11:19:25.839: INFO: Mar 8 11:19:25.841: INFO: Logging node info for node kind-control-plane Mar 8 11:19:25.843: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane /api/v1/nodes/kind-control-plane fa196105-2440-4f06-b810-9b010a12d269 23418 0 2020-03-08 10:17:25 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134929522688 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134929522688 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-08 11:19:01 +0000 UTC,LastTransitionTime:2020-03-08 10:17:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-08 11:19:01 +0000 UTC,LastTransitionTime:2020-03-08 10:17:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-08 11:19:01 +0000 UTC,LastTransitionTime:2020-03-08 10:17:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-08 11:19:01 +0000 UTC,LastTransitionTime:2020-03-08 10:17:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.2,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:fb15ad632d6d4f17a6c81bd2460561b7,SystemUUID:3413a663-8564-42a4-9d35-4bc84ffe178b,BootID:3de0b5b8-8b8f-48d3-9705-cabccc881bdb,KernelVersion:4.4.0-142-generic,OSImage:Ubuntu 
19.10,ContainerRuntimeVersion:containerd://1.3.2,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3],SizeBytes:289997247,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.17.0],SizeBytes:144347953,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:132100734,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.17.0],SizeBytes:131180355,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.17.0],SizeBytes:111937841,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.5],SizeBytes:41705951,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.11],SizeBytes:36513375,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:17444032,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:16222606,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:4331310,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:1799936,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e 
gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:1791163,},ContainerImage{Names:[docker.io/library/busybox@sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a docker.io/library/busybox:latest],SizeBytes:764872,},ContainerImage{Names:[k8s.gcr.io/pause:3.1],SizeBytes:746479,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:599341,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:539309,},ContainerImage{Names:[docker.io/kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 docker.io/kubernetes/pause:latest],SizeBytes:74015,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 8 11:19:25.844: INFO: Logging kubelet events for node kind-control-plane Mar 8 11:19:25.846: INFO: Logging pods the kubelet thinks is on node kind-control-plane Mar 8 11:19:25.852: INFO: kube-controller-manager-kind-control-plane started at 2020-03-08 10:17:29 +0000 UTC (0+1 container statuses recorded) Mar 8 11:19:25.852: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 8 11:19:25.852: INFO: etcd-kind-control-plane started at 2020-03-08 10:17:29 +0000 UTC (0+1 container statuses recorded) Mar 8 11:19:25.852: INFO: Container etcd ready: true, restart count 0 Mar 8 11:19:25.852: INFO: kindnet-rznts started at 2020-03-08 10:17:45 +0000 UTC (0+1 container statuses recorded) Mar 8 11:19:25.852: INFO: Container kindnet-cni ready: true, restart count 0 Mar 8 11:19:25.852: INFO: pod-sharedvolume-4d007aa4-aba6-4a8f-b907-559fea0c6fc0 started at 2020-03-08 11:19:18 +0000 UTC (0+2 container statuses recorded) Mar 8 11:19:25.852: INFO: Container busybox-main-container ready: true, restart count 0 Mar 8 11:19:25.852: INFO: Container busybox-sub-container ready: false, restart count 0 Mar 8 11:19:25.852: INFO: pod-init-3b2ddfd0-4e6f-440c-a2e8-65454febde81 started at 2020-03-08 11:19:22 +0000 UTC (2+1 container statuses recorded) Mar 8 11:19:25.852: INFO: Init container init1 ready: true, restart count 0 Mar 8 11:19:25.852: INFO: Init container init2 ready: false, restart count 0 Mar 8 11:19:25.852: INFO: Container run1 ready: false, restart count 0 Mar 8 11:19:25.852: INFO: kube-proxy-9qrbc started at 2020-03-08 10:17:45 +0000 UTC (0+1 container statuses recorded) Mar 8 11:19:25.852: INFO: Container kube-proxy ready: true, restart count 0 Mar 8 11:19:25.852: INFO: local-path-provisioner-7745554f7f-5f2b8 started at 2020-03-08 10:17:49 +0000 UTC (0+1 container statuses recorded) Mar 8 11:19:25.852: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 8 11:19:25.852: INFO: coredns-6955765f44-8lfgq started at 2020-03-08 10:17:52 +0000 UTC (0+1 container statuses recorded) Mar 8 11:19:25.852: INFO: Container coredns ready: true, restart count 0 Mar 8 11:19:25.852: INFO: kube-apiserver-kind-control-plane started at 2020-03-08 10:17:29 +0000 UTC (0+1 container statuses recorded) Mar 8 11:19:25.852: INFO: Container kube-apiserver ready: true, restart count 0 Mar 8 11:19:25.852: INFO: coredns-6955765f44-2ncc6 started at 2020-03-08 10:17:49 
+0000 UTC (0+1 container statuses recorded) Mar 8 11:19:25.852: INFO: Container coredns ready: true, restart count 0 Mar 8 11:19:25.852: INFO: kube-scheduler-kind-control-plane started at 2020-03-08 10:17:29 +0000 UTC (0+1 container statuses recorded) Mar 8 11:19:25.852: INFO: Container kube-scheduler ready: true, restart count 0 W0308 11:19:25.855318 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 8 11:19:26.520: INFO: Latency metrics for node kind-control-plane Mar 8 11:19:26.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3069" for this suite. • Failure [0.801 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 11:19:25.820: Conformance test suite needs a cluster with at least 2 nodes. Expected : 1 to be > : 1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:417 ------------------------------ {"msg":"FAILED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":199,"skipped":3136,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:19:26.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-07100225-297e-4e63-851a-3eb60b71ea6d STEP: Creating a pod to test consume secrets Mar 8 11:19:26.606: INFO: Waiting up to 5m0s for pod "pod-secrets-f931db5a-f916-48d2-a038-eb63027b7a84" in namespace "secrets-9769" to be "success or failure" Mar 8 11:19:26.611: INFO: Pod "pod-secrets-f931db5a-f916-48d2-a038-eb63027b7a84": Phase="Pending", Reason="", readiness=false. Elapsed: 4.354258ms Mar 8 11:19:28.615: INFO: Pod "pod-secrets-f931db5a-f916-48d2-a038-eb63027b7a84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008299534s Mar 8 11:19:30.619: INFO: Pod "pod-secrets-f931db5a-f916-48d2-a038-eb63027b7a84": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012545113s STEP: Saw pod success Mar 8 11:19:30.619: INFO: Pod "pod-secrets-f931db5a-f916-48d2-a038-eb63027b7a84" satisfied condition "success or failure" Mar 8 11:19:30.622: INFO: Trying to get logs from node kind-control-plane pod pod-secrets-f931db5a-f916-48d2-a038-eb63027b7a84 container secret-volume-test: STEP: delete the pod Mar 8 11:19:30.643: INFO: Waiting for pod pod-secrets-f931db5a-f916-48d2-a038-eb63027b7a84 to disappear Mar 8 11:19:30.647: INFO: Pod pod-secrets-f931db5a-f916-48d2-a038-eb63027b7a84 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:19:30.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9769" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":200,"skipped":3186,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:19:30.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Mar 8 11:19:30.707: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7993 /api/v1/namespaces/watch-7993/configmaps/e2e-watch-test-configmap-a 85ec88f0-e215-4a99-8777-03b6c0338836 23748 0 2020-03-08 11:19:30 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 8 11:19:30.708: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7993 /api/v1/namespaces/watch-7993/configmaps/e2e-watch-test-configmap-a 85ec88f0-e215-4a99-8777-03b6c0338836 23748 0 2020-03-08 11:19:30 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Mar 8 11:19:40.716: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7993 /api/v1/namespaces/watch-7993/configmaps/e2e-watch-test-configmap-a 85ec88f0-e215-4a99-8777-03b6c0338836 23804 0 2020-03-08 11:19:30 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Mar 8 11:19:40.716: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7993 
/api/v1/namespaces/watch-7993/configmaps/e2e-watch-test-configmap-a 85ec88f0-e215-4a99-8777-03b6c0338836 23804 0 2020-03-08 11:19:30 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Mar 8 11:19:50.723: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7993 /api/v1/namespaces/watch-7993/configmaps/e2e-watch-test-configmap-a 85ec88f0-e215-4a99-8777-03b6c0338836 23832 0 2020-03-08 11:19:30 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 8 11:19:50.723: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7993 /api/v1/namespaces/watch-7993/configmaps/e2e-watch-test-configmap-a 85ec88f0-e215-4a99-8777-03b6c0338836 23832 0 2020-03-08 11:19:30 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Mar 8 11:20:00.729: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7993 /api/v1/namespaces/watch-7993/configmaps/e2e-watch-test-configmap-a 85ec88f0-e215-4a99-8777-03b6c0338836 23860 0 2020-03-08 11:19:30 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 8 11:20:00.730: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7993 /api/v1/namespaces/watch-7993/configmaps/e2e-watch-test-configmap-a 85ec88f0-e215-4a99-8777-03b6c0338836 23860 0 2020-03-08 11:19:30 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Mar 8 11:20:10.737: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7993 /api/v1/namespaces/watch-7993/configmaps/e2e-watch-test-configmap-b bae4805d-a3d5-453c-bbab-99ddb48a7121 23888 0 2020-03-08 11:20:10 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 8 11:20:10.737: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7993 /api/v1/namespaces/watch-7993/configmaps/e2e-watch-test-configmap-b bae4805d-a3d5-453c-bbab-99ddb48a7121 23888 0 2020-03-08 11:20:10 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Mar 8 11:20:20.743: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7993 /api/v1/namespaces/watch-7993/configmaps/e2e-watch-test-configmap-b bae4805d-a3d5-453c-bbab-99ddb48a7121 23914 0 2020-03-08 11:20:10 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 8 11:20:20.743: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7993 /api/v1/namespaces/watch-7993/configmaps/e2e-watch-test-configmap-b bae4805d-a3d5-453c-bbab-99ddb48a7121 23914 0 2020-03-08 11:20:10 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] 
[]},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:20:30.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7993" for this suite. • [SLOW TEST:60.098 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":201,"skipped":3236,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:20:30.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-5876 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 8 11:20:30.804: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 8 11:20:52.932: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.0.89 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5876 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 11:20:52.932: INFO: >>> kubeConfig: /root/.kube/config I0308 11:20:52.965712 6 log.go:172] (0xc006930f20) (0xc001de1900) Create stream I0308 11:20:52.965742 6 log.go:172] (0xc006930f20) (0xc001de1900) Stream added, broadcasting: 1 I0308 11:20:52.969735 6 log.go:172] (0xc006930f20) Reply frame received for 1 I0308 11:20:52.969786 6 log.go:172] (0xc006930f20) (0xc001ae4320) Create stream I0308 11:20:52.969804 6 log.go:172] (0xc006930f20) (0xc001ae4320) Stream added, broadcasting: 3 I0308 11:20:52.973911 6 log.go:172] (0xc006930f20) Reply frame received for 3 I0308 11:20:52.973959 6 log.go:172] (0xc006930f20) (0xc0010f63c0) Create stream I0308 11:20:52.973977 6 log.go:172] (0xc006930f20) (0xc0010f63c0) Stream added, broadcasting: 5 I0308 11:20:52.975069 6 log.go:172] (0xc006930f20) Reply frame received for 5 I0308 11:20:54.030604 6 log.go:172] (0xc006930f20) Data frame received for 5 I0308 11:20:54.030645 6 log.go:172] (0xc0010f63c0) (5) Data frame handling I0308 11:20:54.030669 6 log.go:172] (0xc006930f20) Data frame received for 3 I0308 11:20:54.030682 6 log.go:172] (0xc001ae4320) (3) Data frame handling I0308 11:20:54.030696 6 log.go:172] 
(0xc001ae4320) (3) Data frame sent I0308 11:20:54.030741 6 log.go:172] (0xc006930f20) Data frame received for 3 I0308 11:20:54.030750 6 log.go:172] (0xc001ae4320) (3) Data frame handling I0308 11:20:54.033208 6 log.go:172] (0xc006930f20) Data frame received for 1 I0308 11:20:54.033232 6 log.go:172] (0xc001de1900) (1) Data frame handling I0308 11:20:54.033245 6 log.go:172] (0xc001de1900) (1) Data frame sent I0308 11:20:54.033288 6 log.go:172] (0xc006930f20) (0xc001de1900) Stream removed, broadcasting: 1 I0308 11:20:54.033392 6 log.go:172] (0xc006930f20) (0xc001de1900) Stream removed, broadcasting: 1 I0308 11:20:54.033416 6 log.go:172] (0xc006930f20) (0xc001ae4320) Stream removed, broadcasting: 3 I0308 11:20:54.033621 6 log.go:172] (0xc006930f20) Go away received I0308 11:20:54.033651 6 log.go:172] (0xc006930f20) (0xc0010f63c0) Stream removed, broadcasting: 5 Mar 8 11:20:54.033: INFO: Found all expected endpoints: [netserver-0] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:20:54.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5876" for this suite. • [SLOW TEST:23.289 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":202,"skipped":3237,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:20:54.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-ab7a3222-a601-4683-b42b-8aa172d692c8 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:20:54.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4545" for this suite. 
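The empty-key ConfigMap case above never creates anything: the API server's validation rejects the object outright, which is exactly what the test asserts. The same failure can be provoked by hand; the name below is illustrative, and the exact error wording may vary by version.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-empty-key-demo
data:
  "": value-1                  # empty key: rejected by validation
EOF
# Expected result: an "Invalid value" validation error from the API server,
# and no ConfigMap object is created.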
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":203,"skipped":3259,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:20:54.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command Mar 8 11:20:54.188: INFO: Waiting up to 5m0s for pod "var-expansion-dcb8dad9-34c6-44ac-a041-8f3642b2cfcd" in namespace "var-expansion-5694" to be "success or failure" Mar 8 11:20:54.210: INFO: Pod "var-expansion-dcb8dad9-34c6-44ac-a041-8f3642b2cfcd": Phase="Pending", Reason="", readiness=false. Elapsed: 22.117929ms Mar 8 11:20:56.217: INFO: Pod "var-expansion-dcb8dad9-34c6-44ac-a041-8f3642b2cfcd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.029156785s STEP: Saw pod success Mar 8 11:20:56.217: INFO: Pod "var-expansion-dcb8dad9-34c6-44ac-a041-8f3642b2cfcd" satisfied condition "success or failure" Mar 8 11:20:56.222: INFO: Trying to get logs from node kind-control-plane pod var-expansion-dcb8dad9-34c6-44ac-a041-8f3642b2cfcd container dapi-container: STEP: delete the pod Mar 8 11:20:56.278: INFO: Waiting for pod var-expansion-dcb8dad9-34c6-44ac-a041-8f3642b2cfcd to disappear Mar 8 11:20:56.283: INFO: Pod var-expansion-dcb8dad9-34c6-44ac-a041-8f3642b2cfcd no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:20:56.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5694" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":204,"skipped":3264,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:20:56.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0308 11:20:57.381872 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 8 11:20:57.381: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:20:57.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-692" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":205,"skipped":3300,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:20:57.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 8 11:20:57.530: INFO: Waiting up to 5m0s for pod "pod-eaf1b827-d68d-43e3-8bef-527c685fb1ed" in namespace "emptydir-9008" to be "success or failure" Mar 8 11:20:57.581: INFO: Pod "pod-eaf1b827-d68d-43e3-8bef-527c685fb1ed": Phase="Pending", Reason="", readiness=false. Elapsed: 50.897711ms Mar 8 11:20:59.585: INFO: Pod "pod-eaf1b827-d68d-43e3-8bef-527c685fb1ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.054087837s STEP: Saw pod success Mar 8 11:20:59.585: INFO: Pod "pod-eaf1b827-d68d-43e3-8bef-527c685fb1ed" satisfied condition "success or failure" Mar 8 11:20:59.587: INFO: Trying to get logs from node kind-control-plane pod pod-eaf1b827-d68d-43e3-8bef-527c685fb1ed container test-container: STEP: delete the pod Mar 8 11:20:59.605: INFO: Waiting for pod pod-eaf1b827-d68d-43e3-8bef-527c685fb1ed to disappear Mar 8 11:20:59.609: INFO: Pod pod-eaf1b827-d68d-43e3-8bef-527c685fb1ed no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:20:59.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9008" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":206,"skipped":3316,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:20:59.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-b01a305c-7e80-4146-a4fa-c44799993be3 STEP: Creating a pod to test consume configMaps Mar 8 11:20:59.721: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d87e445e-2b36-4c43-8a54-02b0ab85df13" in namespace "projected-8573" to be "success or failure" Mar 8 11:20:59.730: INFO: Pod "pod-projected-configmaps-d87e445e-2b36-4c43-8a54-02b0ab85df13": Phase="Pending", Reason="", readiness=false. Elapsed: 8.981171ms Mar 8 11:21:01.733: INFO: Pod "pod-projected-configmaps-d87e445e-2b36-4c43-8a54-02b0ab85df13": Phase="Running", Reason="", readiness=true. Elapsed: 2.012435901s Mar 8 11:21:03.737: INFO: Pod "pod-projected-configmaps-d87e445e-2b36-4c43-8a54-02b0ab85df13": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015764643s STEP: Saw pod success Mar 8 11:21:03.737: INFO: Pod "pod-projected-configmaps-d87e445e-2b36-4c43-8a54-02b0ab85df13" satisfied condition "success or failure" Mar 8 11:21:03.740: INFO: Trying to get logs from node kind-control-plane pod pod-projected-configmaps-d87e445e-2b36-4c43-8a54-02b0ab85df13 container projected-configmap-volume-test: STEP: delete the pod Mar 8 11:21:03.775: INFO: Waiting for pod pod-projected-configmaps-d87e445e-2b36-4c43-8a54-02b0ab85df13 to disappear Mar 8 11:21:03.803: INFO: Pod pod-projected-configmaps-d87e445e-2b36-4c43-8a54-02b0ab85df13 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:21:03.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8573" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":207,"skipped":3335,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} S ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:21:03.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-7490 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-7490 STEP: creating replication controller externalsvc in namespace services-7490 I0308 11:21:03.958253 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-7490, replica count: 2 I0308 11:21:07.008839 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Mar 8 11:21:07.052: INFO: Creating new exec pod Mar 8 11:21:09.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7490 execpod6x792 -- /bin/sh -x -c nslookup nodeport-service' Mar 8 11:21:09.353: INFO: stderr: "I0308 11:21:09.269512 3108 log.go:172] (0xc0001046e0) (0xc00073b5e0) Create stream\nI0308 11:21:09.269568 3108 log.go:172] (0xc0001046e0) (0xc00073b5e0) Stream added, broadcasting: 1\nI0308 11:21:09.271918 3108 log.go:172] (0xc0001046e0) Reply frame received for 1\nI0308 11:21:09.271972 3108 log.go:172] (0xc0001046e0) (0xc00068db80) Create stream\nI0308 11:21:09.271994 3108 log.go:172] (0xc0001046e0) (0xc00068db80) Stream added, broadcasting: 3\nI0308 11:21:09.272871 3108 log.go:172] (0xc0001046e0) Reply frame received for 3\nI0308 11:21:09.272922 3108 log.go:172] (0xc0001046e0) (0xc0008bc000) Create stream\nI0308 11:21:09.272942 3108 log.go:172] (0xc0001046e0) (0xc0008bc000) Stream added, broadcasting: 5\nI0308 11:21:09.273906 3108 log.go:172] (0xc0001046e0) Reply frame received for 5\nI0308 11:21:09.337575 3108 log.go:172] (0xc0001046e0) Data frame received for 5\nI0308 11:21:09.337601 3108 log.go:172] (0xc0008bc000) (5) Data frame handling\nI0308 11:21:09.337623 3108 log.go:172] (0xc0008bc000) (5) Data frame sent\n+ nslookup nodeport-service\nI0308 11:21:09.346715 3108 log.go:172] (0xc0001046e0) Data frame received for 3\nI0308 11:21:09.346738 3108 log.go:172] (0xc00068db80) (3) Data frame handling\nI0308 11:21:09.346754 3108 log.go:172] (0xc00068db80) (3) Data frame sent\nI0308 11:21:09.347784 3108 log.go:172] (0xc0001046e0) Data frame received for 
3\nI0308 11:21:09.347809 3108 log.go:172] (0xc00068db80) (3) Data frame handling\nI0308 11:21:09.347826 3108 log.go:172] (0xc00068db80) (3) Data frame sent\nI0308 11:21:09.348538 3108 log.go:172] (0xc0001046e0) Data frame received for 5\nI0308 11:21:09.348564 3108 log.go:172] (0xc0008bc000) (5) Data frame handling\nI0308 11:21:09.348592 3108 log.go:172] (0xc0001046e0) Data frame received for 3\nI0308 11:21:09.348606 3108 log.go:172] (0xc00068db80) (3) Data frame handling\nI0308 11:21:09.350108 3108 log.go:172] (0xc0001046e0) Data frame received for 1\nI0308 11:21:09.350150 3108 log.go:172] (0xc00073b5e0) (1) Data frame handling\nI0308 11:21:09.350166 3108 log.go:172] (0xc00073b5e0) (1) Data frame sent\nI0308 11:21:09.350213 3108 log.go:172] (0xc0001046e0) (0xc00073b5e0) Stream removed, broadcasting: 1\nI0308 11:21:09.350291 3108 log.go:172] (0xc0001046e0) Go away received\nI0308 11:21:09.350569 3108 log.go:172] (0xc0001046e0) (0xc00073b5e0) Stream removed, broadcasting: 1\nI0308 11:21:09.350587 3108 log.go:172] (0xc0001046e0) (0xc00068db80) Stream removed, broadcasting: 3\nI0308 11:21:09.350595 3108 log.go:172] (0xc0001046e0) (0xc0008bc000) Stream removed, broadcasting: 5\n" Mar 8 11:21:09.353: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-7490.svc.cluster.local\tcanonical name = externalsvc.services-7490.svc.cluster.local.\nName:\texternalsvc.services-7490.svc.cluster.local\nAddress: 10.96.254.173\n\n" STEP: deleting ReplicationController externalsvc in namespace services-7490, will wait for the garbage collector to delete the pods Mar 8 11:21:09.411: INFO: Deleting ReplicationController externalsvc took: 5.079649ms Mar 8 11:21:09.712: INFO: Terminating ReplicationController externalsvc pods took: 300.243847ms Mar 8 11:21:19.553: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:21:19.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7490" for this suite. 
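What the test converts to is an ExternalName service, i.e. a pure DNS alias: after the change, the nslookup of nodeport-service above returns a CNAME to the externalName instead of a cluster IP, which is visible in the captured stdout. The target shape of the service, with names taken from the log and everything else dropped for brevity, is roughly:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: nodeport-service
  namespace: services-7490
spec:
  type: ExternalName
  externalName: externalsvc.services-7490.svc.cluster.local
EOF

Note that an in-place conversion of an existing NodePort service also needs spec.clusterIP cleared and any allocated nodePorts dropped in the same update, since an ExternalName service carries neither.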
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:15.767 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":208,"skipped":3336,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:21:19.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 11:21:19.630: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:21:20.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9155" for this suite. 
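The getting/updating/patching of the status sub-resource exercised above only works if the CRD opts in via subresources.status; with that set, /status becomes a distinct endpoint, and writes to it cannot modify anything outside .status (and vice versa). The relevant stanza, with an illustrative group and names rather than the run's own fixture:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: noxus.mygroup.example.com        # illustrative
spec:
  group: mygroup.example.com
  scope: Namespaced
  names: {plural: noxus, singular: noxu, kind: Noxu}
  versions:
  - name: v1
    served: true
    storage: true
    subresources:
      status: {}                         # enables GET/PUT/PATCH on .../noxus/<name>/status
    schema:
      openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}
EOF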
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":209,"skipped":3342,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:21:20.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 11:21:20.319: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:21:22.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8766" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":210,"skipped":3364,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:21:22.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-8924 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-8924 STEP: Deleting pre-stop pod Mar 8 11:21:31.581: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." 
], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:21:31.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-8924" for this suite. • [SLOW TEST:9.128 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":211,"skipped":3369,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:21:31.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8151 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8151;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8151 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8151;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8151.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8151.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8151.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8151.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8151.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8151.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8151.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8151.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8151.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8151.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8151.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8151.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8151.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 221.26.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.26.221_udp@PTR;check="$$(dig +tcp +noall +answer +search 221.26.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.26.221_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8151 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8151;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8151 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8151;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8151.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8151.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8151.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8151.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8151.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8151.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8151.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8151.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8151.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8151.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8151.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8151.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8151.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 221.26.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.26.221_udp@PTR;check="$$(dig +tcp +noall +answer +search 221.26.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.26.221_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 8 11:21:35.762: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:35.765: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:35.768: INFO: Unable to read wheezy_udp@dns-test-service.dns-8151 from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:35.771: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8151 from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:35.774: INFO: Unable to read wheezy_udp@dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:35.777: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:35.780: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:35.783: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:35.803: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:35.806: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:35.809: INFO: Unable to read jessie_udp@dns-test-service.dns-8151 from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:35.811: INFO: Unable to read jessie_tcp@dns-test-service.dns-8151 from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:35.814: INFO: Unable to read jessie_udp@dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:35.816: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:35.818: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:35.821: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:35.835: INFO: Lookups using dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8151 wheezy_tcp@dns-test-service.dns-8151 wheezy_udp@dns-test-service.dns-8151.svc wheezy_tcp@dns-test-service.dns-8151.svc wheezy_udp@_http._tcp.dns-test-service.dns-8151.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8151.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8151 jessie_tcp@dns-test-service.dns-8151 jessie_udp@dns-test-service.dns-8151.svc jessie_tcp@dns-test-service.dns-8151.svc jessie_udp@_http._tcp.dns-test-service.dns-8151.svc jessie_tcp@_http._tcp.dns-test-service.dns-8151.svc] Mar 8 11:21:40.840: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:40.843: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:40.846: INFO: Unable to read wheezy_udp@dns-test-service.dns-8151 from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:40.849: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8151 from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:40.852: INFO: Unable to read wheezy_udp@dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:40.855: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:40.858: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:40.861: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:40.880: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:40.882: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:40.884: INFO: Unable to read jessie_udp@dns-test-service.dns-8151 from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:40.887: INFO: Unable to read jessie_tcp@dns-test-service.dns-8151 from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:40.889: INFO: Unable to read jessie_udp@dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:40.892: INFO: Unable to read jessie_tcp@dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:40.894: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:40.900: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:40.913: INFO: Lookups using dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8151 wheezy_tcp@dns-test-service.dns-8151 wheezy_udp@dns-test-service.dns-8151.svc wheezy_tcp@dns-test-service.dns-8151.svc wheezy_udp@_http._tcp.dns-test-service.dns-8151.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8151.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8151 jessie_tcp@dns-test-service.dns-8151 jessie_udp@dns-test-service.dns-8151.svc jessie_tcp@dns-test-service.dns-8151.svc jessie_udp@_http._tcp.dns-test-service.dns-8151.svc jessie_tcp@_http._tcp.dns-test-service.dns-8151.svc] Mar 8 11:21:45.839: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:45.841: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:45.843: INFO: Unable to read wheezy_udp@dns-test-service.dns-8151 from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:45.846: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8151 from pod 
dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:45.849: INFO: Unable to read wheezy_udp@dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:45.851: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:45.854: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:45.856: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:45.871: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:45.874: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:45.876: INFO: Unable to read jessie_udp@dns-test-service.dns-8151 from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:45.878: INFO: Unable to read jessie_tcp@dns-test-service.dns-8151 from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:45.880: INFO: Unable to read jessie_udp@dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:45.881: INFO: Unable to read jessie_tcp@dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:45.883: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:45.886: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:45.910: INFO: Lookups using dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8151 wheezy_tcp@dns-test-service.dns-8151 wheezy_udp@dns-test-service.dns-8151.svc wheezy_tcp@dns-test-service.dns-8151.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-8151.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8151.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8151 jessie_tcp@dns-test-service.dns-8151 jessie_udp@dns-test-service.dns-8151.svc jessie_tcp@dns-test-service.dns-8151.svc jessie_udp@_http._tcp.dns-test-service.dns-8151.svc jessie_tcp@_http._tcp.dns-test-service.dns-8151.svc] Mar 8 11:21:50.839: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:50.842: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:50.845: INFO: Unable to read wheezy_udp@dns-test-service.dns-8151 from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:50.848: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8151 from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:50.851: INFO: Unable to read wheezy_udp@dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:50.854: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:50.857: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:50.859: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:50.878: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:50.881: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:50.883: INFO: Unable to read jessie_udp@dns-test-service.dns-8151 from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:50.885: INFO: Unable to read jessie_tcp@dns-test-service.dns-8151 from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:50.888: INFO: Unable to read jessie_udp@dns-test-service.dns-8151.svc from pod 
dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:50.890: INFO: Unable to read jessie_tcp@dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:50.892: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:50.895: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:50.909: INFO: Lookups using dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8151 wheezy_tcp@dns-test-service.dns-8151 wheezy_udp@dns-test-service.dns-8151.svc wheezy_tcp@dns-test-service.dns-8151.svc wheezy_udp@_http._tcp.dns-test-service.dns-8151.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8151.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8151 jessie_tcp@dns-test-service.dns-8151 jessie_udp@dns-test-service.dns-8151.svc jessie_tcp@dns-test-service.dns-8151.svc jessie_udp@_http._tcp.dns-test-service.dns-8151.svc jessie_tcp@_http._tcp.dns-test-service.dns-8151.svc] Mar 8 11:21:55.839: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:55.843: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:55.846: INFO: Unable to read wheezy_udp@dns-test-service.dns-8151 from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:55.849: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8151 from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:55.853: INFO: Unable to read wheezy_udp@dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:55.856: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:55.859: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:55.862: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8151.svc from pod 
dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:55.882: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:55.884: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:55.887: INFO: Unable to read jessie_udp@dns-test-service.dns-8151 from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:55.889: INFO: Unable to read jessie_tcp@dns-test-service.dns-8151 from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:55.892: INFO: Unable to read jessie_udp@dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:55.894: INFO: Unable to read jessie_tcp@dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:55.897: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:55.899: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:21:55.915: INFO: Lookups using dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8151 wheezy_tcp@dns-test-service.dns-8151 wheezy_udp@dns-test-service.dns-8151.svc wheezy_tcp@dns-test-service.dns-8151.svc wheezy_udp@_http._tcp.dns-test-service.dns-8151.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8151.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8151 jessie_tcp@dns-test-service.dns-8151 jessie_udp@dns-test-service.dns-8151.svc jessie_tcp@dns-test-service.dns-8151.svc jessie_udp@_http._tcp.dns-test-service.dns-8151.svc jessie_tcp@_http._tcp.dns-test-service.dns-8151.svc] Mar 8 11:22:00.839: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:22:00.843: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:22:00.846: INFO: Unable to read wheezy_udp@dns-test-service.dns-8151 from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could 
not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:22:00.849: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8151 from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:22:00.852: INFO: Unable to read wheezy_udp@dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:22:00.855: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:22:00.858: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:22:00.861: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:22:00.881: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:22:00.884: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:22:00.886: INFO: Unable to read jessie_udp@dns-test-service.dns-8151 from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:22:00.889: INFO: Unable to read jessie_tcp@dns-test-service.dns-8151 from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:22:00.891: INFO: Unable to read jessie_udp@dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:22:00.894: INFO: Unable to read jessie_tcp@dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:22:00.897: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:22:00.899: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8151.svc from pod dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643: the server could not find the requested resource (get pods dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643) Mar 8 11:22:00.915: INFO: Lookups using dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service 
wheezy_udp@dns-test-service.dns-8151 wheezy_tcp@dns-test-service.dns-8151 wheezy_udp@dns-test-service.dns-8151.svc wheezy_tcp@dns-test-service.dns-8151.svc wheezy_udp@_http._tcp.dns-test-service.dns-8151.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8151.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8151 jessie_tcp@dns-test-service.dns-8151 jessie_udp@dns-test-service.dns-8151.svc jessie_tcp@dns-test-service.dns-8151.svc jessie_udp@_http._tcp.dns-test-service.dns-8151.svc jessie_tcp@_http._tcp.dns-test-service.dns-8151.svc] Mar 8 11:22:05.911: INFO: DNS probes using dns-8151/dns-test-fd93901e-3b93-43ea-a2c3-c7032e767643 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:22:06.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8151" for this suite. • [SLOW TEST:34.488 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":212,"skipped":3385,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:22:06.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
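------------------------------
For reference: the [It] block below creates a pod whose container registers an HTTP preStop lifecycle hook, deletes it, and then checks that the handler container (created in the BeforeEach step above) received the GET. In pod-spec terms the hook looks roughly like the sketch below, written with current k8s.io/api types (the handler type is corev1.LifecycleHandler today; in the v1.17 vintage of this log it was corev1.Handler). The image, path, host and port here are illustrative, not the suite's actual values:

package main

import (
	"encoding/json"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-prestop-http-hook",
				Image: "k8s.gcr.io/pause:3.2", // hypothetical image
				Lifecycle: &corev1.Lifecycle{
					// On pod deletion, kubelet performs this GET before
					// sending SIGTERM; the "check prestop hook" step then
					// verifies the request arrived on the handler side.
					PreStop: &corev1.LifecycleHandler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=prestop",
							Host: "10.244.0.10", // hypothetical: handler pod IP
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}

	// Print the manifest so the sketch is runnable without a cluster.
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	if err := enc.Encode(pod); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}

Because kubelet fires the hook before SIGTERM, the pod lingers through the repeated "still exists" polls below while the hook and graceful termination complete.
------------------------------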
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 8 11:22:10.256: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 8 11:22:10.267: INFO: Pod pod-with-prestop-http-hook still exists Mar 8 11:22:12.267: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 8 11:22:12.271: INFO: Pod pod-with-prestop-http-hook still exists Mar 8 11:22:14.267: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 8 11:22:14.272: INFO: Pod pod-with-prestop-http-hook still exists Mar 8 11:22:16.267: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 8 11:22:16.271: INFO: Pod pod-with-prestop-http-hook still exists Mar 8 11:22:18.267: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 8 11:22:18.271: INFO: Pod pod-with-prestop-http-hook still exists Mar 8 11:22:20.267: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 8 11:22:20.271: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:22:20.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4851" for this suite. • [SLOW TEST:14.196 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":213,"skipped":3419,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:22:20.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Mar 8 11:22:24.458: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3784 PodName:test-pod ContainerName:busybox-1 Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 11:22:24.458: INFO: >>> kubeConfig: /root/.kube/config I0308 11:22:24.493745 6 log.go:172] (0xc002a909a0) (0xc0017db720) Create stream I0308 11:22:24.493784 6 log.go:172] (0xc002a909a0) (0xc0017db720) Stream added, broadcasting: 1 I0308 11:22:24.499726 6 log.go:172] (0xc002a909a0) Reply frame received for 1 I0308 11:22:24.499768 6 log.go:172] (0xc002a909a0) (0xc0017db7c0) Create stream I0308 11:22:24.499784 6 log.go:172] (0xc002a909a0) (0xc0017db7c0) Stream added, broadcasting: 3 I0308 11:22:24.501656 6 log.go:172] (0xc002a909a0) Reply frame received for 3 I0308 11:22:24.501692 6 log.go:172] (0xc002a909a0) (0xc001d268c0) Create stream I0308 11:22:24.501707 6 log.go:172] (0xc002a909a0) (0xc001d268c0) Stream added, broadcasting: 5 I0308 11:22:24.502788 6 log.go:172] (0xc002a909a0) Reply frame received for 5 I0308 11:22:24.553967 6 log.go:172] (0xc002a909a0) Data frame received for 3 I0308 11:22:24.553994 6 log.go:172] (0xc0017db7c0) (3) Data frame handling I0308 11:22:24.554006 6 log.go:172] (0xc0017db7c0) (3) Data frame sent I0308 11:22:24.554023 6 log.go:172] (0xc002a909a0) Data frame received for 3 I0308 11:22:24.554038 6 log.go:172] (0xc0017db7c0) (3) Data frame handling I0308 11:22:24.554057 6 log.go:172] (0xc002a909a0) Data frame received for 5 I0308 11:22:24.554076 6 log.go:172] (0xc001d268c0) (5) Data frame handling I0308 11:22:24.555682 6 log.go:172] (0xc002a909a0) Data frame received for 1 I0308 11:22:24.555709 6 log.go:172] (0xc0017db720) (1) Data frame handling I0308 11:22:24.555728 6 log.go:172] (0xc0017db720) (1) Data frame sent I0308 11:22:24.555744 6 log.go:172] (0xc002a909a0) (0xc0017db720) Stream removed, broadcasting: 1 I0308 11:22:24.555770 6 log.go:172] (0xc002a909a0) Go away received I0308 11:22:24.556009 6 log.go:172] (0xc002a909a0) (0xc0017db720) Stream removed, broadcasting: 1 I0308 11:22:24.556038 6 log.go:172] (0xc002a909a0) (0xc0017db7c0) Stream removed, broadcasting: 3 I0308 11:22:24.556052 6 log.go:172] (0xc002a909a0) (0xc001d268c0) Stream removed, broadcasting: 5 Mar 8 11:22:24.556: INFO: Exec stderr: "" Mar 8 11:22:24.556: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3784 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 11:22:24.556: INFO: >>> kubeConfig: /root/.kube/config I0308 11:22:24.591567 6 log.go:172] (0xc002a91130) (0xc0017dbc20) Create stream I0308 11:22:24.591591 6 log.go:172] (0xc002a91130) (0xc0017dbc20) Stream added, broadcasting: 1 I0308 11:22:24.594020 6 log.go:172] (0xc002a91130) Reply frame received for 1 I0308 11:22:24.594058 6 log.go:172] (0xc002a91130) (0xc0017dbd60) Create stream I0308 11:22:24.594071 6 log.go:172] (0xc002a91130) (0xc0017dbd60) Stream added, broadcasting: 3 I0308 11:22:24.595116 6 log.go:172] (0xc002a91130) Reply frame received for 3 I0308 11:22:24.595161 6 log.go:172] (0xc002a91130) (0xc002064000) Create stream I0308 11:22:24.595177 6 log.go:172] (0xc002a91130) (0xc002064000) Stream added, broadcasting: 5 I0308 11:22:24.596096 6 log.go:172] (0xc002a91130) Reply frame received for 5 I0308 11:22:24.685432 6 log.go:172] (0xc002a91130) Data frame received for 5 I0308 11:22:24.685468 6 log.go:172] (0xc002064000) (5) Data frame handling I0308 11:22:24.685498 6 log.go:172] (0xc002a91130) Data frame received for 3 I0308 11:22:24.685528 6 log.go:172] (0xc0017dbd60) (3) Data frame handling I0308 11:22:24.685548 6 log.go:172] (0xc0017dbd60) (3) Data 
frame sent I0308 11:22:24.685561 6 log.go:172] (0xc002a91130) Data frame received for 3 I0308 11:22:24.685569 6 log.go:172] (0xc0017dbd60) (3) Data frame handling I0308 11:22:24.686960 6 log.go:172] (0xc002a91130) Data frame received for 1 I0308 11:22:24.686987 6 log.go:172] (0xc0017dbc20) (1) Data frame handling I0308 11:22:24.686997 6 log.go:172] (0xc0017dbc20) (1) Data frame sent I0308 11:22:24.687007 6 log.go:172] (0xc002a91130) (0xc0017dbc20) Stream removed, broadcasting: 1 I0308 11:22:24.687025 6 log.go:172] (0xc002a91130) Go away received I0308 11:22:24.687139 6 log.go:172] (0xc002a91130) (0xc0017dbc20) Stream removed, broadcasting: 1 I0308 11:22:24.687160 6 log.go:172] (0xc002a91130) (0xc0017dbd60) Stream removed, broadcasting: 3 I0308 11:22:24.687174 6 log.go:172] (0xc002a91130) (0xc002064000) Stream removed, broadcasting: 5 Mar 8 11:22:24.687: INFO: Exec stderr: "" Mar 8 11:22:24.687: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3784 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 11:22:24.687: INFO: >>> kubeConfig: /root/.kube/config I0308 11:22:24.761570 6 log.go:172] (0xc002c73130) (0xc001d26be0) Create stream I0308 11:22:24.761602 6 log.go:172] (0xc002c73130) (0xc001d26be0) Stream added, broadcasting: 1 I0308 11:22:24.764176 6 log.go:172] (0xc002c73130) Reply frame received for 1 I0308 11:22:24.764214 6 log.go:172] (0xc002c73130) (0xc001cd7400) Create stream I0308 11:22:24.764223 6 log.go:172] (0xc002c73130) (0xc001cd7400) Stream added, broadcasting: 3 I0308 11:22:24.765088 6 log.go:172] (0xc002c73130) Reply frame received for 3 I0308 11:22:24.765130 6 log.go:172] (0xc002c73130) (0xc0017dbe00) Create stream I0308 11:22:24.765143 6 log.go:172] (0xc002c73130) (0xc0017dbe00) Stream added, broadcasting: 5 I0308 11:22:24.765915 6 log.go:172] (0xc002c73130) Reply frame received for 5 I0308 11:22:24.821491 6 log.go:172] (0xc002c73130) Data frame received for 3 I0308 11:22:24.821528 6 log.go:172] (0xc001cd7400) (3) Data frame handling I0308 11:22:24.821558 6 log.go:172] (0xc001cd7400) (3) Data frame sent I0308 11:22:24.821597 6 log.go:172] (0xc002c73130) Data frame received for 5 I0308 11:22:24.821629 6 log.go:172] (0xc0017dbe00) (5) Data frame handling I0308 11:22:24.821692 6 log.go:172] (0xc002c73130) Data frame received for 3 I0308 11:22:24.821709 6 log.go:172] (0xc001cd7400) (3) Data frame handling I0308 11:22:24.823093 6 log.go:172] (0xc002c73130) Data frame received for 1 I0308 11:22:24.823133 6 log.go:172] (0xc001d26be0) (1) Data frame handling I0308 11:22:24.823146 6 log.go:172] (0xc001d26be0) (1) Data frame sent I0308 11:22:24.823235 6 log.go:172] (0xc002c73130) (0xc001d26be0) Stream removed, broadcasting: 1 I0308 11:22:24.823274 6 log.go:172] (0xc002c73130) Go away received I0308 11:22:24.823464 6 log.go:172] (0xc002c73130) (0xc001d26be0) Stream removed, broadcasting: 1 I0308 11:22:24.823496 6 log.go:172] (0xc002c73130) (0xc001cd7400) Stream removed, broadcasting: 3 I0308 11:22:24.823539 6 log.go:172] (0xc002c73130) (0xc0017dbe00) Stream removed, broadcasting: 5 Mar 8 11:22:24.823: INFO: Exec stderr: "" Mar 8 11:22:24.823: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3784 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 11:22:24.823: INFO: >>> kubeConfig: /root/.kube/config I0308 11:22:24.856834 6 log.go:172] (0xc004bfe6e0) (0xc001cd77c0) Create stream I0308 
11:22:24.856858 6 log.go:172] (0xc004bfe6e0) (0xc001cd77c0) Stream added, broadcasting: 1 I0308 11:22:24.859516 6 log.go:172] (0xc004bfe6e0) Reply frame received for 1 I0308 11:22:24.859556 6 log.go:172] (0xc004bfe6e0) (0xc001cd7860) Create stream I0308 11:22:24.859571 6 log.go:172] (0xc004bfe6e0) (0xc001cd7860) Stream added, broadcasting: 3 I0308 11:22:24.860515 6 log.go:172] (0xc004bfe6e0) Reply frame received for 3 I0308 11:22:24.860546 6 log.go:172] (0xc004bfe6e0) (0xc0020640a0) Create stream I0308 11:22:24.860557 6 log.go:172] (0xc004bfe6e0) (0xc0020640a0) Stream added, broadcasting: 5 I0308 11:22:24.861345 6 log.go:172] (0xc004bfe6e0) Reply frame received for 5 I0308 11:22:24.921041 6 log.go:172] (0xc004bfe6e0) Data frame received for 5 I0308 11:22:24.921079 6 log.go:172] (0xc0020640a0) (5) Data frame handling I0308 11:22:24.921117 6 log.go:172] (0xc004bfe6e0) Data frame received for 3 I0308 11:22:24.921149 6 log.go:172] (0xc001cd7860) (3) Data frame handling I0308 11:22:24.921178 6 log.go:172] (0xc001cd7860) (3) Data frame sent I0308 11:22:24.921191 6 log.go:172] (0xc004bfe6e0) Data frame received for 3 I0308 11:22:24.921202 6 log.go:172] (0xc001cd7860) (3) Data frame handling I0308 11:22:24.922160 6 log.go:172] (0xc004bfe6e0) Data frame received for 1 I0308 11:22:24.922179 6 log.go:172] (0xc001cd77c0) (1) Data frame handling I0308 11:22:24.922187 6 log.go:172] (0xc001cd77c0) (1) Data frame sent I0308 11:22:24.922196 6 log.go:172] (0xc004bfe6e0) (0xc001cd77c0) Stream removed, broadcasting: 1 I0308 11:22:24.922212 6 log.go:172] (0xc004bfe6e0) Go away received I0308 11:22:24.922355 6 log.go:172] (0xc004bfe6e0) (0xc001cd77c0) Stream removed, broadcasting: 1 I0308 11:22:24.922389 6 log.go:172] (0xc004bfe6e0) (0xc001cd7860) Stream removed, broadcasting: 3 I0308 11:22:24.922418 6 log.go:172] (0xc004bfe6e0) (0xc0020640a0) Stream removed, broadcasting: 5 Mar 8 11:22:24.922: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Mar 8 11:22:24.922: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3784 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 11:22:24.922: INFO: >>> kubeConfig: /root/.kube/config I0308 11:22:24.955346 6 log.go:172] (0xc0028de630) (0xc002064320) Create stream I0308 11:22:24.955371 6 log.go:172] (0xc0028de630) (0xc002064320) Stream added, broadcasting: 1 I0308 11:22:24.957877 6 log.go:172] (0xc0028de630) Reply frame received for 1 I0308 11:22:24.957908 6 log.go:172] (0xc0028de630) (0xc001cd7900) Create stream I0308 11:22:24.957916 6 log.go:172] (0xc0028de630) (0xc001cd7900) Stream added, broadcasting: 3 I0308 11:22:24.958707 6 log.go:172] (0xc0028de630) Reply frame received for 3 I0308 11:22:24.958754 6 log.go:172] (0xc0028de630) (0xc0017dbea0) Create stream I0308 11:22:24.958766 6 log.go:172] (0xc0028de630) (0xc0017dbea0) Stream added, broadcasting: 5 I0308 11:22:24.959568 6 log.go:172] (0xc0028de630) Reply frame received for 5 I0308 11:22:25.029683 6 log.go:172] (0xc0028de630) Data frame received for 5 I0308 11:22:25.029717 6 log.go:172] (0xc0017dbea0) (5) Data frame handling I0308 11:22:25.029735 6 log.go:172] (0xc0028de630) Data frame received for 3 I0308 11:22:25.029745 6 log.go:172] (0xc001cd7900) (3) Data frame handling I0308 11:22:25.029756 6 log.go:172] (0xc001cd7900) (3) Data frame sent I0308 11:22:25.029845 6 log.go:172] (0xc0028de630) Data frame received for 3 I0308 11:22:25.029867 6 
log.go:172] (0xc001cd7900) (3) Data frame handling I0308 11:22:25.031376 6 log.go:172] (0xc0028de630) Data frame received for 1 I0308 11:22:25.031416 6 log.go:172] (0xc002064320) (1) Data frame handling I0308 11:22:25.031450 6 log.go:172] (0xc002064320) (1) Data frame sent I0308 11:22:25.031521 6 log.go:172] (0xc0028de630) (0xc002064320) Stream removed, broadcasting: 1 I0308 11:22:25.031560 6 log.go:172] (0xc0028de630) Go away received I0308 11:22:25.031693 6 log.go:172] (0xc0028de630) (0xc002064320) Stream removed, broadcasting: 1 I0308 11:22:25.031718 6 log.go:172] (0xc0028de630) (0xc001cd7900) Stream removed, broadcasting: 3 I0308 11:22:25.031732 6 log.go:172] (0xc0028de630) (0xc0017dbea0) Stream removed, broadcasting: 5 Mar 8 11:22:25.031: INFO: Exec stderr: "" Mar 8 11:22:25.031: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3784 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 11:22:25.031: INFO: >>> kubeConfig: /root/.kube/config I0308 11:22:25.062056 6 log.go:172] (0xc0028ded10) (0xc002064500) Create stream I0308 11:22:25.062082 6 log.go:172] (0xc0028ded10) (0xc002064500) Stream added, broadcasting: 1 I0308 11:22:25.064790 6 log.go:172] (0xc0028ded10) Reply frame received for 1 I0308 11:22:25.064826 6 log.go:172] (0xc0028ded10) (0xc001cd7a40) Create stream I0308 11:22:25.064843 6 log.go:172] (0xc0028ded10) (0xc001cd7a40) Stream added, broadcasting: 3 I0308 11:22:25.065810 6 log.go:172] (0xc0028ded10) Reply frame received for 3 I0308 11:22:25.065845 6 log.go:172] (0xc0028ded10) (0xc001cd7b80) Create stream I0308 11:22:25.065858 6 log.go:172] (0xc0028ded10) (0xc001cd7b80) Stream added, broadcasting: 5 I0308 11:22:25.066833 6 log.go:172] (0xc0028ded10) Reply frame received for 5 I0308 11:22:25.133364 6 log.go:172] (0xc0028ded10) Data frame received for 5 I0308 11:22:25.133389 6 log.go:172] (0xc001cd7b80) (5) Data frame handling I0308 11:22:25.133409 6 log.go:172] (0xc0028ded10) Data frame received for 3 I0308 11:22:25.133426 6 log.go:172] (0xc001cd7a40) (3) Data frame handling I0308 11:22:25.133439 6 log.go:172] (0xc001cd7a40) (3) Data frame sent I0308 11:22:25.133457 6 log.go:172] (0xc0028ded10) Data frame received for 3 I0308 11:22:25.133492 6 log.go:172] (0xc001cd7a40) (3) Data frame handling I0308 11:22:25.134607 6 log.go:172] (0xc0028ded10) Data frame received for 1 I0308 11:22:25.134655 6 log.go:172] (0xc002064500) (1) Data frame handling I0308 11:22:25.134683 6 log.go:172] (0xc002064500) (1) Data frame sent I0308 11:22:25.134710 6 log.go:172] (0xc0028ded10) (0xc002064500) Stream removed, broadcasting: 1 I0308 11:22:25.134748 6 log.go:172] (0xc0028ded10) Go away received I0308 11:22:25.135092 6 log.go:172] (0xc0028ded10) (0xc002064500) Stream removed, broadcasting: 1 I0308 11:22:25.135114 6 log.go:172] (0xc0028ded10) (0xc001cd7a40) Stream removed, broadcasting: 3 I0308 11:22:25.135127 6 log.go:172] (0xc0028ded10) (0xc001cd7b80) Stream removed, broadcasting: 5 Mar 8 11:22:25.135: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Mar 8 11:22:25.135: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3784 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 11:22:25.135: INFO: >>> kubeConfig: /root/.kube/config I0308 11:22:25.165140 6 log.go:172] (0xc0028df340) (0xc0020648c0) Create stream I0308 
11:22:25.165161 6 log.go:172] (0xc0028df340) (0xc0020648c0) Stream added, broadcasting: 1 I0308 11:22:25.169795 6 log.go:172] (0xc0028df340) Reply frame received for 1 I0308 11:22:25.169832 6 log.go:172] (0xc0028df340) (0xc002064960) Create stream I0308 11:22:25.169846 6 log.go:172] (0xc0028df340) (0xc002064960) Stream added, broadcasting: 3 I0308 11:22:25.171040 6 log.go:172] (0xc0028df340) Reply frame received for 3 I0308 11:22:25.171078 6 log.go:172] (0xc0028df340) (0xc001cd7c20) Create stream I0308 11:22:25.171092 6 log.go:172] (0xc0028df340) (0xc001cd7c20) Stream added, broadcasting: 5 I0308 11:22:25.171693 6 log.go:172] (0xc0028df340) Reply frame received for 5 I0308 11:22:25.241221 6 log.go:172] (0xc0028df340) Data frame received for 3 I0308 11:22:25.241247 6 log.go:172] (0xc002064960) (3) Data frame handling I0308 11:22:25.241266 6 log.go:172] (0xc002064960) (3) Data frame sent I0308 11:22:25.241280 6 log.go:172] (0xc0028df340) Data frame received for 3 I0308 11:22:25.241292 6 log.go:172] (0xc002064960) (3) Data frame handling I0308 11:22:25.241480 6 log.go:172] (0xc0028df340) Data frame received for 5 I0308 11:22:25.241503 6 log.go:172] (0xc001cd7c20) (5) Data frame handling I0308 11:22:25.243245 6 log.go:172] (0xc0028df340) Data frame received for 1 I0308 11:22:25.243270 6 log.go:172] (0xc0020648c0) (1) Data frame handling I0308 11:22:25.243293 6 log.go:172] (0xc0020648c0) (1) Data frame sent I0308 11:22:25.243404 6 log.go:172] (0xc0028df340) (0xc0020648c0) Stream removed, broadcasting: 1 I0308 11:22:25.243493 6 log.go:172] (0xc0028df340) (0xc0020648c0) Stream removed, broadcasting: 1 I0308 11:22:25.243508 6 log.go:172] (0xc0028df340) (0xc002064960) Stream removed, broadcasting: 3 I0308 11:22:25.243523 6 log.go:172] (0xc0028df340) (0xc001cd7c20) Stream removed, broadcasting: 5 Mar 8 11:22:25.243: INFO: Exec stderr: "" Mar 8 11:22:25.243: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3784 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 11:22:25.243: INFO: >>> kubeConfig: /root/.kube/config I0308 11:22:25.244105 6 log.go:172] (0xc0028df340) Go away received I0308 11:22:25.273685 6 log.go:172] (0xc004bfed10) (0xc0020d6000) Create stream I0308 11:22:25.273710 6 log.go:172] (0xc004bfed10) (0xc0020d6000) Stream added, broadcasting: 1 I0308 11:22:25.275595 6 log.go:172] (0xc004bfed10) Reply frame received for 1 I0308 11:22:25.275632 6 log.go:172] (0xc004bfed10) (0xc0020d6140) Create stream I0308 11:22:25.275645 6 log.go:172] (0xc004bfed10) (0xc0020d6140) Stream added, broadcasting: 3 I0308 11:22:25.276497 6 log.go:172] (0xc004bfed10) Reply frame received for 3 I0308 11:22:25.276523 6 log.go:172] (0xc004bfed10) (0xc002064a00) Create stream I0308 11:22:25.276537 6 log.go:172] (0xc004bfed10) (0xc002064a00) Stream added, broadcasting: 5 I0308 11:22:25.277211 6 log.go:172] (0xc004bfed10) Reply frame received for 5 I0308 11:22:25.336560 6 log.go:172] (0xc004bfed10) Data frame received for 3 I0308 11:22:25.336591 6 log.go:172] (0xc0020d6140) (3) Data frame handling I0308 11:22:25.336602 6 log.go:172] (0xc0020d6140) (3) Data frame sent I0308 11:22:25.336616 6 log.go:172] (0xc004bfed10) Data frame received for 3 I0308 11:22:25.336626 6 log.go:172] (0xc0020d6140) (3) Data frame handling I0308 11:22:25.336649 6 log.go:172] (0xc004bfed10) Data frame received for 5 I0308 11:22:25.336665 6 log.go:172] (0xc002064a00) (5) Data frame handling I0308 11:22:25.337841 6 log.go:172] 
(0xc004bfed10) Data frame received for 1 I0308 11:22:25.337863 6 log.go:172] (0xc0020d6000) (1) Data frame handling I0308 11:22:25.337875 6 log.go:172] (0xc0020d6000) (1) Data frame sent I0308 11:22:25.337895 6 log.go:172] (0xc004bfed10) (0xc0020d6000) Stream removed, broadcasting: 1 I0308 11:22:25.337982 6 log.go:172] (0xc004bfed10) (0xc0020d6000) Stream removed, broadcasting: 1 I0308 11:22:25.337996 6 log.go:172] (0xc004bfed10) (0xc0020d6140) Stream removed, broadcasting: 3 I0308 11:22:25.338012 6 log.go:172] (0xc004bfed10) (0xc002064a00) Stream removed, broadcasting: 5 Mar 8 11:22:25.338: INFO: Exec stderr: "" Mar 8 11:22:25.338: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3784 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 11:22:25.338: INFO: >>> kubeConfig: /root/.kube/config I0308 11:22:25.338192 6 log.go:172] (0xc004bfed10) Go away received I0308 11:22:25.410403 6 log.go:172] (0xc002663290) (0xc002142320) Create stream I0308 11:22:25.410434 6 log.go:172] (0xc002663290) (0xc002142320) Stream added, broadcasting: 1 I0308 11:22:25.412453 6 log.go:172] (0xc002663290) Reply frame received for 1 I0308 11:22:25.412486 6 log.go:172] (0xc002663290) (0xc001d26dc0) Create stream I0308 11:22:25.412499 6 log.go:172] (0xc002663290) (0xc001d26dc0) Stream added, broadcasting: 3 I0308 11:22:25.413500 6 log.go:172] (0xc002663290) Reply frame received for 3 I0308 11:22:25.413626 6 log.go:172] (0xc002663290) (0xc0020d61e0) Create stream I0308 11:22:25.413646 6 log.go:172] (0xc002663290) (0xc0020d61e0) Stream added, broadcasting: 5 I0308 11:22:25.414882 6 log.go:172] (0xc002663290) Reply frame received for 5 I0308 11:22:25.476044 6 log.go:172] (0xc002663290) Data frame received for 3 I0308 11:22:25.476080 6 log.go:172] (0xc001d26dc0) (3) Data frame handling I0308 11:22:25.476102 6 log.go:172] (0xc001d26dc0) (3) Data frame sent I0308 11:22:25.476165 6 log.go:172] (0xc002663290) Data frame received for 3 I0308 11:22:25.476186 6 log.go:172] (0xc001d26dc0) (3) Data frame handling I0308 11:22:25.476358 6 log.go:172] (0xc002663290) Data frame received for 5 I0308 11:22:25.476375 6 log.go:172] (0xc0020d61e0) (5) Data frame handling I0308 11:22:25.477933 6 log.go:172] (0xc002663290) Data frame received for 1 I0308 11:22:25.477950 6 log.go:172] (0xc002142320) (1) Data frame handling I0308 11:22:25.477959 6 log.go:172] (0xc002142320) (1) Data frame sent I0308 11:22:25.477974 6 log.go:172] (0xc002663290) (0xc002142320) Stream removed, broadcasting: 1 I0308 11:22:25.477993 6 log.go:172] (0xc002663290) Go away received I0308 11:22:25.478136 6 log.go:172] (0xc002663290) (0xc002142320) Stream removed, broadcasting: 1 I0308 11:22:25.478155 6 log.go:172] (0xc002663290) (0xc001d26dc0) Stream removed, broadcasting: 3 I0308 11:22:25.478174 6 log.go:172] (0xc002663290) (0xc0020d61e0) Stream removed, broadcasting: 5 Mar 8 11:22:25.478: INFO: Exec stderr: "" Mar 8 11:22:25.478: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3784 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 11:22:25.478: INFO: >>> kubeConfig: /root/.kube/config I0308 11:22:25.503828 6 log.go:172] (0xc002c73760) (0xc001d27040) Create stream I0308 11:22:25.503853 6 log.go:172] (0xc002c73760) (0xc001d27040) Stream added, broadcasting: 1 I0308 11:22:25.505532 6 log.go:172] (0xc002c73760) Reply frame received for 1 I0308 
11:22:25.505559 6 log.go:172] (0xc002c73760) (0xc0017dbf40) Create stream I0308 11:22:25.505569 6 log.go:172] (0xc002c73760) (0xc0017dbf40) Stream added, broadcasting: 3 I0308 11:22:25.506336 6 log.go:172] (0xc002c73760) Reply frame received for 3 I0308 11:22:25.506359 6 log.go:172] (0xc002c73760) (0xc0020d6320) Create stream I0308 11:22:25.506368 6 log.go:172] (0xc002c73760) (0xc0020d6320) Stream added, broadcasting: 5 I0308 11:22:25.507136 6 log.go:172] (0xc002c73760) Reply frame received for 5 I0308 11:22:25.587980 6 log.go:172] (0xc002c73760) Data frame received for 5 I0308 11:22:25.588009 6 log.go:172] (0xc0020d6320) (5) Data frame handling I0308 11:22:25.588038 6 log.go:172] (0xc002c73760) Data frame received for 3 I0308 11:22:25.588076 6 log.go:172] (0xc0017dbf40) (3) Data frame handling I0308 11:22:25.588090 6 log.go:172] (0xc0017dbf40) (3) Data frame sent I0308 11:22:25.588101 6 log.go:172] (0xc002c73760) Data frame received for 3 I0308 11:22:25.588107 6 log.go:172] (0xc0017dbf40) (3) Data frame handling I0308 11:22:25.589018 6 log.go:172] (0xc002c73760) Data frame received for 1 I0308 11:22:25.589034 6 log.go:172] (0xc001d27040) (1) Data frame handling I0308 11:22:25.589045 6 log.go:172] (0xc001d27040) (1) Data frame sent I0308 11:22:25.589058 6 log.go:172] (0xc002c73760) (0xc001d27040) Stream removed, broadcasting: 1 I0308 11:22:25.589122 6 log.go:172] (0xc002c73760) Go away received I0308 11:22:25.589159 6 log.go:172] (0xc002c73760) (0xc001d27040) Stream removed, broadcasting: 1 I0308 11:22:25.589175 6 log.go:172] (0xc002c73760) (0xc0017dbf40) Stream removed, broadcasting: 3 I0308 11:22:25.589187 6 log.go:172] (0xc002c73760) (0xc0020d6320) Stream removed, broadcasting: 5 Mar 8 11:22:25.589: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:22:25.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-3784" for this suite. 
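All of the ExecWithOptions entries above follow one pattern: the framework opens an exec session against a container and streams back the contents of /etc/hosts (or /etc/hosts-original) so it can compare the kubelet-managed file with the image's original. Below is a minimal sketch of that pattern using client-go's remotecommand package; the namespace, pod, and container names are copied from the log, error handling is compressed, and exec.Stream is the v1.17-era call (newer client-go releases prefer StreamWithContext). The three streams the log shows being created ("broadcasting: 1", "3", "5") are the multiplexed SPDY channels this executor opens for the error, stdout, and stderr data.

```go
package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Build the request against the pod's "exec" subresource, mirroring the
	// ExecWithOptions lines in the log above.
	req := client.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("e2e-kubelet-etc-hosts-3784").
		Name("test-pod").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "busybox-1",
			Command:   []string{"cat", "/etc/hosts"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	// The SPDY executor opens the multiplexed streams that appear in the log
	// as "Create stream ... broadcasting: 1/3/5".
	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var stdout, stderr bytes.Buffer
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		panic(err)
	}
	fmt.Printf("stdout: %q stderr: %q\n", stdout.String(), stderr.String())
}
```

The empty `Exec stderr: ""` lines in the log are exactly the stderr buffer from such a call: the `cat` succeeds, so only the stdout channel carries data.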
• [SLOW TEST:5.305 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":214,"skipped":3458,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:22:25.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 11:22:25.680: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Mar 8 11:22:30.683: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 8 11:22:30.683: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 8 11:22:32.743: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-7001 /apis/apps/v1/namespaces/deployment-7001/deployments/test-cleanup-deployment 2c1bb30e-9de0-4a57-9c92-95a20850f775 24837 1 2020-03-08 11:22:30 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0040a16e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-08 11:22:30 +0000 UTC,LastTransitionTime:2020-03-08 11:22:30 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-55ffc6b7b6" has successfully progressed.,LastUpdateTime:2020-03-08 11:22:32 +0000 UTC,LastTransitionTime:2020-03-08 11:22:30 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 8 11:22:32.746: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-7001 /apis/apps/v1/namespaces/deployment-7001/replicasets/test-cleanup-deployment-55ffc6b7b6 8e868da3-f856-48e0-8598-56c3d370b67b 24826 1 2020-03-08 11:22:30 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 2c1bb30e-9de0-4a57-9c92-95a20850f775 0xc0029efc97 0xc0029efc98}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0029efd08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 8 11:22:32.749: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-x9k4d" is available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-x9k4d test-cleanup-deployment-55ffc6b7b6- deployment-7001 /api/v1/namespaces/deployment-7001/pods/test-cleanup-deployment-55ffc6b7b6-x9k4d 2d8c017d-7b97-4932-9ffc-e3149f37cac5 24825 0 2020-03-08 11:22:30 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 8e868da3-f856-48e0-8598-56c3d370b67b 0xc003798207 0xc003798208}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xnh7r,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xnh7r,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xnh7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:22:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:22:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:22:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:22:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.2,PodIP:10.244.0.107,StartTime:2020-03-08 11:22:30 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-08 11:22:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://b081f96fc03b37aa6b4f626067198061aeb87440799b96ec37dd03608144f95a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.0.107,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:22:32.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7001" for this suite. • [SLOW TEST:7.161 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":215,"skipped":3462,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:22:32.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 11:22:32.856: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-ba627c1a-15b8-4cfd-9277-df8b0563fe08" in namespace "security-context-test-3313" to be "success or failure" Mar 8 11:22:32.900: INFO: Pod "busybox-privileged-false-ba627c1a-15b8-4cfd-9277-df8b0563fe08": Phase="Pending", Reason="", readiness=false. Elapsed: 44.383361ms Mar 8 11:22:34.904: INFO: Pod "busybox-privileged-false-ba627c1a-15b8-4cfd-9277-df8b0563fe08": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.048247313s Mar 8 11:22:34.904: INFO: Pod "busybox-privileged-false-ba627c1a-15b8-4cfd-9277-df8b0563fe08" satisfied condition "success or failure" Mar 8 11:22:34.910: INFO: Got logs for pod "busybox-privileged-false-ba627c1a-15b8-4cfd-9277-df8b0563fe08": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:22:34.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3313" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":216,"skipped":3504,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:22:34.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 11:22:34.980: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:22:40.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2545" for this suite. 
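The "listing custom resource definition objects works" case reduces to a List call against the cluster-scoped customresourcedefinitions resource. A minimal sketch with the apiextensions clientset might look like the following; note that the context-taking List signature shown here belongs to newer client-go releases, while the v1.17 client this suite runs against omits the context argument.

```go
package main

import (
	"context"
	"fmt"

	clientset "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	crdClient := clientset.NewForConfigOrDie(config)

	// CRDs are cluster-scoped, so no namespace is involved in the list.
	crds, err := crdClient.ApiextensionsV1().CustomResourceDefinitions().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, crd := range crds.Items {
		fmt.Println(crd.Name)
	}
}
```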
• [SLOW TEST:5.696 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":217,"skipped":3526,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:22:40.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode Mar 8 11:22:40.674: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-8897" to be "success or failure" Mar 8 11:22:40.680: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.019806ms Mar 8 11:22:42.684: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009237765s STEP: Saw pod success Mar 8 11:22:42.684: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Mar 8 11:22:42.686: INFO: Trying to get logs from node kind-control-plane pod pod-host-path-test container test-container-1: STEP: delete the pod Mar 8 11:22:42.843: INFO: Waiting for pod pod-host-path-test to disappear Mar 8 11:22:42.848: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:22:42.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-8897" for this suite. 
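The HostPath test above creates a pod whose volume is backed by a directory on the node and has a test container report the file mode of the mount point. A rough sketch of such a pod spec in Go follows; the busybox image and the stat command are illustrative stand-ins, not the exact ones the e2e framework uses.

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostPathTestPod returns a pod that mounts a node directory and prints the
// mount point's mode, in the spirit of the "correct mode" test above.
func hostPathTestPod() *corev1.Pod {
	hostPathType := corev1.HostPathDirectoryOrCreate
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-host-path-test"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{
						Path: "/tmp/test-volume", // directory on the node
						Type: &hostPathType,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-1",
				Image:   "busybox",
				Command: []string{"sh", "-c", "stat -c %a /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
		},
	}
}
```

The "Trying to get logs from node ... container test-container-1" step in the log is where the framework reads that container's output and asserts on the reported mode.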
•{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":218,"skipped":3532,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:22:42.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-9fj6r in namespace proxy-8844 I0308 11:22:42.928222 6 runners.go:189] Created replication controller with name: proxy-service-9fj6r, namespace: proxy-8844, replica count: 1 I0308 11:22:43.978653 6 runners.go:189] proxy-service-9fj6r Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0308 11:22:44.978959 6 runners.go:189] proxy-service-9fj6r Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0308 11:22:45.979191 6 runners.go:189] proxy-service-9fj6r Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0308 11:22:46.979410 6 runners.go:189] proxy-service-9fj6r Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0308 11:22:47.979684 6 runners.go:189] proxy-service-9fj6r Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0308 11:22:48.979928 6 runners.go:189] proxy-service-9fj6r Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0308 11:22:49.980231 6 runners.go:189] proxy-service-9fj6r Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0308 11:22:50.980428 6 runners.go:189] proxy-service-9fj6r Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 8 11:22:50.983: INFO: setup took 8.077432638s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Mar 8 11:22:50.988: INFO: (0) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442:1080/proxy/: test<... (200; 4.555318ms) Mar 8 11:22:50.988: INFO: (0) /api/v1/namespaces/proxy-8844/pods/http:proxy-service-9fj6r-ng442:160/proxy/: foo (200; 5.00352ms) Mar 8 11:22:50.997: INFO: (0) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442:162/proxy/: bar (200; 13.934827ms) Mar 8 11:22:50.997: INFO: (0) /api/v1/namespaces/proxy-8844/pods/http:proxy-service-9fj6r-ng442:1080/proxy/: ... 
(200; 13.945492ms) Mar 8 11:22:50.997: INFO: (0) /api/v1/namespaces/proxy-8844/services/proxy-service-9fj6r:portname1/proxy/: foo (200; 13.939717ms) Mar 8 11:22:50.997: INFO: (0) /api/v1/namespaces/proxy-8844/services/http:proxy-service-9fj6r:portname2/proxy/: bar (200; 14.068273ms) Mar 8 11:22:50.997: INFO: (0) /api/v1/namespaces/proxy-8844/services/proxy-service-9fj6r:portname2/proxy/: bar (200; 14.035345ms) Mar 8 11:22:50.999: INFO: (0) /api/v1/namespaces/proxy-8844/services/http:proxy-service-9fj6r:portname1/proxy/: foo (200; 15.202222ms) Mar 8 11:22:50.999: INFO: (0) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442:160/proxy/: foo (200; 15.276274ms) Mar 8 11:22:50.999: INFO: (0) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442/proxy/: test (200; 15.388306ms) Mar 8 11:22:50.999: INFO: (0) /api/v1/namespaces/proxy-8844/pods/http:proxy-service-9fj6r-ng442:162/proxy/: bar (200; 15.721533ms) Mar 8 11:22:51.001: INFO: (0) /api/v1/namespaces/proxy-8844/pods/https:proxy-service-9fj6r-ng442:443/proxy/: test<... (200; 3.392472ms) Mar 8 11:22:51.011: INFO: (1) /api/v1/namespaces/proxy-8844/pods/https:proxy-service-9fj6r-ng442:443/proxy/: ... (200; 5.252569ms) Mar 8 11:22:51.012: INFO: (1) /api/v1/namespaces/proxy-8844/services/http:proxy-service-9fj6r:portname2/proxy/: bar (200; 5.383368ms) Mar 8 11:22:51.012: INFO: (1) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442/proxy/: test (200; 5.380852ms) Mar 8 11:22:51.013: INFO: (1) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442:160/proxy/: foo (200; 5.768191ms) Mar 8 11:22:51.013: INFO: (1) /api/v1/namespaces/proxy-8844/pods/http:proxy-service-9fj6r-ng442:162/proxy/: bar (200; 5.790486ms) Mar 8 11:22:51.013: INFO: (1) /api/v1/namespaces/proxy-8844/services/https:proxy-service-9fj6r:tlsportname2/proxy/: tls qux (200; 5.990604ms) Mar 8 11:22:51.013: INFO: (1) /api/v1/namespaces/proxy-8844/pods/https:proxy-service-9fj6r-ng442:460/proxy/: tls baz (200; 5.886884ms) Mar 8 11:22:51.013: INFO: (1) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442:162/proxy/: bar (200; 5.997181ms) Mar 8 11:22:51.014: INFO: (1) /api/v1/namespaces/proxy-8844/services/https:proxy-service-9fj6r:tlsportname1/proxy/: tls baz (200; 7.418547ms) Mar 8 11:22:51.014: INFO: (1) /api/v1/namespaces/proxy-8844/services/http:proxy-service-9fj6r:portname1/proxy/: foo (200; 7.401948ms) Mar 8 11:22:51.015: INFO: (1) /api/v1/namespaces/proxy-8844/services/proxy-service-9fj6r:portname2/proxy/: bar (200; 7.54476ms) Mar 8 11:22:51.015: INFO: (1) /api/v1/namespaces/proxy-8844/services/proxy-service-9fj6r:portname1/proxy/: foo (200; 7.607056ms) Mar 8 11:22:51.020: INFO: (2) /api/v1/namespaces/proxy-8844/pods/http:proxy-service-9fj6r-ng442:162/proxy/: bar (200; 5.277359ms) Mar 8 11:22:51.022: INFO: (2) /api/v1/namespaces/proxy-8844/pods/https:proxy-service-9fj6r-ng442:443/proxy/: test<... 
(200; 7.15892ms) Mar 8 11:22:51.022: INFO: (2) /api/v1/namespaces/proxy-8844/services/proxy-service-9fj6r:portname2/proxy/: bar (200; 7.171462ms) Mar 8 11:22:51.022: INFO: (2) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442/proxy/: test (200; 7.175438ms) Mar 8 11:22:51.022: INFO: (2) /api/v1/namespaces/proxy-8844/services/https:proxy-service-9fj6r:tlsportname1/proxy/: tls baz (200; 7.130026ms) Mar 8 11:22:51.022: INFO: (2) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442:160/proxy/: foo (200; 7.150959ms) Mar 8 11:22:51.022: INFO: (2) /api/v1/namespaces/proxy-8844/services/http:proxy-service-9fj6r:portname1/proxy/: foo (200; 7.258024ms) Mar 8 11:22:51.022: INFO: (2) /api/v1/namespaces/proxy-8844/services/proxy-service-9fj6r:portname1/proxy/: foo (200; 7.348518ms) Mar 8 11:22:51.022: INFO: (2) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442:162/proxy/: bar (200; 7.456459ms) Mar 8 11:22:51.023: INFO: (2) /api/v1/namespaces/proxy-8844/pods/https:proxy-service-9fj6r-ng442:462/proxy/: tls qux (200; 7.858624ms) Mar 8 11:22:51.023: INFO: (2) /api/v1/namespaces/proxy-8844/services/https:proxy-service-9fj6r:tlsportname2/proxy/: tls qux (200; 7.848016ms) Mar 8 11:22:51.023: INFO: (2) /api/v1/namespaces/proxy-8844/pods/http:proxy-service-9fj6r-ng442:1080/proxy/: ... (200; 7.828077ms) Mar 8 11:22:51.023: INFO: (2) /api/v1/namespaces/proxy-8844/services/http:proxy-service-9fj6r:portname2/proxy/: bar (200; 7.761228ms) Mar 8 11:22:51.023: INFO: (2) /api/v1/namespaces/proxy-8844/pods/https:proxy-service-9fj6r-ng442:460/proxy/: tls baz (200; 7.732039ms) Mar 8 11:22:51.032: INFO: (3) /api/v1/namespaces/proxy-8844/pods/https:proxy-service-9fj6r-ng442:460/proxy/: tls baz (200; 9.590764ms) Mar 8 11:22:51.036: INFO: (3) /api/v1/namespaces/proxy-8844/services/http:proxy-service-9fj6r:portname2/proxy/: bar (200; 13.717042ms) Mar 8 11:22:51.036: INFO: (3) /api/v1/namespaces/proxy-8844/services/http:proxy-service-9fj6r:portname1/proxy/: foo (200; 13.756733ms) Mar 8 11:22:51.037: INFO: (3) /api/v1/namespaces/proxy-8844/services/https:proxy-service-9fj6r:tlsportname1/proxy/: tls baz (200; 14.425303ms) Mar 8 11:22:51.037: INFO: (3) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442:160/proxy/: foo (200; 14.651937ms) Mar 8 11:22:51.037: INFO: (3) /api/v1/namespaces/proxy-8844/pods/https:proxy-service-9fj6r-ng442:462/proxy/: tls qux (200; 14.600608ms) Mar 8 11:22:51.037: INFO: (3) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442:162/proxy/: bar (200; 14.554521ms) Mar 8 11:22:51.037: INFO: (3) /api/v1/namespaces/proxy-8844/pods/http:proxy-service-9fj6r-ng442:1080/proxy/: ... (200; 14.66731ms) Mar 8 11:22:51.037: INFO: (3) /api/v1/namespaces/proxy-8844/pods/https:proxy-service-9fj6r-ng442:443/proxy/: test<... (200; 14.731041ms) Mar 8 11:22:51.037: INFO: (3) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442/proxy/: test (200; 14.816453ms) Mar 8 11:22:51.037: INFO: (3) /api/v1/namespaces/proxy-8844/services/proxy-service-9fj6r:portname1/proxy/: foo (200; 14.776383ms) Mar 8 11:22:51.037: INFO: (3) /api/v1/namespaces/proxy-8844/pods/http:proxy-service-9fj6r-ng442:160/proxy/: foo (200; 14.739292ms) Mar 8 11:22:51.037: INFO: (3) /api/v1/namespaces/proxy-8844/services/proxy-service-9fj6r:portname2/proxy/: bar (200; 14.839606ms) Mar 8 11:22:51.044: INFO: (4) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442:1080/proxy/: test<... 
(200; 6.617231ms) Mar 8 11:22:51.044: INFO: (4) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442:160/proxy/: foo (200; 6.796718ms) Mar 8 11:22:51.045: INFO: (4) /api/v1/namespaces/proxy-8844/pods/https:proxy-service-9fj6r-ng442:443/proxy/: test (200; 9.667416ms) Mar 8 11:22:51.048: INFO: (4) /api/v1/namespaces/proxy-8844/pods/http:proxy-service-9fj6r-ng442:1080/proxy/: ... (200; 9.932859ms) Mar 8 11:22:51.048: INFO: (4) /api/v1/namespaces/proxy-8844/services/https:proxy-service-9fj6r:tlsportname1/proxy/: tls baz (200; 10.188601ms) Mar 8 11:22:51.048: INFO: (4) /api/v1/namespaces/proxy-8844/pods/http:proxy-service-9fj6r-ng442:162/proxy/: bar (200; 10.176812ms) Mar 8 11:22:51.048: INFO: (4) /api/v1/namespaces/proxy-8844/pods/http:proxy-service-9fj6r-ng442:160/proxy/: foo (200; 10.056975ms) Mar 8 11:22:51.048: INFO: (4) /api/v1/namespaces/proxy-8844/services/proxy-service-9fj6r:portname1/proxy/: foo (200; 10.1485ms) Mar 8 11:22:51.061: INFO: (5) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442/proxy/: test (200; 12.516948ms) Mar 8 11:22:51.061: INFO: (5) /api/v1/namespaces/proxy-8844/pods/https:proxy-service-9fj6r-ng442:462/proxy/: tls qux (200; 12.538383ms) Mar 8 11:22:51.061: INFO: (5) /api/v1/namespaces/proxy-8844/pods/http:proxy-service-9fj6r-ng442:1080/proxy/: ... (200; 12.561729ms) Mar 8 11:22:51.061: INFO: (5) /api/v1/namespaces/proxy-8844/pods/http:proxy-service-9fj6r-ng442:162/proxy/: bar (200; 12.579438ms) Mar 8 11:22:51.061: INFO: (5) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442:162/proxy/: bar (200; 12.788567ms) Mar 8 11:22:51.061: INFO: (5) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442:160/proxy/: foo (200; 13.368427ms) Mar 8 11:22:51.061: INFO: (5) /api/v1/namespaces/proxy-8844/pods/https:proxy-service-9fj6r-ng442:460/proxy/: tls baz (200; 13.540452ms) Mar 8 11:22:51.061: INFO: (5) /api/v1/namespaces/proxy-8844/services/proxy-service-9fj6r:portname1/proxy/: foo (200; 13.324943ms) Mar 8 11:22:51.061: INFO: (5) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442:1080/proxy/: test<... (200; 13.436082ms) Mar 8 11:22:51.061: INFO: (5) /api/v1/namespaces/proxy-8844/services/http:proxy-service-9fj6r:portname1/proxy/: foo (200; 13.405694ms) Mar 8 11:22:51.061: INFO: (5) /api/v1/namespaces/proxy-8844/pods/http:proxy-service-9fj6r-ng442:160/proxy/: foo (200; 13.430116ms) Mar 8 11:22:51.062: INFO: (5) /api/v1/namespaces/proxy-8844/services/proxy-service-9fj6r:portname2/proxy/: bar (200; 13.88568ms) Mar 8 11:22:51.062: INFO: (5) /api/v1/namespaces/proxy-8844/services/https:proxy-service-9fj6r:tlsportname2/proxy/: tls qux (200; 13.918086ms) Mar 8 11:22:51.062: INFO: (5) /api/v1/namespaces/proxy-8844/services/http:proxy-service-9fj6r:portname2/proxy/: bar (200; 13.947825ms) Mar 8 11:22:51.062: INFO: (5) /api/v1/namespaces/proxy-8844/services/https:proxy-service-9fj6r:tlsportname1/proxy/: tls baz (200; 13.936171ms) Mar 8 11:22:51.062: INFO: (5) /api/v1/namespaces/proxy-8844/pods/https:proxy-service-9fj6r-ng442:443/proxy/: ... (200; 4.757724ms) Mar 8 11:22:51.067: INFO: (6) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442:160/proxy/: foo (200; 5.293113ms) Mar 8 11:22:51.067: INFO: (6) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442:162/proxy/: bar (200; 5.330839ms) Mar 8 11:22:51.069: INFO: (6) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442/proxy/: test (200; 6.32309ms) Mar 8 11:22:51.069: INFO: (6) /api/v1/namespaces/proxy-8844/pods/https:proxy-service-9fj6r-ng442:443/proxy/: test<... 
(200; 6.452279ms) Mar 8 11:22:51.069: INFO: (6) /api/v1/namespaces/proxy-8844/services/http:proxy-service-9fj6r:portname1/proxy/: foo (200; 6.538853ms) Mar 8 11:22:51.069: INFO: (6) /api/v1/namespaces/proxy-8844/services/proxy-service-9fj6r:portname2/proxy/: bar (200; 6.584883ms) Mar 8 11:22:51.069: INFO: (6) /api/v1/namespaces/proxy-8844/services/https:proxy-service-9fj6r:tlsportname2/proxy/: tls qux (200; 7.111109ms) Mar 8 11:22:51.069: INFO: (6) /api/v1/namespaces/proxy-8844/services/proxy-service-9fj6r:portname1/proxy/: foo (200; 7.149157ms) Mar 8 11:22:51.069: INFO: (6) /api/v1/namespaces/proxy-8844/services/http:proxy-service-9fj6r:portname2/proxy/: bar (200; 7.159358ms) Mar 8 11:22:51.069: INFO: (6) /api/v1/namespaces/proxy-8844/services/https:proxy-service-9fj6r:tlsportname1/proxy/: tls baz (200; 7.184261ms) Mar 8 11:22:51.073: INFO: (7) /api/v1/namespaces/proxy-8844/pods/http:proxy-service-9fj6r-ng442:160/proxy/: foo (200; 3.452658ms) Mar 8 11:22:51.073: INFO: (7) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442/proxy/: test (200; 3.47482ms) Mar 8 11:22:51.073: INFO: (7) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442:160/proxy/: foo (200; 3.478633ms) Mar 8 11:22:51.073: INFO: (7) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442:1080/proxy/: test<... (200; 3.870419ms) Mar 8 11:22:51.074: INFO: (7) /api/v1/namespaces/proxy-8844/pods/https:proxy-service-9fj6r-ng442:462/proxy/: tls qux (200; 4.184683ms) Mar 8 11:22:51.074: INFO: (7) /api/v1/namespaces/proxy-8844/pods/http:proxy-service-9fj6r-ng442:162/proxy/: bar (200; 4.253794ms) Mar 8 11:22:51.076: INFO: (7) /api/v1/namespaces/proxy-8844/services/http:proxy-service-9fj6r:portname2/proxy/: bar (200; 6.150856ms) Mar 8 11:22:51.076: INFO: (7) /api/v1/namespaces/proxy-8844/pods/http:proxy-service-9fj6r-ng442:1080/proxy/: ... (200; 6.746353ms) Mar 8 11:22:51.076: INFO: (7) /api/v1/namespaces/proxy-8844/pods/https:proxy-service-9fj6r-ng442:443/proxy/: ... (200; 5.529103ms) Mar 8 11:22:51.083: INFO: (8) /api/v1/namespaces/proxy-8844/pods/https:proxy-service-9fj6r-ng442:460/proxy/: tls baz (200; 5.56756ms) Mar 8 11:22:51.083: INFO: (8) /api/v1/namespaces/proxy-8844/services/proxy-service-9fj6r:portname1/proxy/: foo (200; 5.655589ms) Mar 8 11:22:51.083: INFO: (8) /api/v1/namespaces/proxy-8844/pods/http:proxy-service-9fj6r-ng442:160/proxy/: foo (200; 5.621721ms) Mar 8 11:22:51.083: INFO: (8) /api/v1/namespaces/proxy-8844/pods/http:proxy-service-9fj6r-ng442:162/proxy/: bar (200; 5.646605ms) Mar 8 11:22:51.083: INFO: (8) /api/v1/namespaces/proxy-8844/services/http:proxy-service-9fj6r:portname1/proxy/: foo (200; 5.329994ms) Mar 8 11:22:51.083: INFO: (8) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442:1080/proxy/: test<... (200; 5.714461ms) Mar 8 11:22:51.083: INFO: (8) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442:162/proxy/: bar (200; 5.678859ms) Mar 8 11:22:51.083: INFO: (8) /api/v1/namespaces/proxy-8844/services/https:proxy-service-9fj6r:tlsportname1/proxy/: tls baz (200; 5.681277ms) Mar 8 11:22:51.083: INFO: (8) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442/proxy/: test (200; 5.680889ms) Mar 8 11:22:51.083: INFO: (8) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442:160/proxy/: foo (200; 5.744905ms) Mar 8 11:22:51.083: INFO: (8) /api/v1/namespaces/proxy-8844/pods/https:proxy-service-9fj6r-ng442:443/proxy/: ... 
(200; 5.336698ms) Mar 8 11:22:51.089: INFO: (9) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442/proxy/: test (200; 5.338332ms) Mar 8 11:22:51.090: INFO: (9) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442:1080/proxy/: test<... (200; 5.463871ms) Mar 8 11:22:51.090: INFO: (9) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442:162/proxy/: bar (200; 5.480742ms) Mar 8 11:22:51.090: INFO: (9) /api/v1/namespaces/proxy-8844/pods/https:proxy-service-9fj6r-ng442:443/proxy/: test (200; 4.492444ms) Mar 8 11:22:51.108: INFO: (10) /api/v1/namespaces/proxy-8844/pods/https:proxy-service-9fj6r-ng442:462/proxy/: tls qux (200; 4.581376ms) Mar 8 11:22:51.109: INFO: (10) /api/v1/namespaces/proxy-8844/services/proxy-service-9fj6r:portname2/proxy/: bar (200; 4.629042ms) Mar 8 11:22:51.109: INFO: (10) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442:1080/proxy/: test<... (200; 4.794068ms) Mar 8 11:22:51.111: INFO: (10) /api/v1/namespaces/proxy-8844/services/http:proxy-service-9fj6r:portname1/proxy/: foo (200; 6.697347ms) Mar 8 11:22:51.111: INFO: (10) /api/v1/namespaces/proxy-8844/services/https:proxy-service-9fj6r:tlsportname2/proxy/: tls qux (200; 6.661956ms) Mar 8 11:22:51.111: INFO: (10) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442:162/proxy/: bar (200; 6.639282ms) Mar 8 11:22:51.111: INFO: (10) /api/v1/namespaces/proxy-8844/pods/http:proxy-service-9fj6r-ng442:1080/proxy/: ... (200; 6.726889ms) Mar 8 11:22:51.111: INFO: (10) /api/v1/namespaces/proxy-8844/pods/http:proxy-service-9fj6r-ng442:160/proxy/: foo (200; 6.77571ms) Mar 8 11:22:51.111: INFO: (10) /api/v1/namespaces/proxy-8844/services/http:proxy-service-9fj6r:portname2/proxy/: bar (200; 6.858747ms) Mar 8 11:22:51.111: INFO: (10) /api/v1/namespaces/proxy-8844/pods/https:proxy-service-9fj6r-ng442:443/proxy/: test (200; 3.039543ms) Mar 8 11:22:51.114: INFO: (11) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442:160/proxy/: foo (200; 3.383237ms) Mar 8 11:22:51.117: INFO: (11) /api/v1/namespaces/proxy-8844/services/proxy-service-9fj6r:portname1/proxy/: foo (200; 5.632519ms) Mar 8 11:22:51.117: INFO: (11) /api/v1/namespaces/proxy-8844/pods/http:proxy-service-9fj6r-ng442:162/proxy/: bar (200; 5.598492ms) Mar 8 11:22:51.117: INFO: (11) /api/v1/namespaces/proxy-8844/pods/https:proxy-service-9fj6r-ng442:443/proxy/: ... (200; 5.739563ms) Mar 8 11:22:51.117: INFO: (11) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442:1080/proxy/: test<... (200; 5.731213ms) Mar 8 11:22:51.117: INFO: (11) /api/v1/namespaces/proxy-8844/services/http:proxy-service-9fj6r:portname2/proxy/: bar (200; 6.030926ms) Mar 8 11:22:51.119: INFO: (12) /api/v1/namespaces/proxy-8844/pods/http:proxy-service-9fj6r-ng442:162/proxy/: bar (200; 2.091788ms) Mar 8 11:22:51.120: INFO: (12) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442/proxy/: test (200; 2.421536ms) Mar 8 11:22:51.122: INFO: (12) /api/v1/namespaces/proxy-8844/pods/http:proxy-service-9fj6r-ng442:1080/proxy/: ... 
(200; 4.561602ms) Mar 8 11:22:51.122: INFO: (12) /api/v1/namespaces/proxy-8844/pods/https:proxy-service-9fj6r-ng442:460/proxy/: tls baz (200; 4.596666ms) Mar 8 11:22:51.122: INFO: (12) /api/v1/namespaces/proxy-8844/pods/https:proxy-service-9fj6r-ng442:462/proxy/: tls qux (200; 4.711676ms) Mar 8 11:22:51.122: INFO: (12) /api/v1/namespaces/proxy-8844/services/https:proxy-service-9fj6r:tlsportname2/proxy/: tls qux (200; 4.985921ms) Mar 8 11:22:51.122: INFO: (12) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442:1080/proxy/: test<... (200; 5.276895ms) Mar 8 11:22:51.123: INFO: (12) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442:162/proxy/: bar (200; 5.365063ms) Mar 8 11:22:51.123: INFO: (12) /api/v1/namespaces/proxy-8844/pods/https:proxy-service-9fj6r-ng442:443/proxy/: test (200; 5.416589ms) Mar 8 11:22:51.129: INFO: (13) /api/v1/namespaces/proxy-8844/services/http:proxy-service-9fj6r:portname1/proxy/: foo (200; 5.619021ms) Mar 8 11:22:51.129: INFO: (13) /api/v1/namespaces/proxy-8844/services/proxy-service-9fj6r:portname2/proxy/: bar (200; 5.651284ms) Mar 8 11:22:51.130: INFO: (13) /api/v1/namespaces/proxy-8844/services/proxy-service-9fj6r:portname1/proxy/: foo (200; 5.96358ms) Mar 8 11:22:51.130: INFO: (13) /api/v1/namespaces/proxy-8844/pods/https:proxy-service-9fj6r-ng442:443/proxy/: test<... (200; 5.971518ms) Mar 8 11:22:51.130: INFO: (13) /api/v1/namespaces/proxy-8844/services/http:proxy-service-9fj6r:portname2/proxy/: bar (200; 6.112884ms) Mar 8 11:22:51.130: INFO: (13) /api/v1/namespaces/proxy-8844/pods/https:proxy-service-9fj6r-ng442:460/proxy/: tls baz (200; 6.142275ms) Mar 8 11:22:51.130: INFO: (13) /api/v1/namespaces/proxy-8844/pods/http:proxy-service-9fj6r-ng442:1080/proxy/: ... (200; 6.295092ms) Mar 8 11:22:51.130: INFO: (13) /api/v1/namespaces/proxy-8844/pods/https:proxy-service-9fj6r-ng442:462/proxy/: tls qux (200; 6.243976ms) Mar 8 11:22:51.130: INFO: (13) /api/v1/namespaces/proxy-8844/services/https:proxy-service-9fj6r:tlsportname1/proxy/: tls baz (200; 6.225097ms) Mar 8 11:22:51.130: INFO: (13) /api/v1/namespaces/proxy-8844/pods/http:proxy-service-9fj6r-ng442:160/proxy/: foo (200; 6.213659ms) Mar 8 11:22:51.133: INFO: (14) /api/v1/namespaces/proxy-8844/pods/http:proxy-service-9fj6r-ng442:160/proxy/: foo (200; 3.603204ms) Mar 8 11:22:51.134: INFO: (14) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442:1080/proxy/: test<... (200; 4.294576ms) Mar 8 11:22:51.137: INFO: (14) /api/v1/namespaces/proxy-8844/pods/https:proxy-service-9fj6r-ng442:462/proxy/: tls qux (200; 7.115138ms) Mar 8 11:22:51.139: INFO: (14) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442:160/proxy/: foo (200; 8.787791ms) Mar 8 11:22:51.141: INFO: (14) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442:162/proxy/: bar (200; 10.615563ms) Mar 8 11:22:51.141: INFO: (14) /api/v1/namespaces/proxy-8844/services/proxy-service-9fj6r:portname1/proxy/: foo (200; 10.736728ms) Mar 8 11:22:51.141: INFO: (14) /api/v1/namespaces/proxy-8844/services/proxy-service-9fj6r:portname2/proxy/: bar (200; 10.740034ms) Mar 8 11:22:51.141: INFO: (14) /api/v1/namespaces/proxy-8844/pods/http:proxy-service-9fj6r-ng442:1080/proxy/: ... 
(200; 11.032501ms) Mar 8 11:22:51.141: INFO: (14) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442/proxy/: test (200; 11.154548ms) Mar 8 11:22:51.141: INFO: (14) /api/v1/namespaces/proxy-8844/pods/https:proxy-service-9fj6r-ng442:443/proxy/: test (200; 4.932527ms) Mar 8 11:22:51.148: INFO: (15) /api/v1/namespaces/proxy-8844/pods/http:proxy-service-9fj6r-ng442:160/proxy/: foo (200; 4.893569ms) Mar 8 11:22:51.148: INFO: (15) /api/v1/namespaces/proxy-8844/services/http:proxy-service-9fj6r:portname2/proxy/: bar (200; 5.041783ms) Mar 8 11:22:51.148: INFO: (15) /api/v1/namespaces/proxy-8844/services/http:proxy-service-9fj6r:portname1/proxy/: foo (200; 5.060195ms) Mar 8 11:22:51.148: INFO: (15) /api/v1/namespaces/proxy-8844/services/https:proxy-service-9fj6r:tlsportname1/proxy/: tls baz (200; 5.564841ms) Mar 8 11:22:51.148: INFO: (15) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442:160/proxy/: foo (200; 5.589635ms) Mar 8 11:22:51.148: INFO: (15) /api/v1/namespaces/proxy-8844/pods/https:proxy-service-9fj6r-ng442:460/proxy/: tls baz (200; 5.639115ms) Mar 8 11:22:51.148: INFO: (15) /api/v1/namespaces/proxy-8844/services/https:proxy-service-9fj6r:tlsportname2/proxy/: tls qux (200; 5.655614ms) Mar 8 11:22:51.148: INFO: (15) /api/v1/namespaces/proxy-8844/services/proxy-service-9fj6r:portname2/proxy/: bar (200; 5.6054ms) Mar 8 11:22:51.149: INFO: (15) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442:1080/proxy/: test<... (200; 5.786198ms) Mar 8 11:22:51.149: INFO: (15) /api/v1/namespaces/proxy-8844/services/proxy-service-9fj6r:portname1/proxy/: foo (200; 5.826314ms) Mar 8 11:22:51.149: INFO: (15) /api/v1/namespaces/proxy-8844/pods/http:proxy-service-9fj6r-ng442:1080/proxy/: ... (200; 5.788524ms) Mar 8 11:22:51.149: INFO: (15) /api/v1/namespaces/proxy-8844/pods/http:proxy-service-9fj6r-ng442:162/proxy/: bar (200; 5.81754ms) Mar 8 11:22:51.153: INFO: (16) /api/v1/namespaces/proxy-8844/pods/http:proxy-service-9fj6r-ng442:160/proxy/: foo (200; 4.14797ms) Mar 8 11:22:51.153: INFO: (16) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442/proxy/: test (200; 4.18666ms) Mar 8 11:22:51.153: INFO: (16) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442:162/proxy/: bar (200; 4.214354ms) Mar 8 11:22:51.153: INFO: (16) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442:1080/proxy/: test<... (200; 4.275685ms) Mar 8 11:22:51.154: INFO: (16) /api/v1/namespaces/proxy-8844/services/http:proxy-service-9fj6r:portname1/proxy/: foo (200; 5.105084ms) Mar 8 11:22:51.154: INFO: (16) /api/v1/namespaces/proxy-8844/pods/http:proxy-service-9fj6r-ng442:162/proxy/: bar (200; 5.103331ms) Mar 8 11:22:51.154: INFO: (16) /api/v1/namespaces/proxy-8844/services/http:proxy-service-9fj6r:portname2/proxy/: bar (200; 5.631988ms) Mar 8 11:22:51.154: INFO: (16) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442:160/proxy/: foo (200; 5.54626ms) Mar 8 11:22:51.154: INFO: (16) /api/v1/namespaces/proxy-8844/services/proxy-service-9fj6r:portname2/proxy/: bar (200; 5.630928ms) Mar 8 11:22:51.154: INFO: (16) /api/v1/namespaces/proxy-8844/services/proxy-service-9fj6r:portname1/proxy/: foo (200; 5.672162ms) Mar 8 11:22:51.155: INFO: (16) /api/v1/namespaces/proxy-8844/services/https:proxy-service-9fj6r:tlsportname2/proxy/: tls qux (200; 5.746848ms) Mar 8 11:22:51.155: INFO: (16) /api/v1/namespaces/proxy-8844/pods/http:proxy-service-9fj6r-ng442:1080/proxy/: ... 
(200; 5.718485ms) Mar 8 11:22:51.155: INFO: (16) /api/v1/namespaces/proxy-8844/pods/https:proxy-service-9fj6r-ng442:462/proxy/: tls qux (200; 5.734172ms) Mar 8 11:22:51.155: INFO: (16) /api/v1/namespaces/proxy-8844/pods/https:proxy-service-9fj6r-ng442:443/proxy/: ... (200; 5.110431ms) Mar 8 11:22:51.160: INFO: (17) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442:1080/proxy/: test<... (200; 5.102727ms) Mar 8 11:22:51.160: INFO: (17) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442:160/proxy/: foo (200; 5.180457ms) Mar 8 11:22:51.160: INFO: (17) /api/v1/namespaces/proxy-8844/services/http:proxy-service-9fj6r:portname2/proxy/: bar (200; 5.215409ms) Mar 8 11:22:51.160: INFO: (17) /api/v1/namespaces/proxy-8844/services/http:proxy-service-9fj6r:portname1/proxy/: foo (200; 5.177844ms) Mar 8 11:22:51.160: INFO: (17) /api/v1/namespaces/proxy-8844/services/proxy-service-9fj6r:portname2/proxy/: bar (200; 5.197607ms) Mar 8 11:22:51.160: INFO: (17) /api/v1/namespaces/proxy-8844/pods/https:proxy-service-9fj6r-ng442:462/proxy/: tls qux (200; 5.256732ms) Mar 8 11:22:51.160: INFO: (17) /api/v1/namespaces/proxy-8844/services/https:proxy-service-9fj6r:tlsportname2/proxy/: tls qux (200; 5.213811ms) Mar 8 11:22:51.160: INFO: (17) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442/proxy/: test (200; 5.243447ms) Mar 8 11:22:51.160: INFO: (17) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442:162/proxy/: bar (200; 5.29338ms) Mar 8 11:22:51.160: INFO: (17) /api/v1/namespaces/proxy-8844/services/proxy-service-9fj6r:portname1/proxy/: foo (200; 5.423498ms) Mar 8 11:22:51.163: INFO: (18) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442:1080/proxy/: test<... (200; 2.708451ms) Mar 8 11:22:51.163: INFO: (18) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442/proxy/: test (200; 2.777638ms) Mar 8 11:22:51.164: INFO: (18) /api/v1/namespaces/proxy-8844/pods/http:proxy-service-9fj6r-ng442:1080/proxy/: ... (200; 4.159577ms) Mar 8 11:22:51.164: INFO: (18) /api/v1/namespaces/proxy-8844/pods/http:proxy-service-9fj6r-ng442:160/proxy/: foo (200; 4.161234ms) Mar 8 11:22:51.165: INFO: (18) /api/v1/namespaces/proxy-8844/pods/http:proxy-service-9fj6r-ng442:162/proxy/: bar (200; 5.013853ms) Mar 8 11:22:51.166: INFO: (18) /api/v1/namespaces/proxy-8844/services/https:proxy-service-9fj6r:tlsportname2/proxy/: tls qux (200; 5.344047ms) Mar 8 11:22:51.166: INFO: (18) /api/v1/namespaces/proxy-8844/services/proxy-service-9fj6r:portname2/proxy/: bar (200; 5.330853ms) Mar 8 11:22:51.166: INFO: (18) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442:162/proxy/: bar (200; 5.321479ms) Mar 8 11:22:51.166: INFO: (18) /api/v1/namespaces/proxy-8844/pods/https:proxy-service-9fj6r-ng442:460/proxy/: tls baz (200; 5.282ms) Mar 8 11:22:51.166: INFO: (18) /api/v1/namespaces/proxy-8844/pods/https:proxy-service-9fj6r-ng442:462/proxy/: tls qux (200; 5.3271ms) Mar 8 11:22:51.166: INFO: (18) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442:160/proxy/: foo (200; 5.402157ms) Mar 8 11:22:51.166: INFO: (18) /api/v1/namespaces/proxy-8844/pods/https:proxy-service-9fj6r-ng442:443/proxy/: test (200; 2.934093ms) Mar 8 11:22:51.169: INFO: (19) /api/v1/namespaces/proxy-8844/pods/http:proxy-service-9fj6r-ng442:162/proxy/: bar (200; 2.907917ms) Mar 8 11:22:51.169: INFO: (19) /api/v1/namespaces/proxy-8844/pods/http:proxy-service-9fj6r-ng442:160/proxy/: foo (200; 3.200422ms) Mar 8 11:22:51.169: INFO: (19) /api/v1/namespaces/proxy-8844/pods/https:proxy-service-9fj6r-ng442:443/proxy/: ... 
(200; 3.241429ms) Mar 8 11:22:51.169: INFO: (19) /api/v1/namespaces/proxy-8844/pods/https:proxy-service-9fj6r-ng442:460/proxy/: tls baz (200; 3.307209ms) Mar 8 11:22:51.169: INFO: (19) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442:1080/proxy/: test<... (200; 3.23804ms) Mar 8 11:22:51.169: INFO: (19) /api/v1/namespaces/proxy-8844/pods/proxy-service-9fj6r-ng442:162/proxy/: bar (200; 3.314718ms) Mar 8 11:22:51.171: INFO: (19) /api/v1/namespaces/proxy-8844/services/http:proxy-service-9fj6r:portname1/proxy/: foo (200; 4.519281ms) Mar 8 11:22:51.171: INFO: (19) /api/v1/namespaces/proxy-8844/services/proxy-service-9fj6r:portname2/proxy/: bar (200; 4.821599ms) Mar 8 11:22:51.171: INFO: (19) /api/v1/namespaces/proxy-8844/services/https:proxy-service-9fj6r:tlsportname1/proxy/: tls baz (200; 4.994011ms) Mar 8 11:22:51.171: INFO: (19) /api/v1/namespaces/proxy-8844/services/http:proxy-service-9fj6r:portname2/proxy/: bar (200; 5.246233ms) Mar 8 11:22:51.172: INFO: (19) /api/v1/namespaces/proxy-8844/services/https:proxy-service-9fj6r:tlsportname2/proxy/: tls qux (200; 5.41012ms) Mar 8 11:22:51.172: INFO: (19) /api/v1/namespaces/proxy-8844/services/proxy-service-9fj6r:portname1/proxy/: foo (200; 5.493078ms) STEP: deleting ReplicationController proxy-service-9fj6r in namespace proxy-8844, will wait for the garbage collector to delete the pods Mar 8 11:22:51.227: INFO: Deleting ReplicationController proxy-service-9fj6r took: 3.239677ms Mar 8 11:22:51.327: INFO: Terminating ReplicationController proxy-service-9fj6r pods took: 100.190717ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:22:59.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-8844" for this suite. 
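Each of the 320 attempts above is an HTTP request routed through the API server's proxy subresource; the <scheme>:<name>:<port> triple embedded in the URL (for example http:proxy-service-9fj6r:portname1) selects the scheme and the named port of the target service or pod. A minimal sketch of one such request with client-go is below; DoRaw taking a context is the newer client-go form, the v1.17 client calls DoRaw with no arguments.

```go
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Equivalent to:
	// GET /api/v1/namespaces/proxy-8844/services/http:proxy-service-9fj6r:portname1/proxy/
	body, err := client.CoreV1().RESTClient().Get().
		Namespace("proxy-8844").
		Resource("services").
		Name("http:proxy-service-9fj6r:portname1").
		SubResource("proxy").
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // "foo", "bar", "tls baz", etc. in the runs above
}
```

Substituting Resource("pods") and a pod name yields the pods/... variants in the log, and the https: scheme prefix produces the tls-port attempts.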
• [SLOW TEST:16.690 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":278,"completed":219,"skipped":3560,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:22:59.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:23:03.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-702" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":220,"skipped":3560,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:23:03.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium Mar 8 11:23:03.693: INFO: Waiting up to 5m0s for pod "pod-49e1405b-2e4b-4031-8c32-52743a8477b7" in namespace "emptydir-7239" to be "success or failure" Mar 8 11:23:03.696: INFO: Pod "pod-49e1405b-2e4b-4031-8c32-52743a8477b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.487674ms Mar 8 11:23:05.700: INFO: Pod "pod-49e1405b-2e4b-4031-8c32-52743a8477b7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006605076s Mar 8 11:23:07.703: INFO: Pod "pod-49e1405b-2e4b-4031-8c32-52743a8477b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009781567s STEP: Saw pod success Mar 8 11:23:07.703: INFO: Pod "pod-49e1405b-2e4b-4031-8c32-52743a8477b7" satisfied condition "success or failure" Mar 8 11:23:07.706: INFO: Trying to get logs from node kind-control-plane pod pod-49e1405b-2e4b-4031-8c32-52743a8477b7 container test-container: STEP: delete the pod Mar 8 11:23:07.737: INFO: Waiting for pod pod-49e1405b-2e4b-4031-8c32-52743a8477b7 to disappear Mar 8 11:23:07.747: INFO: Pod pod-49e1405b-2e4b-4031-8c32-52743a8477b7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:23:07.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7239" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":221,"skipped":3587,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:23:07.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 8 11:23:07.827: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 8 11:23:07.843: INFO: Waiting for terminating namespaces to be deleted... 
Mar 8 11:23:07.845: INFO: Logging pods the kubelet thinks are on node kind-control-plane before test Mar 8 11:23:07.853: INFO: etcd-kind-control-plane from kube-system started at 2020-03-08 10:17:29 +0000 UTC (1 container status recorded) Mar 8 11:23:07.853: INFO: Container etcd ready: true, restart count 0 Mar 8 11:23:07.853: INFO: kube-controller-manager-kind-control-plane from kube-system started at 2020-03-08 10:17:29 +0000 UTC (1 container status recorded) Mar 8 11:23:07.853: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 8 11:23:07.853: INFO: kube-proxy-9qrbc from kube-system started at 2020-03-08 10:17:45 +0000 UTC (1 container status recorded) Mar 8 11:23:07.853: INFO: Container kube-proxy ready: true, restart count 0 Mar 8 11:23:07.853: INFO: kindnet-rznts from kube-system started at 2020-03-08 10:17:45 +0000 UTC (1 container status recorded) Mar 8 11:23:07.853: INFO: Container kindnet-cni ready: true, restart count 0 Mar 8 11:23:07.853: INFO: test-host-network-pod from e2e-kubelet-etc-hosts-3784 started at 2020-03-08 11:22:22 +0000 UTC (2 container statuses recorded) Mar 8 11:23:07.853: INFO: Container busybox-1 ready: false, restart count 0 Mar 8 11:23:07.853: INFO: Container busybox-2 ready: false, restart count 0 Mar 8 11:23:07.853: INFO: kube-apiserver-kind-control-plane from kube-system started at 2020-03-08 10:17:29 +0000 UTC (1 container status recorded) Mar 8 11:23:07.853: INFO: Container kube-apiserver ready: true, restart count 0 Mar 8 11:23:07.853: INFO: local-path-provisioner-7745554f7f-5f2b8 from local-path-storage started at 2020-03-08 10:17:49 +0000 UTC (1 container status recorded) Mar 8 11:23:07.853: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 8 11:23:07.853: INFO: coredns-6955765f44-8lfgq from kube-system started at 2020-03-08 10:17:52 +0000 UTC (1 container status recorded) Mar 8 11:23:07.853: INFO: Container coredns ready: true, restart count 0 Mar 8 11:23:07.853: INFO: busybox-readonly-fsbcad6247-5b3e-4437-91a4-5e4c87a7cbc9 from kubelet-test-702 started at 2020-03-08 11:22:59 +0000 UTC (1 container status recorded) Mar 8 11:23:07.853: INFO: Container busybox-readonly-fsbcad6247-5b3e-4437-91a4-5e4c87a7cbc9 ready: true, restart count 0 Mar 8 11:23:07.853: INFO: kube-scheduler-kind-control-plane from kube-system started at 2020-03-08 10:17:29 +0000 UTC (1 container status recorded) Mar 8 11:23:07.853: INFO: Container kube-scheduler ready: true, restart count 0 Mar 8 11:23:07.853: INFO: coredns-6955765f44-2ncc6 from kube-system started at 2020-03-08 10:17:49 +0000 UTC (1 container status recorded) Mar 8 11:23:07.853: INFO: Container coredns ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-21cddc48-34cb-4621-a015-c091218fa808 90 STEP: Trying to create a pod (pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod (pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod (pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-21cddc48-34cb-4621-a015-c091218fa808 off the node kind-control-plane STEP: verifying the node doesn't have the label kubernetes.io/e2e-21cddc48-34cb-4621-a015-c091218fa808 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:23:16.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5131" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:8.293 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":222,"skipped":3590,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:23:16.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1713 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 8 11:23:16.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-5492' Mar 8 11:23:16.233: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version.
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 8 11:23:16.233: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1718 Mar 8 11:23:18.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-5492' Mar 8 11:23:18.447: INFO: stderr: "" Mar 8 11:23:18.447: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:23:18.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5492" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":223,"skipped":3595,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:23:18.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:23:25.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3672" for this suite. • [SLOW TEST:7.093 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":278,"completed":224,"skipped":3609,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:23:25.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Mar 8 11:23:25.638: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8237 /api/v1/namespaces/watch-8237/configmaps/e2e-watch-test-label-changed 66a90337-da1b-4332-b78f-642228aa4bb1 25342 0 2020-03-08 11:23:25 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 8 11:23:25.638: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8237 /api/v1/namespaces/watch-8237/configmaps/e2e-watch-test-label-changed 66a90337-da1b-4332-b78f-642228aa4bb1 25343 0 2020-03-08 11:23:25 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Mar 8 11:23:25.638: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8237 /api/v1/namespaces/watch-8237/configmaps/e2e-watch-test-label-changed 66a90337-da1b-4332-b78f-642228aa4bb1 25344 0 2020-03-08 11:23:25 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Mar 8 11:23:35.676: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8237 /api/v1/namespaces/watch-8237/configmaps/e2e-watch-test-label-changed 66a90337-da1b-4332-b78f-642228aa4bb1 25388 0 2020-03-08 11:23:25 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 8 11:23:35.676: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8237 /api/v1/namespaces/watch-8237/configmaps/e2e-watch-test-label-changed 66a90337-da1b-4332-b78f-642228aa4bb1 25389 0 2020-03-08 11:23:25 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 
3,},BinaryData:map[string][]byte{},} Mar 8 11:23:35.676: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8237 /api/v1/namespaces/watch-8237/configmaps/e2e-watch-test-label-changed 66a90337-da1b-4332-b78f-642228aa4bb1 25390 0 2020-03-08 11:23:25 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:23:35.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8237" for this suite. • [SLOW TEST:10.135 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":225,"skipped":3614,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:23:35.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 8 11:23:35.746: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cbc02c87-9f6c-4e4c-a94f-0d17c07e1875" in namespace "projected-5649" to be "success or failure" Mar 8 11:23:35.762: INFO: Pod "downwardapi-volume-cbc02c87-9f6c-4e4c-a94f-0d17c07e1875": Phase="Pending", Reason="", readiness=false. Elapsed: 15.21415ms Mar 8 11:23:37.765: INFO: Pod "downwardapi-volume-cbc02c87-9f6c-4e4c-a94f-0d17c07e1875": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018973745s Mar 8 11:23:39.769: INFO: Pod "downwardapi-volume-cbc02c87-9f6c-4e4c-a94f-0d17c07e1875": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.022725455s STEP: Saw pod success Mar 8 11:23:39.769: INFO: Pod "downwardapi-volume-cbc02c87-9f6c-4e4c-a94f-0d17c07e1875" satisfied condition "success or failure" Mar 8 11:23:39.772: INFO: Trying to get logs from node kind-control-plane pod downwardapi-volume-cbc02c87-9f6c-4e4c-a94f-0d17c07e1875 container client-container: STEP: delete the pod Mar 8 11:23:39.804: INFO: Waiting for pod downwardapi-volume-cbc02c87-9f6c-4e4c-a94f-0d17c07e1875 to disappear Mar 8 11:23:39.810: INFO: Pod downwardapi-volume-cbc02c87-9f6c-4e4c-a94f-0d17c07e1875 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:23:39.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5649" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":226,"skipped":3617,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:23:39.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-54e8baeb-b7cc-445e-b8d4-2e9539d12aa6 in namespace container-probe-1152 Mar 8 11:23:41.928: INFO: Started pod test-webserver-54e8baeb-b7cc-445e-b8d4-2e9539d12aa6 in namespace container-probe-1152 STEP: checking the pod's current state and verifying that restartCount is present Mar 8 11:23:41.930: INFO: Initial restart count of pod test-webserver-54e8baeb-b7cc-445e-b8d4-2e9539d12aa6 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:27:42.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1152" for this suite. 
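The pod this test created carries an HTTP liveness probe that keeps succeeding, and the assertion is simply that restartCount stays at 0 over the roughly four-minute observation window. A minimal sketch of such a probe follows; the image, port, and probe thresholds are assumptions for illustration rather than the suite's exact values, and Handler is the v1.17-era field name (newer API versions renamed it ProbeHandler):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// livenessPod sketches a pod whose HTTP liveness probe always succeeds,
// so the kubelet should never restart the container.
func livenessPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-webserver"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test-webserver",
				Image: "k8s.gcr.io/test-webserver", // assumed image, not taken from the log
				Ports: []corev1.ContainerPort{{ContainerPort: 80}},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/healthz",
							Port: intstr.FromInt(80),
						},
					},
					InitialDelaySeconds: 15, // assumed timings
					FailureThreshold:    1,
				},
			}},
		},
	}
}

func main() {
	p := livenessPod()
	fmt.Println("probe path:", p.Spec.Containers[0].LivenessProbe.HTTPGet.Path)
}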
• [SLOW TEST:243.079 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":227,"skipped":3619,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:27:42.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-9a4f810a-a3a8-4bff-bb5b-e3e8ad6e6f76 in namespace container-probe-2498 Mar 8 11:27:45.009: INFO: Started pod busybox-9a4f810a-a3a8-4bff-bb5b-e3e8ad6e6f76 in namespace container-probe-2498 STEP: checking the pod's current state and verifying that restartCount is present Mar 8 11:27:45.012: INFO: Initial restart count of pod busybox-9a4f810a-a3a8-4bff-bb5b-e3e8ad6e6f76 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:31:45.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2498" for this suite. 
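The test above follows the same pattern but with an exec-style probe: the container creates /tmp/health at startup, the kubelet periodically runs `cat /tmp/health`, and because the file keeps existing the restart count stays 0. A rough sketch under the same caveats (the busybox command and probe timings are assumptions; Handler is the v1.17-era field name):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// execLivenessPod sketches a pod whose exec liveness probe keeps passing
// because the probed file is created once and never removed.
func execLivenessPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-exec-liveness"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox",
				// Create the health file up front, then stay alive.
				Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{
							// The probe named in the test: succeeds while the file exists.
							Command: []string{"cat", "/tmp/health"},
						},
					},
					InitialDelaySeconds: 5, // assumed timings
					PeriodSeconds:       5,
				},
			}},
		},
	}
}

func main() {
	p := execLivenessPod()
	fmt.Println("probe command:", p.Spec.Containers[0].LivenessProbe.Exec.Command)
}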
• [SLOW TEST:242.925 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":228,"skipped":3627,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:31:45.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 11:31:46.439: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 11:31:49.474: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:32:01.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7054" for this suite. STEP: Destroying namespace "webhook-7054-markers" for this suite. 
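The four registrations above vary only timeoutSeconds and failurePolicy against a webhook backend that deliberately takes 5s to respond: a 1s timeout with failurePolicy Fail rejects the request, while Ignore, a longer timeout, or the v1 default of 10s all let it through. A hedged sketch of the Ignore variant follows; the configuration name, webhook name, backend path, and caBundle handling are placeholders, not the suite's helpers:

package main

import (
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// slowWebhookConfig sketches a webhook whose backend is slower (5s) than the
// configured timeoutSeconds (1s); with failurePolicy Ignore the apiserver
// treats the timeout as an allow rather than rejecting the request.
func slowWebhookConfig(caBundle []byte) *admissionregistrationv1.ValidatingWebhookConfiguration {
	timeout := int32(1)
	ignore := admissionregistrationv1.Ignore
	none := admissionregistrationv1.SideEffectClassNone
	path := "/always-allow-delay-5s" // hypothetical backend path
	return &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "slow-webhook-example"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "slow.example.com", // hypothetical webhook name
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-7054",
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
				CABundle: caBundle,
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"configmaps"},
				},
			}},
			TimeoutSeconds:          &timeout, // 1s, shorter than the 5s backend delay
			FailurePolicy:           &ignore,  // Ignore: a timeout does not reject the request
			SideEffects:             &none,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
}

func main() {
	cfg := slowWebhookConfig(nil)
	fmt.Println(cfg.Name, "timeoutSeconds:", *cfg.Webhooks[0].TimeoutSeconds)
}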
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.935 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":229,"skipped":3648,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:32:01.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-52ee74fe-e2f1-4f10-90ca-2668d9886bae [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:32:01.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9453" for this suite. 
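What this test asserts is pure API-server validation: a Secret whose data map contains an empty string as a key is rejected at create time, so the test passes when Create returns an error. A minimal sketch, with the namespace and object name as placeholders; Create(ctx, obj, opts) is the client-go v0.18+ signature, whereas the v1.17-era client took just the object:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-emptykey-example"},
		Data:       map[string][]byte{"": []byte("value-1")}, // empty key: fails validation
	}
	// Expect a validation error; the conformance test passes only when this fails.
	_, err = clientset.CoreV1().Secrets("default").Create(context.TODO(), secret, metav1.CreateOptions{})
	fmt.Println("expected validation error:", err)
}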
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":230,"skipped":3648,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:32:01.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 11:32:01.875: INFO: Creating deployment "webserver-deployment" Mar 8 11:32:01.880: INFO: Waiting for observed generation 1 Mar 8 11:32:03.919: INFO: Waiting for all required pods to come up Mar 8 11:32:03.925: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Mar 8 11:32:09.936: INFO: Waiting for deployment "webserver-deployment" to complete Mar 8 11:32:09.943: INFO: Updating deployment "webserver-deployment" with a non-existent image Mar 8 11:32:09.950: INFO: Updating deployment webserver-deployment Mar 8 11:32:09.950: INFO: Waiting for observed generation 2 Mar 8 11:32:11.962: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Mar 8 11:32:11.964: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Mar 8 11:32:11.967: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Mar 8 11:32:11.974: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Mar 8 11:32:11.974: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Mar 8 11:32:11.976: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Mar 8 11:32:11.981: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Mar 8 11:32:11.981: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Mar 8 11:32:11.987: INFO: Updating deployment webserver-deployment Mar 8 11:32:11.987: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Mar 8 11:32:12.035: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Mar 8 11:32:12.047: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 8 11:32:12.185: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-2469 /apis/apps/v1/namespaces/deployment-2469/deployments/webserver-deployment e75d0846-bea3-4858-80c1-00ed9412a730 27142 3 2020-03-08 11:32:01 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] 
[]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000696998 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-03-08 11:32:10 +0000 UTC,LastTransitionTime:2020-03-08 11:32:01 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-08 11:32:12 +0000 UTC,LastTransitionTime:2020-03-08 11:32:12 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Mar 8 11:32:12.218: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-2469 /apis/apps/v1/namespaces/deployment-2469/replicasets/webserver-deployment-c7997dcc8 93445489-09c9-4465-8b41-1be985f8a6f8 27174 3 2020-03-08 11:32:09 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment e75d0846-bea3-4858-80c1-00ed9412a730 0xc002c90717 0xc002c90718}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002c907e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 8 11:32:12.218: INFO: All old ReplicaSets of Deployment "webserver-deployment": Mar 8 11:32:12.218: INFO: 
&ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-2469 /apis/apps/v1/namespaces/deployment-2469/replicasets/webserver-deployment-595b5b9587 b86d4a07-1940-4c2b-8640-a01ee2af7e5d 27176 3 2020-03-08 11:32:01 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment e75d0846-bea3-4858-80c1-00ed9412a730 0xc002c905c7 0xc002c905c8}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002c90658 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Mar 8 11:32:12.263: INFO: Pod "webserver-deployment-595b5b9587-2blf2" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2blf2 webserver-deployment-595b5b9587- deployment-2469 /api/v1/namespaces/deployment-2469/pods/webserver-deployment-595b5b9587-2blf2 af1cf30e-d57a-412d-9239-56ef38e78702 26986 0 2020-03-08 11:32:01 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b86d4a07-1940-4c2b-8640-a01ee2af7e5d 0xc00348aaa7 0xc00348aaa8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6hkzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6hkzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6hkzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:32:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:32:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:32:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:32:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.2,PodIP:10.244.0.123,StartTime:2020-03-08 11:32:01 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-08 11:32:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://cda8ae74b9ff456a018f4a2703c67a577a9c57a14403f865462ec1fcae4111d2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.0.123,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 11:32:12.263: INFO: Pod "webserver-deployment-595b5b9587-5cjvm" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-5cjvm webserver-deployment-595b5b9587- deployment-2469 /api/v1/namespaces/deployment-2469/pods/webserver-deployment-595b5b9587-5cjvm 85514987-b391-4ac9-8e2e-095442a43aaf 27185 0 2020-03-08 11:32:11 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b86d4a07-1940-4c2b-8640-a01ee2af7e5d 0xc00348ac30 0xc00348ac31}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6hkzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6hkzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6hkzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,V
alue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:32:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:32:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:32:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:32:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.2,PodIP:,StartTime:2020-03-08 11:32:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 11:32:12.263: INFO: Pod "webserver-deployment-595b5b9587-6fcq9" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6fcq9 webserver-deployment-595b5b9587- deployment-2469 /api/v1/namespaces/deployment-2469/pods/webserver-deployment-595b5b9587-6fcq9 b22f4a14-6d44-466f-8482-ff770ff30090 27159 0 2020-03-08 11:32:12 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b86d4a07-1940-4c2b-8640-a01ee2af7e5d 0xc00348ad70 0xc00348ad71}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6hkzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6hkzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6hkzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:32:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 11:32:12.263: INFO: Pod "webserver-deployment-595b5b9587-7bcz5" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7bcz5 webserver-deployment-595b5b9587- deployment-2469 /api/v1/namespaces/deployment-2469/pods/webserver-deployment-595b5b9587-7bcz5 f206dd3f-2296-4449-8e50-fa726e611053 27180 0 2020-03-08 11:32:12 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 b86d4a07-1940-4c2b-8640-a01ee2af7e5d 0xc00348ae80 0xc00348ae81}] [] []},Spec:PodSpec{Containers:[{httpd docker.io/library/httpd:2.4.38-alpine}],NodeName:kind-control-plane,…},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 2020-03-08 11:32:12 +0000 UTC}],},}
Mar 8 11:32:12.264: INFO: Pod "webserver-deployment-595b5b9587-96d29" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-96d29 deployment-2469 4f983746-c634-47a5-9804-abe2f1837f64 27179 0 2020-03-08 11:32:12 +0000 UTC …},Spec:PodSpec{…},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 2020-03-08 11:32:12 +0000 UTC}],},}
Mar 8 11:32:12.264: INFO: Pod "webserver-deployment-595b5b9587-c5xjs" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-c5xjs deployment-2469 253d8bdf-5c7a-4506-911f-5af8780c5c37 27160 0 2020-03-08 11:32:12 +0000 UTC …},Spec:PodSpec{…},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 2020-03-08 11:32:12 +0000 UTC}],},}
Mar 8 11:32:12.264: INFO: Pod "webserver-deployment-595b5b9587-ctt4v" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-ctt4v deployment-2469 d35e0cc7-a9a9-49a9-bc1c-7a5b9ab20fca 27043 0 2020-03-08 11:32:01 +0000 UTC …},Spec:PodSpec{…},Status:PodStatus{Phase:Running,Conditions:[{Ready True 2020-03-08 11:32:07 +0000 UTC}],HostIP:172.17.0.2,PodIP:10.244.0.132,},}
Mar 8 11:32:12.264: INFO: Pod "webserver-deployment-595b5b9587-dqtq2" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dqtq2 deployment-2469 828c3248-46ee-40fd-8e03-ee64de556cd4 27175 0 2020-03-08 11:32:12 +0000 UTC …},Spec:PodSpec{…},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 2020-03-08 11:32:12 +0000 UTC}],},}
Mar 8 11:32:12.264: INFO: Pod "webserver-deployment-595b5b9587-dv4lw" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dv4lw deployment-2469 71ca18ef-7da2-4967-96aa-8ab0897888f8 27163 0 2020-03-08 11:32:12 +0000 UTC …},Spec:PodSpec{…},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 2020-03-08 11:32:12 +0000 UTC}],},}
Mar 8 11:32:12.264: INFO: Pod "webserver-deployment-595b5b9587-h2pxg" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-h2pxg deployment-2469 6644ad24-ffde-41bb-b120-a16475dc9395 27182 0 2020-03-08 11:32:12 +0000 UTC …},Spec:PodSpec{…},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 2020-03-08 11:32:12 +0000 UTC}],},}
Mar 8 11:32:12.265: INFO: Pod "webserver-deployment-595b5b9587-hz8dc" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-hz8dc deployment-2469 8eac4c2d-d0d0-472b-8329-6c218234b76d 27040 0 2020-03-08 11:32:01 +0000 UTC …},Spec:PodSpec{…},Status:PodStatus{Phase:Running,Conditions:[{Ready True 2020-03-08 11:32:07 +0000 UTC}],HostIP:172.17.0.2,PodIP:10.244.0.130,},}
Mar 8 11:32:12.265: INFO: Pod "webserver-deployment-595b5b9587-j5wlg" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-j5wlg deployment-2469 f89cbb97-131a-44fb-b1e3-431ba807cdb3 27158 0 2020-03-08 11:32:12 +0000 UTC …},Spec:PodSpec{…},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 2020-03-08 11:32:12 +0000 UTC}],},}
Mar 8 11:32:12.265: INFO: Pod "webserver-deployment-595b5b9587-nrckt" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-nrckt deployment-2469 d26b9f39-c0e3-4b54-ba61-4eaec977ae24 27009 0 2020-03-08 11:32:01 +0000 UTC …},Spec:PodSpec{…},Status:PodStatus{Phase:Running,Conditions:[{Ready True 2020-03-08 11:32:06 +0000 UTC}],HostIP:172.17.0.2,PodIP:10.244.0.126,},}
Mar 8 11:32:12.265: INFO: Pod "webserver-deployment-595b5b9587-phqgp" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-phqgp deployment-2469 ba3a7a9f-5d11-430e-91cc-343641c3e84f 27002 0 2020-03-08 11:32:01 +0000 UTC …},Spec:PodSpec{…},Status:PodStatus{Phase:Running,Conditions:[{Ready True 2020-03-08 11:32:06 +0000 UTC}],HostIP:172.17.0.2,PodIP:10.244.0.125,},}
Mar 8 11:32:12.265: INFO: Pod "webserver-deployment-595b5b9587-plfcx" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-plfcx deployment-2469 f020d29c-4e89-454f-863a-1ae0d71331be 27144 0 2020-03-08 11:32:12 +0000 UTC …},Spec:PodSpec{…},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 2020-03-08 11:32:12 +0000 UTC}],},}
Mar 8 11:32:12.266: INFO: Pod "webserver-deployment-595b5b9587-qqdvn" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qqdvn deployment-2469 c4f9d0a8-b8dd-4daf-a437-28ad19514887 26990 0 2020-03-08 11:32:01 +0000 UTC …},Spec:PodSpec{…},Status:PodStatus{Phase:Running,Conditions:[{Ready True 2020-03-08 11:32:05 +0000 UTC}],HostIP:172.17.0.2,PodIP:10.244.0.129,},}
Mar 8 11:32:12.266: INFO: Pod "webserver-deployment-595b5b9587-sq9hr" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-sq9hr deployment-2469 4ffb8391-130d-453b-9472-8f52c3441614 27181 0 2020-03-08 11:32:12 +0000 UTC …},Spec:PodSpec{…},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 2020-03-08 11:32:12 +0000 UTC}],},}
Mar 8 11:32:12.266: INFO: Pod "webserver-deployment-595b5b9587-ttqv8" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-ttqv8 deployment-2469 f1d92809-5610-4e58-afd0-a03eb8d52734 26995 0 2020-03-08 11:32:01 +0000 UTC …},Spec:PodSpec{…},Status:PodStatus{Phase:Running,Conditions:[{Ready True 2020-03-08 11:32:05 +0000 UTC}],HostIP:172.17.0.2,PodIP:10.244.0.124,},}
Mar 8 11:32:12.266: INFO: Pod "webserver-deployment-595b5b9587-wsrld" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wsrld deployment-2469 67f9ab89-9f53-46d3-b2fb-97e36d7e7fe6 27145 0 2020-03-08 11:32:12 +0000 UTC …},Spec:PodSpec{…},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 2020-03-08 11:32:12 +0000 UTC}],},}
Mar 8 11:32:12.266: INFO: Pod "webserver-deployment-595b5b9587-wtb7x" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wtb7x deployment-2469 beb2f2ed-f6a7-4cb4-9b98-e83eb9d37816 27049 0 2020-03-08 11:32:01 +0000 UTC …},Spec:PodSpec{…},Status:PodStatus{Phase:Running,Conditions:[{Ready True 2020-03-08 11:32:07 +0000 UTC}],HostIP:172.17.0.2,PodIP:10.244.0.131,},}
Mar 8 11:32:12.267: INFO: Pod "webserver-deployment-c7997dcc8-cvpjf" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-cvpjf deployment-2469 162b0b9a-635a-4f2b-ad8f-8429c22b073a 27110 0 2020-03-08 11:32:09 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 93445489-09c9-4465-8b41-1be985f8a6f8}] …},Spec:PodSpec{Containers:[{httpd webserver:404}],NodeName:kind-control-plane,…},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:32:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:32:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:32:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:32:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.2,PodIP:,StartTime:2020-03-08 11:32:10 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 11:32:12.267: INFO: Pod "webserver-deployment-c7997dcc8-dq5gb" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-dq5gb webserver-deployment-c7997dcc8- deployment-2469 /api/v1/namespaces/deployment-2469/pods/webserver-deployment-c7997dcc8-dq5gb f6189808-d149-4870-9964-8cbb2b8f2a3f 27190 0 2020-03-08 11:32:12 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 93445489-09c9-4465-8b41-1be985f8a6f8 0xc0006dcb90 0xc0006dcb91}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6hkzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6hkzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6hkzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Over
head:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:32:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:32:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:32:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:32:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.2,PodIP:,StartTime:2020-03-08 11:32:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 11:32:12.267: INFO: Pod "webserver-deployment-c7997dcc8-f98cv" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-f98cv webserver-deployment-c7997dcc8- deployment-2469 /api/v1/namespaces/deployment-2469/pods/webserver-deployment-c7997dcc8-f98cv 37ccab24-f11a-466b-9dc2-27a3fadd4ed1 27089 0 2020-03-08 11:32:09 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 93445489-09c9-4465-8b41-1be985f8a6f8 0xc00073a6d0 0xc00073a6d1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6hkzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6hkzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6hkzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:32:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:32:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:32:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:32:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.2,PodIP:,StartTime:2020-03-08 11:32:10 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 11:32:12.267: INFO: Pod "webserver-deployment-c7997dcc8-fkksm" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-fkksm webserver-deployment-c7997dcc8- deployment-2469 /api/v1/namespaces/deployment-2469/pods/webserver-deployment-c7997dcc8-fkksm ebc71f80-2b77-42ca-a8ff-b9e339ee34ff 27161 0 2020-03-08 11:32:12 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 93445489-09c9-4465-8b41-1be985f8a6f8 0xc00073b2e0 0xc00073b2e1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6hkzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6hkzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6hkzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Over
head:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:32:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 11:32:12.268: INFO: Pod "webserver-deployment-c7997dcc8-ft2md" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ft2md webserver-deployment-c7997dcc8- deployment-2469 /api/v1/namespaces/deployment-2469/pods/webserver-deployment-c7997dcc8-ft2md cc747645-5343-40a8-96a7-4b8be08a0767 27171 0 2020-03-08 11:32:12 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 93445489-09c9-4465-8b41-1be985f8a6f8 0xc00073b680 0xc00073b681}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6hkzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6hkzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6hkzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Runti
meClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:32:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 11:32:12.268: INFO: Pod "webserver-deployment-c7997dcc8-gfhp2" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-gfhp2 webserver-deployment-c7997dcc8- deployment-2469 /api/v1/namespaces/deployment-2469/pods/webserver-deployment-c7997dcc8-gfhp2 0dcc0501-7a65-4ed8-8900-2df4d73091b1 27151 0 2020-03-08 11:32:12 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 93445489-09c9-4465-8b41-1be985f8a6f8 0xc0007deb00 0xc0007deb01}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6hkzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6hkzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6hkzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,S
hareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:32:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 11:32:12.268: INFO: Pod "webserver-deployment-c7997dcc8-lmx2h" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-lmx2h webserver-deployment-c7997dcc8- deployment-2469 /api/v1/namespaces/deployment-2469/pods/webserver-deployment-c7997dcc8-lmx2h ee55e021-7d5b-4b16-b40f-0d3c784fa037 27112 0 2020-03-08 11:32:10 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 93445489-09c9-4465-8b41-1be985f8a6f8 0xc00034b930 0xc00034b931}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6hkzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6hkzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6hkzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAl
iases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:32:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:32:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:32:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:32:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.2,PodIP:,StartTime:2020-03-08 11:32:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 11:32:12.268: INFO: Pod "webserver-deployment-c7997dcc8-lslx5" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-lslx5 webserver-deployment-c7997dcc8- deployment-2469 /api/v1/namespaces/deployment-2469/pods/webserver-deployment-c7997dcc8-lslx5 9eb719c8-f722-4853-9832-004de71df61c 27177 0 2020-03-08 11:32:12 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 93445489-09c9-4465-8b41-1be985f8a6f8 0xc002277480 0xc002277481}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6hkzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6hkzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6hkzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:32:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 11:32:12.268: INFO: Pod "webserver-deployment-c7997dcc8-pg4mr" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-pg4mr webserver-deployment-c7997dcc8- deployment-2469 /api/v1/namespaces/deployment-2469/pods/webserver-deployment-c7997dcc8-pg4mr a3d1a6ca-fb10-4358-91e7-17cf450bb991 27150 0 2020-03-08 11:32:12 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
93445489-09c9-4465-8b41-1be985f8a6f8 0xc002277590 0xc002277591}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6hkzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6hkzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6hkzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:32:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 11:32:12.268: INFO: Pod "webserver-deployment-c7997dcc8-sfzpf" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-sfzpf webserver-deployment-c7997dcc8- deployment-2469 /api/v1/namespaces/deployment-2469/pods/webserver-deployment-c7997dcc8-sfzpf a9db0d9f-f874-41a0-9bda-5854a400b767 27114 0 2020-03-08 11:32:10 +0000 UTC map[name:httpd 
pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 93445489-09c9-4465-8b41-1be985f8a6f8 0xc002277790 0xc002277791}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6hkzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6hkzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6hkzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:32:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:32:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:32:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-03-08 11:32:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.2,PodIP:,StartTime:2020-03-08 11:32:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 11:32:12.269: INFO: Pod "webserver-deployment-c7997dcc8-spgfw" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-spgfw webserver-deployment-c7997dcc8- deployment-2469 /api/v1/namespaces/deployment-2469/pods/webserver-deployment-c7997dcc8-spgfw 65665b26-6415-494f-ae4c-cb6634e2c33e 27166 0 2020-03-08 11:32:12 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 93445489-09c9-4465-8b41-1be985f8a6f8 0xc002277900 0xc002277901}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6hkzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6hkzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6hkzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priorit
y:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:32:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 11:32:12.269: INFO: Pod "webserver-deployment-c7997dcc8-ts7kw" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ts7kw webserver-deployment-c7997dcc8- deployment-2469 /api/v1/namespaces/deployment-2469/pods/webserver-deployment-c7997dcc8-ts7kw 826c5683-0fa2-4642-a155-2874d6e19fc2 27170 0 2020-03-08 11:32:12 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 93445489-09c9-4465-8b41-1be985f8a6f8 0xc002277a50 0xc002277a51}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6hkzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6hkzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6hkzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSec
onds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:32:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 11:32:12.269: INFO: Pod "webserver-deployment-c7997dcc8-wznqz" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-wznqz webserver-deployment-c7997dcc8- deployment-2469 /api/v1/namespaces/deployment-2469/pods/webserver-deployment-c7997dcc8-wznqz 1f04225b-5592-4d4b-990e-9ccd4bad4ac6 27126 0 2020-03-08 11:32:09 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 93445489-09c9-4465-8b41-1be985f8a6f8 0xc002277d20 0xc002277d21}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6hkzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6hkzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6hkzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io
/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:32:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:32:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:32:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:32:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.2,PodIP:10.244.0.133,StartTime:2020-03-08 11:32:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.0.133,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:32:12.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2469" for this suite. 
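------------------------------
For context on the dump above: this spec rolls a RollingUpdate Deployment from httpd:2.4.38-alpine to the deliberately unpullable image webserver:404, then scales the Deployment while the rollout is stuck, asserting that added replicas are split proportionally between the old (595b5b9587) and new (c7997dcc8) ReplicaSets. A rough Go sketch of a comparable Deployment, built from the same k8s.io/api types the log prints; the replica count and surge/unavailable values are illustrative assumptions, not necessarily the framework's exact settings:

    package sketch

    import (
        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func int32Ptr(i int32) *int32 { return &i }

    // newWebserverDeployment builds a RollingUpdate Deployment comparable to
    // "webserver-deployment" above. MaxSurge/MaxUnavailable are what make
    // proportional scaling observable: while the rollout is blocked on an
    // unpullable image, a scale-up must be distributed across both ReplicaSets.
    func newWebserverDeployment(replicas int32) *appsv1.Deployment {
        maxSurge := intstr.FromInt(3)       // assumed value
        maxUnavailable := intstr.FromInt(2) // assumed value
        labels := map[string]string{"name": "httpd"}
        return &appsv1.Deployment{
            ObjectMeta: metav1.ObjectMeta{Name: "webserver-deployment"},
            Spec: appsv1.DeploymentSpec{
                Replicas: int32Ptr(replicas),
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                Strategy: appsv1.DeploymentStrategy{
                    Type: appsv1.RollingUpdateDeploymentStrategyType,
                    RollingUpdate: &appsv1.RollingUpdateDeployment{
                        MaxSurge:       &maxSurge,
                        MaxUnavailable: &maxUnavailable,
                    },
                },
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "httpd",
                            Image: "docker.io/library/httpd:2.4.38-alpine",
                        }},
                    },
                },
            },
        }
    }

Updating Template.Spec.Containers[0].Image to webserver:404 is what produces the ErrImagePull and ContainerCreating pods listed above; because the new ReplicaSet can never become available, the proportional split between the two ReplicaSets stays observable for the duration of the check.
------------------------------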
• [SLOW TEST:10.572 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":231,"skipped":3693,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:32:12.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 8 11:32:12.603: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b1aa32a9-e1e9-46aa-b6d7-55ae9fdd9d65" in namespace "projected-8989" to be "success or failure" Mar 8 11:32:12.612: INFO: Pod "downwardapi-volume-b1aa32a9-e1e9-46aa-b6d7-55ae9fdd9d65": Phase="Pending", Reason="", readiness=false. Elapsed: 9.292185ms Mar 8 11:32:14.676: INFO: Pod "downwardapi-volume-b1aa32a9-e1e9-46aa-b6d7-55ae9fdd9d65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073677712s Mar 8 11:32:16.693: INFO: Pod "downwardapi-volume-b1aa32a9-e1e9-46aa-b6d7-55ae9fdd9d65": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090369043s Mar 8 11:32:18.753: INFO: Pod "downwardapi-volume-b1aa32a9-e1e9-46aa-b6d7-55ae9fdd9d65": Phase="Pending", Reason="", readiness=false. Elapsed: 6.1504576s Mar 8 11:32:20.774: INFO: Pod "downwardapi-volume-b1aa32a9-e1e9-46aa-b6d7-55ae9fdd9d65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.171263346s STEP: Saw pod success Mar 8 11:32:20.774: INFO: Pod "downwardapi-volume-b1aa32a9-e1e9-46aa-b6d7-55ae9fdd9d65" satisfied condition "success or failure" Mar 8 11:32:20.786: INFO: Trying to get logs from node kind-control-plane pod downwardapi-volume-b1aa32a9-e1e9-46aa-b6d7-55ae9fdd9d65 container client-container: STEP: delete the pod Mar 8 11:32:21.198: INFO: Waiting for pod downwardapi-volume-b1aa32a9-e1e9-46aa-b6d7-55ae9fdd9d65 to disappear Mar 8 11:32:21.205: INFO: Pod downwardapi-volume-b1aa32a9-e1e9-46aa-b6d7-55ae9fdd9d65 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:32:21.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8989" for this suite. 
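------------------------------
The downward API spec above relies on a documented fallback: when a container declares no memory limit, a projected downwardAPI file for limits.memory resolves to the node's allocatable memory rather than failing. A minimal sketch of the volume wiring involved, assuming the container is named client-container as in the log (the volume and file names are arbitrary choices here):

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // memoryLimitVolume projects the container's effective memory limit into
    // a file. With no limit set on client-container, the kubelet resolves
    // limits.memory to node allocatable memory, which is the value the spec
    // asserts on.
    func memoryLimitVolume() corev1.Volume {
        return corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path: "memory_limit",
                                ResourceFieldRef: &corev1.ResourceFieldSelector{
                                    ContainerName: "client-container",
                                    Resource:      "limits.memory",
                                },
                            }},
                        },
                    }},
                },
            },
        }
    }

The test container prints the mounted file and the framework checks its output, which is why the log above fetches logs from client-container after the pod reaches Succeeded.
------------------------------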
• [SLOW TEST:8.818 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":232,"skipped":3709,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:32:21.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 8 11:32:21.270: INFO: Waiting up to 5m0s for pod "pod-b5423b39-5620-43d1-8c6e-88dad3646395" in namespace "emptydir-9408" to be "success or failure" Mar 8 11:32:21.273: INFO: Pod "pod-b5423b39-5620-43d1-8c6e-88dad3646395": Phase="Pending", Reason="", readiness=false. Elapsed: 3.512886ms Mar 8 11:32:23.277: INFO: Pod "pod-b5423b39-5620-43d1-8c6e-88dad3646395": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007014795s Mar 8 11:32:25.281: INFO: Pod "pod-b5423b39-5620-43d1-8c6e-88dad3646395": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010648019s Mar 8 11:32:27.284: INFO: Pod "pod-b5423b39-5620-43d1-8c6e-88dad3646395": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013935789s Mar 8 11:32:29.288: INFO: Pod "pod-b5423b39-5620-43d1-8c6e-88dad3646395": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017723758s Mar 8 11:32:31.291: INFO: Pod "pod-b5423b39-5620-43d1-8c6e-88dad3646395": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.021038742s STEP: Saw pod success Mar 8 11:32:31.291: INFO: Pod "pod-b5423b39-5620-43d1-8c6e-88dad3646395" satisfied condition "success or failure" Mar 8 11:32:31.293: INFO: Trying to get logs from node kind-control-plane pod pod-b5423b39-5620-43d1-8c6e-88dad3646395 container test-container: STEP: delete the pod Mar 8 11:32:31.315: INFO: Waiting for pod pod-b5423b39-5620-43d1-8c6e-88dad3646395 to disappear Mar 8 11:32:31.319: INFO: Pod pod-b5423b39-5620-43d1-8c6e-88dad3646395 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:32:31.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9408" for this suite. 
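------------------------------
In the emptyDir test above, medium "Memory" backs the volume with tmpfs, and the suite checks that the mount point carries mode 0777 and is writable as root. A rough equivalent of the pod under test, sketched with k8s.io/api types; the busybox image and the stat command are stand-ins, not the suite's actual test image:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// StorageMediumMemory mounts a tmpfs instead of node disk.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Print the mode bits of the mount, then prove it is writable.
				Command: []string{"sh", "-c", "stat -c %a /test-volume && touch /test-volume/ok"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------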
• [SLOW TEST:10.111 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":233,"skipped":3735,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:32:31.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 11:32:32.564: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 8 11:32:34.577: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719263952, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719263952, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719263952, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719263952, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 11:32:37.627: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration 
object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:32:37.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7759" for this suite. STEP: Destroying namespace "webhook-7759-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.617 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":234,"skipped":3785,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:32:37.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:32:38.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4047" for this suite. 
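------------------------------
The Kubelet test above never waits for its pod to become ready: the container command exits non-zero on every start, the pod crash-loops by design, and the only assertion is that deleting it still succeeds. A sketch of such a pod (the busybox image and pod name are assumptions; /bin/false mirrors the "always fails" command the test describes):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "bin-false"},
		Spec: corev1.PodSpec{
			// RestartPolicy defaults to Always, so the kubelet keeps
			// restarting the failing container until the pod is deleted.
			Containers: []corev1.Container{{
				Name:    "bin-false",
				Image:   "busybox",
				Command: []string{"/bin/false"}, // exits 1 immediately
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------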
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":235,"skipped":3788,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:32:38.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 11:32:38.166: INFO: Waiting up to 5m0s for pod "busybox-user-65534-9f07aa89-1c79-4b53-b7d6-b19a10128c2f" in namespace "security-context-test-1910" to be "success or failure" Mar 8 11:32:38.231: INFO: Pod "busybox-user-65534-9f07aa89-1c79-4b53-b7d6-b19a10128c2f": Phase="Pending", Reason="", readiness=false. Elapsed: 64.559903ms Mar 8 11:32:40.234: INFO: Pod "busybox-user-65534-9f07aa89-1c79-4b53-b7d6-b19a10128c2f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.067909165s Mar 8 11:32:40.234: INFO: Pod "busybox-user-65534-9f07aa89-1c79-4b53-b7d6-b19a10128c2f" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:32:40.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1910" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":236,"skipped":3804,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:32:40.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 8 11:32:44.404: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 8 11:32:44.411: INFO: Pod pod-with-prestop-exec-hook still exists Mar 8 11:32:46.411: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 8 11:32:46.415: INFO: Pod pod-with-prestop-exec-hook still exists Mar 8 11:32:48.411: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 8 11:32:48.414: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:32:48.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2386" for this suite. • [SLOW TEST:8.187 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":237,"skipped":3811,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:32:48.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 11:32:48.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Mar 8 11:32:49.119: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-08T11:32:49Z generation:1 name:name1 resourceVersion:27738 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:26881637-ea31-45fa-a86e-36bc2f5127e5] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Mar 8 11:32:59.125: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-08T11:32:59Z generation:1 name:name2 resourceVersion:27780 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 
uid:284c2f43-962b-4fa7-983c-92223c6d7237] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Mar 8 11:33:09.130: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-08T11:32:49Z generation:2 name:name1 resourceVersion:27810 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:26881637-ea31-45fa-a86e-36bc2f5127e5] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Mar 8 11:33:19.136: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-08T11:32:59Z generation:2 name:name2 resourceVersion:27840 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:284c2f43-962b-4fa7-983c-92223c6d7237] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Mar 8 11:33:29.143: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-08T11:32:49Z generation:2 name:name1 resourceVersion:27868 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:26881637-ea31-45fa-a86e-36bc2f5127e5] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Mar 8 11:33:39.151: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-08T11:32:59Z generation:2 name:name2 resourceVersion:27896 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:284c2f43-962b-4fa7-983c-92223c6d7237] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:33:49.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-6385" for this suite. 
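------------------------------
Each watch event above carries the complete custom resource, and metadata.resourceVersion advances monotonically (27738 through 27896) across the ADDED, MODIFIED and DELETED notifications. Custom resources have no generated Go client, so programs usually handle them as unstructured objects; a sketch of the same object shape using apimachinery:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

func main() {
	// Mirrors the CRs in the log: group mygroup.example.com, version
	// v1beta1, kind WishIHadChosenNoxu.
	cr := unstructured.Unstructured{
		Object: map[string]interface{}{
			"apiVersion": "mygroup.example.com/v1beta1",
			"kind":       "WishIHadChosenNoxu",
			"metadata":   map[string]interface{}{"name": "name1"},
			"content":    map[string]interface{}{"key": "value"},
			"num": map[string]interface{}{
				"num1": int64(9223372036854775807), // max int64, as in the log
				"num2": int64(1000000),
			},
		},
	}
	fmt.Println(cr.GetName(), cr.GroupVersionKind())
}
------------------------------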
• [SLOW TEST:61.241 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":238,"skipped":3822,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:33:49.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 8 11:33:49.719: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 8 11:33:49.748: INFO: Waiting for terminating namespaces to be deleted... 
Mar 8 11:33:49.751: INFO: Logging pods the kubelet thinks are on node kind-control-plane before test Mar 8 11:33:49.758: INFO: coredns-6955765f44-2ncc6 from kube-system started at 2020-03-08 10:17:49 +0000 UTC (1 container status recorded) Mar 8 11:33:49.758: INFO: Container coredns ready: true, restart count 0 Mar 8 11:33:49.758: INFO: etcd-kind-control-plane from kube-system started at 2020-03-08 10:17:29 +0000 UTC (1 container status recorded) Mar 8 11:33:49.758: INFO: Container etcd ready: true, restart count 0 Mar 8 11:33:49.758: INFO: kube-controller-manager-kind-control-plane from kube-system started at 2020-03-08 10:17:29 +0000 UTC (1 container status recorded) Mar 8 11:33:49.758: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 8 11:33:49.758: INFO: coredns-6955765f44-8lfgq from kube-system started at 2020-03-08 10:17:52 +0000 UTC (1 container status recorded) Mar 8 11:33:49.758: INFO: Container coredns ready: true, restart count 0 Mar 8 11:33:49.758: INFO: kube-scheduler-kind-control-plane from kube-system started at 2020-03-08 10:17:29 +0000 UTC (1 container status recorded) Mar 8 11:33:49.758: INFO: Container kube-scheduler ready: true, restart count 0 Mar 8 11:33:49.758: INFO: kube-proxy-9qrbc from kube-system started at 2020-03-08 10:17:45 +0000 UTC (1 container status recorded) Mar 8 11:33:49.758: INFO: Container kube-proxy ready: true, restart count 0 Mar 8 11:33:49.758: INFO: kindnet-rznts from kube-system started at 2020-03-08 10:17:45 +0000 UTC (1 container status recorded) Mar 8 11:33:49.758: INFO: Container kindnet-cni ready: true, restart count 0 Mar 8 11:33:49.758: INFO: kube-apiserver-kind-control-plane from kube-system started at 2020-03-08 10:17:29 +0000 UTC (1 container status recorded) Mar 8 11:33:49.758: INFO: Container kube-apiserver ready: true, restart count 0 Mar 8 11:33:49.758: INFO: local-path-provisioner-7745554f7f-5f2b8 from local-path-storage started at 2020-03-08 10:17:49 +0000 UTC (1 container status recorded) Mar 8 11:33:49.758: INFO: Container local-path-provisioner ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node kind-control-plane Mar 8 11:33:49.794: INFO: Pod coredns-6955765f44-2ncc6 requesting resource cpu=100m on Node kind-control-plane Mar 8 11:33:49.794: INFO: Pod coredns-6955765f44-8lfgq requesting resource cpu=100m on Node kind-control-plane Mar 8 11:33:49.794: INFO: Pod etcd-kind-control-plane requesting resource cpu=0m on Node kind-control-plane Mar 8 11:33:49.794: INFO: Pod kindnet-rznts requesting resource cpu=100m on Node kind-control-plane Mar 8 11:33:49.794: INFO: Pod kube-apiserver-kind-control-plane requesting resource cpu=250m on Node kind-control-plane Mar 8 11:33:49.794: INFO: Pod kube-controller-manager-kind-control-plane requesting resource cpu=200m on Node kind-control-plane Mar 8 11:33:49.794: INFO: Pod kube-proxy-9qrbc requesting resource cpu=0m on Node kind-control-plane Mar 8 11:33:49.794: INFO: Pod kube-scheduler-kind-control-plane requesting resource cpu=100m on Node kind-control-plane Mar 8 11:33:49.794: INFO: Pod local-path-provisioner-7745554f7f-5f2b8 requesting resource cpu=0m on Node kind-control-plane STEP: Starting Pods to consume most of the cluster CPU.
Mar 8 11:33:49.794: INFO: Creating a pod which consumes cpu=10605m on Node kind-control-plane STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-0ea563f1-978e-4905-b0b0-da06201d237b.15fa50f47417212c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1163/filler-pod-0ea563f1-978e-4905-b0b0-da06201d237b to kind-control-plane] STEP: Considering event: Type = [Normal], Name = [filler-pod-0ea563f1-978e-4905-b0b0-da06201d237b.15fa50f49e6a74f5], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-0ea563f1-978e-4905-b0b0-da06201d237b.15fa50f4abd6a565], Reason = [Created], Message = [Created container filler-pod-0ea563f1-978e-4905-b0b0-da06201d237b] STEP: Considering event: Type = [Normal], Name = [filler-pod-0ea563f1-978e-4905-b0b0-da06201d237b.15fa50f4b6014038], Reason = [Started], Message = [Started container filler-pod-0ea563f1-978e-4905-b0b0-da06201d237b] STEP: Considering event: Type = [Warning], Name = [additional-pod.15fa50f4ec0916c1], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.] STEP: removing the label node off the node kind-control-plane STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:33:52.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1163" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":239,"skipped":3851,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:33:52.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-fa0762a7-917f-4f37-b5c8-0ed6c9bc1b16 STEP: Creating secret with name secret-projected-all-test-volume-3129967c-bb77-4904-9d24-1ff47e5abefd STEP: Creating a pod to test Check all projections for projected volume plugin Mar 8 11:33:52.986: INFO: Waiting up to 5m0s for pod "projected-volume-543704b0-9be0-4703-a9c0-be4b031d90b9" in namespace "projected-7881" to be "success or failure" Mar 8 11:33:53.007: INFO: Pod "projected-volume-543704b0-9be0-4703-a9c0-be4b031d90b9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 20.675699ms Mar 8 11:33:55.009: INFO: Pod "projected-volume-543704b0-9be0-4703-a9c0-be4b031d90b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023336604s Mar 8 11:33:57.054: INFO: Pod "projected-volume-543704b0-9be0-4703-a9c0-be4b031d90b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068084706s STEP: Saw pod success Mar 8 11:33:57.054: INFO: Pod "projected-volume-543704b0-9be0-4703-a9c0-be4b031d90b9" satisfied condition "success or failure" Mar 8 11:33:57.057: INFO: Trying to get logs from node kind-control-plane pod projected-volume-543704b0-9be0-4703-a9c0-be4b031d90b9 container projected-all-volume-test: STEP: delete the pod Mar 8 11:33:57.096: INFO: Waiting for pod projected-volume-543704b0-9be0-4703-a9c0-be4b031d90b9 to disappear Mar 8 11:33:57.106: INFO: Pod projected-volume-543704b0-9be0-4703-a9c0-be4b031d90b9 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:33:57.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7881" for this suite. •{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":240,"skipped":3858,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:33:57.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-a1a03a1d-e3d5-401a-93d6-9b4d22fc9f45 STEP: Creating a pod to test consume secrets Mar 8 11:33:57.185: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-07c172da-8fbf-4ecb-b696-01d10199cac9" in namespace "projected-5153" to be "success or failure" Mar 8 11:33:57.190: INFO: Pod "pod-projected-secrets-07c172da-8fbf-4ecb-b696-01d10199cac9": Phase="Pending", Reason="", readiness=false. Elapsed: 5.786524ms Mar 8 11:33:59.194: INFO: Pod "pod-projected-secrets-07c172da-8fbf-4ecb-b696-01d10199cac9": Phase="Running", Reason="", readiness=true. Elapsed: 2.009574487s Mar 8 11:34:01.198: INFO: Pod "pod-projected-secrets-07c172da-8fbf-4ecb-b696-01d10199cac9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012991343s STEP: Saw pod success Mar 8 11:34:01.198: INFO: Pod "pod-projected-secrets-07c172da-8fbf-4ecb-b696-01d10199cac9" satisfied condition "success or failure" Mar 8 11:34:01.200: INFO: Trying to get logs from node kind-control-plane pod pod-projected-secrets-07c172da-8fbf-4ecb-b696-01d10199cac9 container projected-secret-volume-test: STEP: delete the pod Mar 8 11:34:01.235: INFO: Waiting for pod pod-projected-secrets-07c172da-8fbf-4ecb-b696-01d10199cac9 to disappear Mar 8 11:34:01.244: INFO: Pod pod-projected-secrets-07c172da-8fbf-4ecb-b696-01d10199cac9 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:34:01.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5153" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":241,"skipped":3892,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:34:01.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:34:12.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7697" for this suite. • [SLOW TEST:11.547 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":278,"completed":242,"skipped":3990,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:34:12.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token Mar 8 11:34:13.396: INFO: created pod pod-service-account-defaultsa Mar 8 11:34:13.396: INFO: pod pod-service-account-defaultsa service account token volume mount: true Mar 8 11:34:13.400: INFO: created pod pod-service-account-mountsa Mar 8 11:34:13.400: INFO: pod pod-service-account-mountsa service account token volume mount: true Mar 8 11:34:13.444: INFO: created pod pod-service-account-nomountsa Mar 8 11:34:13.444: INFO: pod pod-service-account-nomountsa service account token volume mount: false Mar 8 11:34:13.453: INFO: created pod pod-service-account-defaultsa-mountspec Mar 8 11:34:13.453: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Mar 8 11:34:13.475: INFO: created pod pod-service-account-mountsa-mountspec Mar 8 11:34:13.475: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Mar 8 11:34:13.489: INFO: created pod pod-service-account-nomountsa-mountspec Mar 8 11:34:13.489: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Mar 8 11:34:13.501: INFO: created pod pod-service-account-defaultsa-nomountspec Mar 8 11:34:13.501: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Mar 8 11:34:13.544: INFO: created pod pod-service-account-mountsa-nomountspec Mar 8 11:34:13.544: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Mar 8 11:34:13.584: INFO: created pod pod-service-account-nomountsa-nomountspec Mar 8 11:34:13.584: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:34:13.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7157" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":243,"skipped":3993,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:34:13.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:34:24.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7782" for this suite. • [SLOW TEST:11.250 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":244,"skipped":4018,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:34:24.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Mar 8 11:34:24.964: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8636 /api/v1/namespaces/watch-8636/configmaps/e2e-watch-test-watch-closed 4d556e08-b35e-4bb6-9bfc-c45f5c475e25 28239 0 2020-03-08 11:34:24 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 8 11:34:24.964: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8636 /api/v1/namespaces/watch-8636/configmaps/e2e-watch-test-watch-closed 4d556e08-b35e-4bb6-9bfc-c45f5c475e25 28240 0 2020-03-08 11:34:24 
+0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Mar 8 11:34:24.983: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8636 /api/v1/namespaces/watch-8636/configmaps/e2e-watch-test-watch-closed 4d556e08-b35e-4bb6-9bfc-c45f5c475e25 28241 0 2020-03-08 11:34:24 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 8 11:34:24.983: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8636 /api/v1/namespaces/watch-8636/configmaps/e2e-watch-test-watch-closed 4d556e08-b35e-4bb6-9bfc-c45f5c475e25 28242 0 2020-03-08 11:34:24 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:34:24.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8636" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":245,"skipped":4025,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:34:24.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-5338 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 8 11:34:25.068: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 8 11:34:47.142: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.0.177:8080/dial?request=hostname&protocol=http&host=10.244.0.176&port=8080&tries=1'] Namespace:pod-network-test-5338 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 11:34:47.142: INFO: >>> kubeConfig: /root/.kube/config I0308 11:34:47.182635 6 log.go:172] (0xc004bfe420) (0xc000a3f860) Create stream I0308 11:34:47.182687 6 log.go:172] (0xc004bfe420) (0xc000a3f860) Stream added, broadcasting: 1 I0308 11:34:47.185208 6 log.go:172] (0xc004bfe420) Reply 
frame received for 1 I0308 11:34:47.185252 6 log.go:172] (0xc004bfe420) (0xc0008e03c0) Create stream I0308 11:34:47.185270 6 log.go:172] (0xc004bfe420) (0xc0008e03c0) Stream added, broadcasting: 3 I0308 11:34:47.186381 6 log.go:172] (0xc004bfe420) Reply frame received for 3 I0308 11:34:47.186426 6 log.go:172] (0xc004bfe420) (0xc000a3f900) Create stream I0308 11:34:47.186446 6 log.go:172] (0xc004bfe420) (0xc000a3f900) Stream added, broadcasting: 5 I0308 11:34:47.187339 6 log.go:172] (0xc004bfe420) Reply frame received for 5 I0308 11:34:47.263838 6 log.go:172] (0xc004bfe420) Data frame received for 3 I0308 11:34:47.263874 6 log.go:172] (0xc0008e03c0) (3) Data frame handling I0308 11:34:47.263891 6 log.go:172] (0xc0008e03c0) (3) Data frame sent I0308 11:34:47.264242 6 log.go:172] (0xc004bfe420) Data frame received for 3 I0308 11:34:47.264265 6 log.go:172] (0xc0008e03c0) (3) Data frame handling I0308 11:34:47.264351 6 log.go:172] (0xc004bfe420) Data frame received for 5 I0308 11:34:47.264372 6 log.go:172] (0xc000a3f900) (5) Data frame handling I0308 11:34:47.266084 6 log.go:172] (0xc004bfe420) Data frame received for 1 I0308 11:34:47.266109 6 log.go:172] (0xc000a3f860) (1) Data frame handling I0308 11:34:47.266146 6 log.go:172] (0xc000a3f860) (1) Data frame sent I0308 11:34:47.266179 6 log.go:172] (0xc004bfe420) (0xc000a3f860) Stream removed, broadcasting: 1 I0308 11:34:47.266258 6 log.go:172] (0xc004bfe420) (0xc000a3f860) Stream removed, broadcasting: 1 I0308 11:34:47.266273 6 log.go:172] (0xc004bfe420) (0xc0008e03c0) Stream removed, broadcasting: 3 I0308 11:34:47.266282 6 log.go:172] (0xc004bfe420) (0xc000a3f900) Stream removed, broadcasting: 5 Mar 8 11:34:47.266: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:34:47.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0308 11:34:47.266623 6 log.go:172] (0xc004bfe420) Go away received STEP: Destroying namespace "pod-network-test-5338" for this suite. 
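------------------------------
The intra-pod check above never curls the target pod directly from the test binary: it execs curl inside a host-network test container, asking the agnhost server in one pod to dial the other pod's hostname endpoint and relay the answer, which proves pod-to-pod HTTP connectivity on the cluster network. The same probe sketched with Go's standard library (the pod IPs are this run's and change every run; the sample response is illustrative):

package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
)

func main() {
	// agnhost's /dial endpoint fetches the given host:port/hostname on our
	// behalf and returns what it saw.
	url := "http://10.244.0.177:8080/dial?request=hostname&protocol=http&host=10.244.0.176&port=8080&tries=1"
	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(body)) // e.g. {"responses":["<peer pod hostname>"]}
}
------------------------------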
• [SLOW TEST:22.284 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":246,"skipped":4032,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:34:47.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-e977498c-3329-46d0-886c-6bc0e3291c4c STEP: Creating configMap with name cm-test-opt-upd-2d4349be-46df-48ef-afa3-829fd1ff82dd STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-e977498c-3329-46d0-886c-6bc0e3291c4c STEP: Updating configmap cm-test-opt-upd-2d4349be-46df-48ef-afa3-829fd1ff82dd STEP: Creating configMap with name cm-test-opt-create-fea7555b-40b2-440b-843c-4724c07bb66a STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:34:55.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7911" for this suite. 
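------------------------------
Marking the volume's ConfigMap source optional is what lets the test above delete one source ConfigMap while the pod keeps running, then watch the update to the second ConfigMap and the appearance of the newly created one inside the volume. A sketch of the volume definition (the ConfigMap name is illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true
	vol := corev1.Volume{
		Name: "cm-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-del"},
				// Optional: the pod starts even if the ConfigMap is missing,
				// and the kubelet syncs the files as the source changes.
				Optional: &optional,
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
------------------------------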
• [SLOW TEST:8.194 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":247,"skipped":4048,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:34:55.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 8 11:34:55.551: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f0d8739b-fb63-494e-b539-eee71c9594ec" in namespace "projected-8619" to be "success or failure" Mar 8 11:34:55.567: INFO: Pod "downwardapi-volume-f0d8739b-fb63-494e-b539-eee71c9594ec": Phase="Pending", Reason="", readiness=false. Elapsed: 16.42562ms Mar 8 11:34:57.612: INFO: Pod "downwardapi-volume-f0d8739b-fb63-494e-b539-eee71c9594ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060704763s STEP: Saw pod success Mar 8 11:34:57.612: INFO: Pod "downwardapi-volume-f0d8739b-fb63-494e-b539-eee71c9594ec" satisfied condition "success or failure" Mar 8 11:34:57.615: INFO: Trying to get logs from node kind-control-plane pod downwardapi-volume-f0d8739b-fb63-494e-b539-eee71c9594ec container client-container: STEP: delete the pod Mar 8 11:34:57.647: INFO: Waiting for pod downwardapi-volume-f0d8739b-fb63-494e-b539-eee71c9594ec to disappear Mar 8 11:34:57.651: INFO: Pod downwardapi-volume-f0d8739b-fb63-494e-b539-eee71c9594ec no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:34:57.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8619" for this suite. 
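------------------------------
The podname test above projects a single downward API file whose content is the pod's own metadata.name, so the container can read its identity from the filesystem without talking to the API server. Sketch with the k8s.io/api types (volume and path names are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
------------------------------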
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":248,"skipped":4088,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:34:57.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 8 11:34:57.707: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1887510e-8918-4f38-81e2-5e8e9b5d3dd0" in namespace "downward-api-5770" to be "success or failure" Mar 8 11:34:57.743: INFO: Pod "downwardapi-volume-1887510e-8918-4f38-81e2-5e8e9b5d3dd0": Phase="Pending", Reason="", readiness=false. Elapsed: 36.477443ms Mar 8 11:34:59.748: INFO: Pod "downwardapi-volume-1887510e-8918-4f38-81e2-5e8e9b5d3dd0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.040750875s STEP: Saw pod success Mar 8 11:34:59.748: INFO: Pod "downwardapi-volume-1887510e-8918-4f38-81e2-5e8e9b5d3dd0" satisfied condition "success or failure" Mar 8 11:34:59.750: INFO: Trying to get logs from node kind-control-plane pod downwardapi-volume-1887510e-8918-4f38-81e2-5e8e9b5d3dd0 container client-container: STEP: delete the pod Mar 8 11:34:59.787: INFO: Waiting for pod downwardapi-volume-1887510e-8918-4f38-81e2-5e8e9b5d3dd0 to disappear Mar 8 11:34:59.795: INFO: Pod downwardapi-volume-1887510e-8918-4f38-81e2-5e8e9b5d3dd0 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:34:59.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5770" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":249,"skipped":4115,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} S ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:34:59.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-5e691a10-2071-4db4-9acf-0a20f038773e STEP: Creating configMap with name cm-test-opt-upd-8277c91b-c662-4e7a-a37d-76d722848452 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-5e691a10-2071-4db4-9acf-0a20f038773e STEP: Updating configmap cm-test-opt-upd-8277c91b-c662-4e7a-a37d-76d722848452 STEP: Creating configMap with name cm-test-opt-create-d615138d-e831-4b21-8efb-92fe3ec30213 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:36:16.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9724" for this suite. 
• [SLOW TEST:76.484 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":250,"skipped":4116,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:36:16.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:36:18.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2706" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":251,"skipped":4181,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:36:18.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-7fe27638-0c2d-4923-8929-415d99302c14 STEP: Creating a pod to test consume configMaps Mar 8 11:36:18.474: INFO: Waiting up to 5m0s for pod "pod-configmaps-f38f6ce4-f3ce-47cd-96bf-2137b46d3b15" in namespace "configmap-9127" to be "success or failure" Mar 8 11:36:18.483: INFO: Pod "pod-configmaps-f38f6ce4-f3ce-47cd-96bf-2137b46d3b15": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.456978ms Mar 8 11:36:20.487: INFO: Pod "pod-configmaps-f38f6ce4-f3ce-47cd-96bf-2137b46d3b15": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012404002s STEP: Saw pod success Mar 8 11:36:20.487: INFO: Pod "pod-configmaps-f38f6ce4-f3ce-47cd-96bf-2137b46d3b15" satisfied condition "success or failure" Mar 8 11:36:20.490: INFO: Trying to get logs from node kind-control-plane pod pod-configmaps-f38f6ce4-f3ce-47cd-96bf-2137b46d3b15 container configmap-volume-test: STEP: delete the pod Mar 8 11:36:20.525: INFO: Waiting for pod pod-configmaps-f38f6ce4-f3ce-47cd-96bf-2137b46d3b15 to disappear Mar 8 11:36:20.544: INFO: Pod pod-configmaps-f38f6ce4-f3ce-47cd-96bf-2137b46d3b15 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:36:20.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9127" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":252,"skipped":4197,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:36:20.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-02802e77-fac5-4e54-b08c-68d7a1e1724e STEP: Creating a pod to test consume secrets Mar 8 11:36:20.666: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d1b1f946-4e52-42ea-af89-23e61e79608d" in namespace "projected-1480" to be "success or failure" Mar 8 11:36:20.670: INFO: Pod "pod-projected-secrets-d1b1f946-4e52-42ea-af89-23e61e79608d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.966384ms Mar 8 11:36:22.674: INFO: Pod "pod-projected-secrets-d1b1f946-4e52-42ea-af89-23e61e79608d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007834867s Mar 8 11:36:24.678: INFO: Pod "pod-projected-secrets-d1b1f946-4e52-42ea-af89-23e61e79608d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011747088s STEP: Saw pod success Mar 8 11:36:24.678: INFO: Pod "pod-projected-secrets-d1b1f946-4e52-42ea-af89-23e61e79608d" satisfied condition "success or failure" Mar 8 11:36:24.681: INFO: Trying to get logs from node kind-control-plane pod pod-projected-secrets-d1b1f946-4e52-42ea-af89-23e61e79608d container projected-secret-volume-test: STEP: delete the pod Mar 8 11:36:24.719: INFO: Waiting for pod pod-projected-secrets-d1b1f946-4e52-42ea-af89-23e61e79608d to disappear Mar 8 11:36:24.744: INFO: Pod pod-projected-secrets-d1b1f946-4e52-42ea-af89-23e61e79608d no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:36:24.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1480" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":253,"skipped":4221,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:36:24.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 11:36:24.798: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 8 11:36:26.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9352 create -f -' Mar 8 11:36:28.803: INFO: stderr: "" Mar 8 11:36:28.803: INFO: stdout: "e2e-test-crd-publish-openapi-5182-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 8 11:36:28.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9352 delete e2e-test-crd-publish-openapi-5182-crds test-cr' Mar 8 11:36:28.916: INFO: stderr: "" Mar 8 11:36:28.916: INFO: stdout: "e2e-test-crd-publish-openapi-5182-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Mar 8 11:36:28.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9352 apply -f -' Mar 8 11:36:29.154: INFO: stderr: "" Mar 8 11:36:29.154: INFO: stdout: "e2e-test-crd-publish-openapi-5182-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 8 11:36:29.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9352 delete e2e-test-crd-publish-openapi-5182-crds test-cr' Mar 8 11:36:29.296: INFO: stderr: "" Mar 8 11:36:29.296: INFO: stdout: "e2e-test-crd-publish-openapi-5182-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" 
STEP: kubectl explain works to explain CR without validation schema Mar 8 11:36:29.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5182-crds' Mar 8 11:36:29.540: INFO: stderr: "" Mar 8 11:36:29.540: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5182-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:36:31.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9352" for this suite. • [SLOW TEST:6.914 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":254,"skipped":4222,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:36:31.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 8 11:36:33.860: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:36:33.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3885" for this suite. 
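The termination-message pass above hinges on two container fields: a non-default terminationMessagePath and a non-root runAsUser, with the container writing "DONE" to that path before exiting. Roughly, with the k8s.io/api Go types (the image, UID, and shell command are assumptions, not the suite's exact values):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	uid := int64(1000)
	// Container that writes DONE to a non-default termination-message
	// path as a non-root user; the kubelet surfaces the file contents
	// in the container's terminated state, which is what gets matched
	// above ("Expected: &{DONE} to match ... DONE").
	c := corev1.Container{
		Name:                   "termination-message-container",
		Image:                  "busybox:1.31", // assumed image
		Command:                []string{"/bin/sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
		TerminationMessagePath: "/dev/termination-custom-log",
		SecurityContext:        &corev1.SecurityContext{RunAsUser: &uid},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}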
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":255,"skipped":4234,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:36:33.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container Mar 8 11:36:36.490: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8111 pod-service-account-5ad7fd5f-c522-43a5-842a-286184e5d966 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Mar 8 11:36:36.729: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8111 pod-service-account-5ad7fd5f-c522-43a5-842a-286184e5d966 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Mar 8 11:36:36.911: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8111 pod-service-account-5ad7fd5f-c522-43a5-842a-286184e5d966 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:36:37.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8111" for this suite. 
•{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":256,"skipped":4243,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:36:37.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Mar 8 11:36:39.194: INFO: &Pod{ObjectMeta:{send-events-dbc445e0-d731-472f-8bf4-7e6f74fe9d5b events-8546 /api/v1/namespaces/events-8546/pods/send-events-dbc445e0-d731-472f-8bf4-7e6f74fe9d5b f2fcc5be-368f-4aea-8448-f51b74445f72 28941 0 2020-03-08 11:36:37 +0000 UTC map[name:foo time:157448510] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cmdx7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cmdx7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cmdx7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-control-plane,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kub
ernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:36:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:36:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:36:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 11:36:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.2,PodIP:10.244.0.187,StartTime:2020-03-08 11:36:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-08 11:36:38 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://4818f8c090a1bd00bcf9ef76389a53add90943a104669a1b499604363dd650fa,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.0.187,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Mar 8 11:36:41.199: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Mar 8 11:36:43.202: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:36:43.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-8546" for this suite. 
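The "checking for scheduler event" and "checking for kubelet event" steps above boil down to listing Events with a field selector over involvedObject and source. A sketch with client-go (post-0.18 context-taking signatures assumed; the pod name, namespace, and KUBECONFIG handling are placeholders):

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Select events the scheduler recorded for one pod; swapping
	// source to "kubelet" gives the second check in the log above.
	sel := fields.SelectorFromSet(fields.Set{
		"involvedObject.kind":      "Pod",
		"involvedObject.name":      "send-events-example", // placeholder pod name
		"involvedObject.namespace": "default",
		"source":                   "default-scheduler",
	}).String()
	evs, err := cs.CoreV1().Events("default").List(context.TODO(), metav1.ListOptions{FieldSelector: sel})
	if err != nil {
		panic(err)
	}
	for _, e := range evs.Items {
		fmt.Printf("%s\t%s\t%s\n", e.Reason, e.Source.Component, e.Message)
	}
}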
• [SLOW TEST:6.129 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":257,"skipped":4245,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:36:43.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-b7b04746-efb0-4845-a8e8-b9e20882a8d9 in namespace container-probe-556 Mar 8 11:36:45.310: INFO: Started pod busybox-b7b04746-efb0-4845-a8e8-b9e20882a8d9 in namespace container-probe-556 STEP: checking the pod's current state and verifying that restartCount is present Mar 8 11:36:45.314: INFO: Initial restart count of pod busybox-b7b04746-efb0-4845-a8e8-b9e20882a8d9 is 0 Mar 8 11:37:39.422: INFO: Restart count of pod container-probe-556/busybox-b7b04746-efb0-4845-a8e8-b9e20882a8d9 is now 1 (54.1079697s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:37:39.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-556" for this suite. 
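The restart counted above (0 to 1 after roughly 54 seconds) is driven by an exec probe that starts failing once /tmp/health disappears. A rough equivalent of that container with the k8s.io/api Go types (the shell command, image, and probe timings are assumptions; note the probe handler field is Handler in the v1.17-era API this run uses, renamed ProbeHandler in later releases):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// The container creates /tmp/health, removes it after a while, and
	// then idles; the exec probe fails once the file is gone, so the
	// kubelet kills and restarts the container, bumping restartCount.
	c := corev1.Container{
		Name:    "busybox",
		Image:   "busybox:1.31", // assumed image
		Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"},
		LivenessProbe: &corev1.Probe{
			Handler: corev1.Handler{
				Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
			},
			InitialDelaySeconds: 5,
			PeriodSeconds:       5,
			FailureThreshold:    1,
		},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}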
• [SLOW TEST:56.210 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":258,"skipped":4259,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} S ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:37:39.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-e9517c05-9c39-48d0-a241-87590f99ae86 STEP: Creating a pod to test consume configMaps Mar 8 11:37:39.530: INFO: Waiting up to 5m0s for pod "pod-configmaps-5a94fc8d-b7b7-4ec5-bd7e-18253b5ddf7e" in namespace "configmap-4569" to be "success or failure" Mar 8 11:37:39.551: INFO: Pod "pod-configmaps-5a94fc8d-b7b7-4ec5-bd7e-18253b5ddf7e": Phase="Pending", Reason="", readiness=false. Elapsed: 20.358579ms Mar 8 11:37:41.554: INFO: Pod "pod-configmaps-5a94fc8d-b7b7-4ec5-bd7e-18253b5ddf7e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.023857123s STEP: Saw pod success Mar 8 11:37:41.554: INFO: Pod "pod-configmaps-5a94fc8d-b7b7-4ec5-bd7e-18253b5ddf7e" satisfied condition "success or failure" Mar 8 11:37:41.557: INFO: Trying to get logs from node kind-control-plane pod pod-configmaps-5a94fc8d-b7b7-4ec5-bd7e-18253b5ddf7e container configmap-volume-test: STEP: delete the pod Mar 8 11:37:41.582: INFO: Waiting for pod pod-configmaps-5a94fc8d-b7b7-4ec5-bd7e-18253b5ddf7e to disappear Mar 8 11:37:41.586: INFO: Pod pod-configmaps-5a94fc8d-b7b7-4ec5-bd7e-18253b5ddf7e no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:37:41.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4569" for this suite. 
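Consuming one ConfigMap in multiple volumes, as the spec above does, is just the same VolumeSource wired into two volume entries with two mounts. Roughly (names, image, and mount paths are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// One ConfigMap surfaced through two volumes in the same pod spec;
	// the test container then reads the same key from both mount points.
	cmSource := corev1.VolumeSource{
		ConfigMap: &corev1.ConfigMapVolumeSource{
			LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"}, // illustrative
		},
	}
	spec := corev1.PodSpec{
		Volumes: []corev1.Volume{
			{Name: "configmap-volume-1", VolumeSource: cmSource},
			{Name: "configmap-volume-2", VolumeSource: cmSource},
		},
		Containers: []corev1.Container{{
			Name:  "configmap-volume-test",
			Image: "busybox:1.31", // assumed
			VolumeMounts: []corev1.VolumeMount{
				{Name: "configmap-volume-1", MountPath: "/etc/configmap-volume-1", ReadOnly: true},
				{Name: "configmap-volume-2", MountPath: "/etc/configmap-volume-2", ReadOnly: true},
			},
		}},
		RestartPolicy: corev1.RestartPolicyNever,
	}
	out, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(out))
}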
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":259,"skipped":4260,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:37:41.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-4d07f479-7644-4aeb-9c7d-61f9b035e3d0 STEP: Creating a pod to test consume secrets Mar 8 11:37:41.701: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e1e3cf64-8190-4ce2-b218-6b54cbf5f6ec" in namespace "projected-6740" to be "success or failure" Mar 8 11:37:41.713: INFO: Pod "pod-projected-secrets-e1e3cf64-8190-4ce2-b218-6b54cbf5f6ec": Phase="Pending", Reason="", readiness=false. Elapsed: 11.398776ms Mar 8 11:37:43.716: INFO: Pod "pod-projected-secrets-e1e3cf64-8190-4ce2-b218-6b54cbf5f6ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014992369s STEP: Saw pod success Mar 8 11:37:43.716: INFO: Pod "pod-projected-secrets-e1e3cf64-8190-4ce2-b218-6b54cbf5f6ec" satisfied condition "success or failure" Mar 8 11:37:43.719: INFO: Trying to get logs from node kind-control-plane pod pod-projected-secrets-e1e3cf64-8190-4ce2-b218-6b54cbf5f6ec container secret-volume-test: STEP: delete the pod Mar 8 11:37:43.744: INFO: Waiting for pod pod-projected-secrets-e1e3cf64-8190-4ce2-b218-6b54cbf5f6ec to disappear Mar 8 11:37:43.763: INFO: Pod pod-projected-secrets-e1e3cf64-8190-4ce2-b218-6b54cbf5f6ec no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:37:43.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6740" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":260,"skipped":4323,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:37:43.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy Mar 8 11:37:43.822: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix105906007/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:37:43.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4538" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":261,"skipped":4323,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:37:43.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Mar 8 11:37:48.503: INFO: Successfully updated pod "adopt-release-hvv48" STEP: Checking that the Job readopts the Pod Mar 8 11:37:48.503: INFO: Waiting up to 15m0s for pod "adopt-release-hvv48" in namespace "job-9213" to be "adopted" Mar 8 11:37:48.511: INFO: Pod "adopt-release-hvv48": Phase="Running", Reason="", readiness=true. Elapsed: 8.541833ms Mar 8 11:37:50.515: INFO: Pod "adopt-release-hvv48": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.011969653s Mar 8 11:37:50.515: INFO: Pod "adopt-release-hvv48" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Mar 8 11:37:51.023: INFO: Successfully updated pod "adopt-release-hvv48" STEP: Checking that the Job releases the Pod Mar 8 11:37:51.023: INFO: Waiting up to 15m0s for pod "adopt-release-hvv48" in namespace "job-9213" to be "released" Mar 8 11:37:51.041: INFO: Pod "adopt-release-hvv48": Phase="Running", Reason="", readiness=true. Elapsed: 18.006966ms Mar 8 11:37:53.063: INFO: Pod "adopt-release-hvv48": Phase="Running", Reason="", readiness=true. Elapsed: 2.040262354s Mar 8 11:37:53.063: INFO: Pod "adopt-release-hvv48" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:37:53.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9213" for this suite. • [SLOW TEST:9.137 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":262,"skipped":4326,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:37:53.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Mar 8 11:37:53.121: INFO: >>> kubeConfig: /root/.kube/config Mar 8 11:37:55.235: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:38:07.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1365" for this suite. 
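For the multiple-groups case above, the suite registers look-alike CRDs that differ only in API group, then verifies both kinds surface in the aggregated OpenAPI document. A hedged sketch of such a pair with the apiextensions v1 Go types (group, kind, and plural names are invented; the schema just preserves unknown fields, matching the permissive shape these test CRDs use):

package main

import (
	"fmt"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

// crd builds a minimal namespaced CRD in the given API group. Creating two
// of these in different groups mirrors the setup of the spec above.
func crd(group, kind, plural string) *apiextv1.CustomResourceDefinition {
	return &apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: plural + "." + group},
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: group,
			Scope: apiextv1.NamespaceScoped,
			Names: apiextv1.CustomResourceDefinitionNames{Kind: kind, Plural: plural},
			Versions: []apiextv1.CustomResourceDefinitionVersion{{
				Name:    "v1",
				Served:  true,
				Storage: true,
				// Effectively no validation: accept any object.
				Schema: &apiextv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextv1.JSONSchemaProps{
						Type:                   "object",
						XPreserveUnknownFields: boolPtr(true),
					},
				},
			}},
		},
	}
}

func main() {
	for _, c := range []*apiextv1.CustomResourceDefinition{
		crd("groupa.example.com", "TestCrA", "testcras"),
		crd("groupb.example.com", "TestCrB", "testcrbs"),
	} {
		fmt.Printf("%s serves kind %s in group %s\n", c.Name, c.Spec.Names.Kind, c.Spec.Group)
	}
}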
• [SLOW TEST:14.283 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":263,"skipped":4361,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:38:07.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-rr4z STEP: Creating a pod to test atomic-volume-subpath Mar 8 11:38:07.446: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-rr4z" in namespace "subpath-9326" to be "success or failure" Mar 8 11:38:07.450: INFO: Pod "pod-subpath-test-downwardapi-rr4z": Phase="Pending", Reason="", readiness=false. Elapsed: 3.91868ms Mar 8 11:38:09.454: INFO: Pod "pod-subpath-test-downwardapi-rr4z": Phase="Running", Reason="", readiness=true. Elapsed: 2.007725188s Mar 8 11:38:11.458: INFO: Pod "pod-subpath-test-downwardapi-rr4z": Phase="Running", Reason="", readiness=true. Elapsed: 4.011522751s Mar 8 11:38:13.462: INFO: Pod "pod-subpath-test-downwardapi-rr4z": Phase="Running", Reason="", readiness=true. Elapsed: 6.016005693s Mar 8 11:38:15.466: INFO: Pod "pod-subpath-test-downwardapi-rr4z": Phase="Running", Reason="", readiness=true. Elapsed: 8.01972077s Mar 8 11:38:17.469: INFO: Pod "pod-subpath-test-downwardapi-rr4z": Phase="Running", Reason="", readiness=true. Elapsed: 10.023189319s Mar 8 11:38:19.473: INFO: Pod "pod-subpath-test-downwardapi-rr4z": Phase="Running", Reason="", readiness=true. Elapsed: 12.026589584s Mar 8 11:38:21.476: INFO: Pod "pod-subpath-test-downwardapi-rr4z": Phase="Running", Reason="", readiness=true. Elapsed: 14.030190661s Mar 8 11:38:23.480: INFO: Pod "pod-subpath-test-downwardapi-rr4z": Phase="Running", Reason="", readiness=true. Elapsed: 16.033935586s Mar 8 11:38:25.489: INFO: Pod "pod-subpath-test-downwardapi-rr4z": Phase="Running", Reason="", readiness=true. Elapsed: 18.043025579s Mar 8 11:38:27.493: INFO: Pod "pod-subpath-test-downwardapi-rr4z": Phase="Running", Reason="", readiness=true. Elapsed: 20.04702437s Mar 8 11:38:29.497: INFO: Pod "pod-subpath-test-downwardapi-rr4z": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.051197594s STEP: Saw pod success Mar 8 11:38:29.498: INFO: Pod "pod-subpath-test-downwardapi-rr4z" satisfied condition "success or failure" Mar 8 11:38:29.500: INFO: Trying to get logs from node kind-control-plane pod pod-subpath-test-downwardapi-rr4z container test-container-subpath-downwardapi-rr4z: STEP: delete the pod Mar 8 11:38:29.519: INFO: Waiting for pod pod-subpath-test-downwardapi-rr4z to disappear Mar 8 11:38:29.524: INFO: Pod pod-subpath-test-downwardapi-rr4z no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-rr4z Mar 8 11:38:29.524: INFO: Deleting pod "pod-subpath-test-downwardapi-rr4z" in namespace "subpath-9326" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:38:29.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9326" for this suite. • [SLOW TEST:22.180 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":264,"skipped":4364,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:38:29.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-8986 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8986 to expose endpoints map[] Mar 8 11:38:29.636: INFO: Get endpoints failed (3.086203ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Mar 8 11:38:30.640: INFO: successfully validated that service multi-endpoint-test in namespace services-8986 exposes endpoints map[] (1.006690443s elapsed) STEP: Creating pod pod1 in namespace services-8986 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8986 to expose endpoints map[pod1:[100]] Mar 8 11:38:32.681: INFO: successfully validated that service multi-endpoint-test in namespace services-8986 exposes endpoints map[pod1:[100]] (2.033265691s elapsed) STEP: Creating pod pod2 in namespace services-8986 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8986 to expose endpoints map[pod1:[100] pod2:[101]] Mar 8 
11:38:34.722: INFO: successfully validated that service multi-endpoint-test in namespace services-8986 exposes endpoints map[pod1:[100] pod2:[101]] (2.036831406s elapsed) STEP: Deleting pod pod1 in namespace services-8986 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8986 to expose endpoints map[pod2:[101]] Mar 8 11:38:34.758: INFO: successfully validated that service multi-endpoint-test in namespace services-8986 exposes endpoints map[pod2:[101]] (32.264033ms elapsed) STEP: Deleting pod pod2 in namespace services-8986 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8986 to expose endpoints map[] Mar 8 11:38:34.788: INFO: successfully validated that service multi-endpoint-test in namespace services-8986 exposes endpoints map[] (24.655932ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:38:34.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8986" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:5.332 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":265,"skipped":4372,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:38:34.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 8 11:38:34.925: INFO: Waiting up to 5m0s for pod "pod-5887c0a3-e7d1-4dfe-9fb8-a4ba78ea06b1" in namespace "emptydir-1602" to be "success or failure" Mar 8 11:38:34.930: INFO: Pod "pod-5887c0a3-e7d1-4dfe-9fb8-a4ba78ea06b1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.599479ms Mar 8 11:38:36.934: INFO: Pod "pod-5887c0a3-e7d1-4dfe-9fb8-a4ba78ea06b1": Phase="Running", Reason="", readiness=true. Elapsed: 2.008167635s Mar 8 11:38:38.937: INFO: Pod "pod-5887c0a3-e7d1-4dfe-9fb8-a4ba78ea06b1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011663855s STEP: Saw pod success Mar 8 11:38:38.937: INFO: Pod "pod-5887c0a3-e7d1-4dfe-9fb8-a4ba78ea06b1" satisfied condition "success or failure" Mar 8 11:38:38.940: INFO: Trying to get logs from node kind-control-plane pod pod-5887c0a3-e7d1-4dfe-9fb8-a4ba78ea06b1 container test-container: STEP: delete the pod Mar 8 11:38:38.975: INFO: Waiting for pod pod-5887c0a3-e7d1-4dfe-9fb8-a4ba78ea06b1 to disappear Mar 8 11:38:38.984: INFO: Pod pod-5887c0a3-e7d1-4dfe-9fb8-a4ba78ea06b1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:38:38.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1602" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":266,"skipped":4376,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:38:38.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 11:38:39.050: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Mar 8 11:38:41.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6037 create -f -' Mar 8 11:38:44.367: INFO: stderr: "" Mar 8 11:38:44.367: INFO: stdout: "e2e-test-crd-publish-openapi-8009-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 8 11:38:44.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6037 delete e2e-test-crd-publish-openapi-8009-crds test-foo' Mar 8 11:38:44.484: INFO: stderr: "" Mar 8 11:38:44.484: INFO: stdout: "e2e-test-crd-publish-openapi-8009-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Mar 8 11:38:44.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6037 apply -f -' Mar 8 11:38:44.739: INFO: stderr: "" Mar 8 11:38:44.740: INFO: stdout: "e2e-test-crd-publish-openapi-8009-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 8 11:38:44.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6037 delete e2e-test-crd-publish-openapi-8009-crds test-foo' Mar 8 11:38:44.861: INFO: stderr: "" Mar 8 11:38:44.861: INFO: stdout: "e2e-test-crd-publish-openapi-8009-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Mar 8 
11:38:44.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6037 create -f -' Mar 8 11:38:45.119: INFO: rc: 1 Mar 8 11:38:45.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6037 apply -f -' Mar 8 11:38:45.386: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Mar 8 11:38:45.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6037 create -f -' Mar 8 11:38:45.665: INFO: rc: 1 Mar 8 11:38:45.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6037 apply -f -' Mar 8 11:38:45.947: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Mar 8 11:38:45.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8009-crds' Mar 8 11:38:46.237: INFO: stderr: "" Mar 8 11:38:46.237: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8009-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Mar 8 11:38:46.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8009-crds.metadata' Mar 8 11:38:46.496: INFO: stderr: "" Mar 8 11:38:46.496: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8009-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. 
It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. 
If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. 
Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Mar 8 11:38:46.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8009-crds.spec' Mar 8 11:38:46.775: INFO: stderr: "" Mar 8 11:38:46.775: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8009-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Mar 8 11:38:46.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8009-crds.spec.bars' Mar 8 11:38:47.054: INFO: stderr: "" Mar 8 11:38:47.054: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8009-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Mar 8 11:38:47.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8009-crds.spec.bars2' Mar 8 11:38:47.331: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:38:50.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6037" for this suite. • [SLOW TEST:11.811 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":267,"skipped":4381,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} S ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:38:50.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 11:38:50.882: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. 
Mar 8 11:38:50.890: INFO: Number of nodes with available pods: 0 Mar 8 11:38:50.890: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Mar 8 11:38:50.924: INFO: Number of nodes with available pods: 0 Mar 8 11:38:50.924: INFO: Node kind-control-plane is running more than one daemon pod Mar 8 11:38:51.928: INFO: Number of nodes with available pods: 0 Mar 8 11:38:51.928: INFO: Node kind-control-plane is running more than one daemon pod Mar 8 11:38:52.928: INFO: Number of nodes with available pods: 1 Mar 8 11:38:52.928: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Mar 8 11:38:52.957: INFO: Number of nodes with available pods: 1 Mar 8 11:38:52.957: INFO: Number of running nodes: 0, number of available pods: 1 Mar 8 11:38:53.960: INFO: Number of nodes with available pods: 0 Mar 8 11:38:53.960: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Mar 8 11:38:53.985: INFO: Number of nodes with available pods: 0 Mar 8 11:38:53.985: INFO: Node kind-control-plane is running more than one daemon pod Mar 8 11:38:54.989: INFO: Number of nodes with available pods: 0 Mar 8 11:38:54.989: INFO: Node kind-control-plane is running more than one daemon pod Mar 8 11:38:55.997: INFO: Number of nodes with available pods: 0 Mar 8 11:38:55.997: INFO: Node kind-control-plane is running more than one daemon pod Mar 8 11:38:57.011: INFO: Number of nodes with available pods: 0 Mar 8 11:38:57.011: INFO: Node kind-control-plane is running more than one daemon pod Mar 8 11:38:57.988: INFO: Number of nodes with available pods: 0 Mar 8 11:38:57.988: INFO: Node kind-control-plane is running more than one daemon pod Mar 8 11:38:58.989: INFO: Number of nodes with available pods: 1 Mar 8 11:38:58.989: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8917, will wait for the garbage collector to delete the pods Mar 8 11:38:59.053: INFO: Deleting DaemonSet.extensions daemon-set took: 5.696734ms Mar 8 11:38:59.354: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.322744ms Mar 8 11:39:02.257: INFO: Number of nodes with available pods: 0 Mar 8 11:39:02.257: INFO: Number of running nodes: 0, number of available pods: 0 Mar 8 11:39:02.280: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8917/daemonsets","resourceVersion":"29742"},"items":null} Mar 8 11:39:02.282: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8917/pods","resourceVersion":"29742"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:39:02.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8917" for this suite. 
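The blue-to-green flip in the test above is plain node relabeling; the DaemonSet controller launches or evicts pods as the selector starts or stops matching. Done by hand against this kind cluster it would look roughly like this (same hypothetical color key as in the sketch above):

  kubectl label node kind-control-plane color=blue               # selector matches: daemon pod is scheduled
  kubectl label node kind-control-plane color=green --overwrite  # selector no longer matches: pod is evicted
  kubectl label node kind-control-plane color-                   # remove the label entirely

--overwrite is required when changing an existing label, and the trailing dash removes it; both are stock kubectl label behavior.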
• [SLOW TEST:11.514 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":268,"skipped":4382,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:39:02.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 11:39:02.413: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Mar 8 11:39:02.429: INFO: Number of nodes with available pods: 0 Mar 8 11:39:02.429: INFO: Node kind-control-plane is running more than one daemon pod Mar 8 11:39:03.446: INFO: Number of nodes with available pods: 0 Mar 8 11:39:03.446: INFO: Node kind-control-plane is running more than one daemon pod Mar 8 11:39:04.436: INFO: Number of nodes with available pods: 1 Mar 8 11:39:04.436: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Mar 8 11:39:04.459: INFO: Wrong image for pod: daemon-set-4jrv5. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 8 11:39:05.510: INFO: Wrong image for pod: daemon-set-4jrv5. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 8 11:39:06.510: INFO: Wrong image for pod: daemon-set-4jrv5. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 8 11:39:07.526: INFO: Wrong image for pod: daemon-set-4jrv5. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 8 11:39:07.526: INFO: Pod daemon-set-4jrv5 is not available Mar 8 11:39:08.510: INFO: Pod daemon-set-5vhmt is not available STEP: Check that daemon pods are still running on every node of the cluster. 
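The image-update phase just above is a standard DaemonSet rolling update: patch the pod template's image and let the controller replace pods node by node; the "Wrong image for pod" lines are simply the poll loop watching the old pod drain. A hand-driven equivalent, assuming the template's container is named app (hypothetical; the test's real container name may differ):

  kubectl -n daemonsets-7468 set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/agnhost:2.8
  kubectl -n daemonsets-7468 rollout status daemonset/daemon-set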
Mar 8 11:39:08.518: INFO: Number of nodes with available pods: 0 Mar 8 11:39:08.518: INFO: Node kind-control-plane is running more than one daemon pod Mar 8 11:39:09.524: INFO: Number of nodes with available pods: 0 Mar 8 11:39:09.524: INFO: Node kind-control-plane is running more than one daemon pod Mar 8 11:39:10.529: INFO: Number of nodes with available pods: 1 Mar 8 11:39:10.529: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7468, will wait for the garbage collector to delete the pods Mar 8 11:39:10.603: INFO: Deleting DaemonSet.extensions daemon-set took: 6.108388ms Mar 8 11:39:10.703: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.277036ms Mar 8 11:39:14.310: INFO: Number of nodes with available pods: 0 Mar 8 11:39:14.310: INFO: Number of running nodes: 0, number of available pods: 0 Mar 8 11:39:14.313: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7468/daemonsets","resourceVersion":"29842"},"items":null} Mar 8 11:39:14.315: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7468/pods","resourceVersion":"29842"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:39:14.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7468" for this suite. • [SLOW TEST:12.013 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":269,"skipped":4387,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:39:14.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:39:14.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-5352" for this suite. 
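The Lease check above is plain CRUD against the coordination.k8s.io/v1 API, the same resource that kubelet node heartbeats and leader election build on. A minimal Lease that could be created and read back by hand (names and values are illustrative):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: coordination.k8s.io/v1
  kind: Lease
  metadata:
    name: demo-lease
  spec:
    holderIdentity: demo-holder   # who currently holds the lease
    leaseDurationSeconds: 15      # how long the holder counts as live after the last renewal
  EOF
  kubectl get lease demo-lease -o yaml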
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":270,"skipped":4390,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:39:14.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:39:16.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7431" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":271,"skipped":4412,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:39:16.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 11:39:16.683: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 8 11:39:19.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3996 create -f -' Mar 8 11:39:21.674: INFO: stderr: "" Mar 8 11:39:21.674: INFO: stdout: "e2e-test-crd-publish-openapi-3851-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 8 11:39:21.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3996 delete e2e-test-crd-publish-openapi-3851-crds test-cr' Mar 8 11:39:21.792: INFO: stderr: "" Mar 8 11:39:21.792: INFO: stdout: "e2e-test-crd-publish-openapi-3851-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Mar 8 11:39:21.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3996 apply -f -' Mar 8 11:39:22.049: INFO: stderr: "" Mar 8 11:39:22.049: 
INFO: stdout: "e2e-test-crd-publish-openapi-3851-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 8 11:39:22.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3996 delete e2e-test-crd-publish-openapi-3851-crds test-cr' Mar 8 11:39:22.163: INFO: stderr: "" Mar 8 11:39:22.163: INFO: stdout: "e2e-test-crd-publish-openapi-3851-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 8 11:39:22.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3851-crds' Mar 8 11:39:22.419: INFO: stderr: "" Mar 8 11:39:22.419: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3851-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:39:25.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3996" for this suite. • [SLOW TEST:8.847 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":272,"skipped":4426,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:39:25.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-35be2205-e643-43ce-a56e-0a5cd4419512 STEP: Creating a pod to test consume secrets Mar 8 11:39:25.511: INFO: Waiting up to 5m0s for pod "pod-secrets-011cc4c2-7765-4963-b9f9-9ef7f0c2fad1" in namespace "secrets-238" to be "success or failure" Mar 8 11:39:25.514: INFO: Pod "pod-secrets-011cc4c2-7765-4963-b9f9-9ef7f0c2fad1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.678456ms Mar 8 11:39:27.518: INFO: Pod "pod-secrets-011cc4c2-7765-4963-b9f9-9ef7f0c2fad1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.006518792s STEP: Saw pod success Mar 8 11:39:27.518: INFO: Pod "pod-secrets-011cc4c2-7765-4963-b9f9-9ef7f0c2fad1" satisfied condition "success or failure" Mar 8 11:39:27.520: INFO: Trying to get logs from node kind-control-plane pod pod-secrets-011cc4c2-7765-4963-b9f9-9ef7f0c2fad1 container secret-volume-test: STEP: delete the pod Mar 8 11:39:27.570: INFO: Waiting for pod pod-secrets-011cc4c2-7765-4963-b9f9-9ef7f0c2fad1 to disappear Mar 8 11:39:27.580: INFO: Pod pod-secrets-011cc4c2-7765-4963-b9f9-9ef7f0c2fad1 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:39:27.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-238" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":273,"skipped":4462,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:39:27.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 8 11:39:27.672: INFO: Waiting up to 5m0s for pod "pod-7a1fd185-cb08-4a95-b83a-77cd9e0aa39d" in namespace "emptydir-3171" to be "success or failure" Mar 8 11:39:27.676: INFO: Pod "pod-7a1fd185-cb08-4a95-b83a-77cd9e0aa39d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.742701ms Mar 8 11:39:29.680: INFO: Pod "pod-7a1fd185-cb08-4a95-b83a-77cd9e0aa39d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007606362s STEP: Saw pod success Mar 8 11:39:29.680: INFO: Pod "pod-7a1fd185-cb08-4a95-b83a-77cd9e0aa39d" satisfied condition "success or failure" Mar 8 11:39:29.683: INFO: Trying to get logs from node kind-control-plane pod pod-7a1fd185-cb08-4a95-b83a-77cd9e0aa39d container test-container: STEP: delete the pod Mar 8 11:39:29.725: INFO: Waiting for pod pod-7a1fd185-cb08-4a95-b83a-77cd9e0aa39d to disappear Mar 8 11:39:29.730: INFO: Pod pod-7a1fd185-cb08-4a95-b83a-77cd9e0aa39d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 11:39:29.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3171" for this suite. 
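The (root,0777,default) triple in the test name above decodes as: the container runs as root, the file created in the volume is expected to carry mode 0777, and the emptyDir uses the default medium (node disk rather than medium: Memory). A rough busybox stand-in for the mount-test image the framework uses:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-mode-demo      # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && stat -c '%a' /test-volume/f"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir: {}
  EOF
  kubectl logs emptydir-mode-demo   # once the pod completes; expect: 777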
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":274,"skipped":4493,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 11:39:29.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-2835 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-2835 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2835 Mar 8 11:39:29.849: INFO: Found 0 stateful pods, waiting for 1 Mar 8 11:39:39.853: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Mar 8 11:39:39.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2835 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 8 11:39:40.128: INFO: stderr: "I0308 11:39:40.026699 3751 log.go:172] (0xc0001182c0) (0xc0006a5ae0) Create stream\nI0308 11:39:40.026755 3751 log.go:172] (0xc0001182c0) (0xc0006a5ae0) Stream added, broadcasting: 1\nI0308 11:39:40.029741 3751 log.go:172] (0xc0001182c0) Reply frame received for 1\nI0308 11:39:40.029784 3751 log.go:172] (0xc0001182c0) (0xc0006a5b80) Create stream\nI0308 11:39:40.029796 3751 log.go:172] (0xc0001182c0) (0xc0006a5b80) Stream added, broadcasting: 3\nI0308 11:39:40.030871 3751 log.go:172] (0xc0001182c0) Reply frame received for 3\nI0308 11:39:40.030924 3751 log.go:172] (0xc0001182c0) (0xc0006a5c20) Create stream\nI0308 11:39:40.030939 3751 log.go:172] (0xc0001182c0) (0xc0006a5c20) Stream added, broadcasting: 5\nI0308 11:39:40.031868 3751 log.go:172] (0xc0001182c0) Reply frame received for 5\nI0308 11:39:40.093804 3751 log.go:172] (0xc0001182c0) Data frame received for 5\nI0308 11:39:40.093828 3751 log.go:172] (0xc0006a5c20) (5) Data frame handling\nI0308 11:39:40.093845 3751 log.go:172] (0xc0006a5c20) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0308 11:39:40.122913 3751 log.go:172] (0xc0001182c0) Data frame received for 3\nI0308 11:39:40.123029 3751 log.go:172] (0xc0006a5b80) (3) Data frame handling\nI0308 11:39:40.123063 3751 log.go:172] (0xc0006a5b80) (3) Data frame sent\nI0308 11:39:40.123078 3751 log.go:172] (0xc0001182c0) Data frame 
received for 3\nI0308 11:39:40.123154 3751 log.go:172] (0xc0006a5b80) (3) Data frame handling\nI0308 11:39:40.123185 3751 log.go:172] (0xc0001182c0) Data frame received for 5\nI0308 11:39:40.123201 3751 log.go:172] (0xc0006a5c20) (5) Data frame handling\nI0308 11:39:40.124954 3751 log.go:172] (0xc0001182c0) Data frame received for 1\nI0308 11:39:40.124989 3751 log.go:172] (0xc0006a5ae0) (1) Data frame handling\nI0308 11:39:40.125004 3751 log.go:172] (0xc0006a5ae0) (1) Data frame sent\nI0308 11:39:40.125018 3751 log.go:172] (0xc0001182c0) (0xc0006a5ae0) Stream removed, broadcasting: 1\nI0308 11:39:40.125036 3751 log.go:172] (0xc0001182c0) Go away received\nI0308 11:39:40.125409 3751 log.go:172] (0xc0001182c0) (0xc0006a5ae0) Stream removed, broadcasting: 1\nI0308 11:39:40.125434 3751 log.go:172] (0xc0001182c0) (0xc0006a5b80) Stream removed, broadcasting: 3\nI0308 11:39:40.125447 3751 log.go:172] (0xc0001182c0) (0xc0006a5c20) Stream removed, broadcasting: 5\n" Mar 8 11:39:40.128: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 8 11:39:40.128: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 8 11:39:40.132: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 8 11:39:50.136: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 8 11:39:50.136: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 11:39:50.176: INFO: POD NODE PHASE GRACE CONDITIONS Mar 8 11:39:50.176: INFO: ss-0 kind-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:29 +0000 UTC }] Mar 8 11:39:50.176: INFO: ss-1 Pending [] Mar 8 11:39:50.176: INFO: Mar 8 11:39:50.176: INFO: StatefulSet ss has not reached scale 3, at 2 Mar 8 11:39:51.181: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.96894953s Mar 8 11:39:52.185: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.964741127s Mar 8 11:39:53.189: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.960691292s Mar 8 11:39:54.193: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.956395803s Mar 8 11:39:55.198: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.951801486s Mar 8 11:39:56.202: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.947213053s Mar 8 11:39:57.206: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.943036841s Mar 8 11:39:58.211: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.938915165s Mar 8 11:39:59.215: INFO: Verifying statefulset ss doesn't scale past 3 for another 934.726788ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2835 Mar 8 11:40:00.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2835 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 8 11:40:00.470: INFO: stderr: "I0308 11:40:00.399518 3771 log.go:172] (0xc0009dd970) (0xc000a60960) Create 
stream\nI0308 11:40:00.399572 3771 log.go:172] (0xc0009dd970) (0xc000a60960) Stream added, broadcasting: 1\nI0308 11:40:00.403904 3771 log.go:172] (0xc0009dd970) Reply frame received for 1\nI0308 11:40:00.403943 3771 log.go:172] (0xc0009dd970) (0xc0006b46e0) Create stream\nI0308 11:40:00.403955 3771 log.go:172] (0xc0009dd970) (0xc0006b46e0) Stream added, broadcasting: 3\nI0308 11:40:00.405131 3771 log.go:172] (0xc0009dd970) Reply frame received for 3\nI0308 11:40:00.405177 3771 log.go:172] (0xc0009dd970) (0xc00052d4a0) Create stream\nI0308 11:40:00.405192 3771 log.go:172] (0xc0009dd970) (0xc00052d4a0) Stream added, broadcasting: 5\nI0308 11:40:00.406178 3771 log.go:172] (0xc0009dd970) Reply frame received for 5\nI0308 11:40:00.465125 3771 log.go:172] (0xc0009dd970) Data frame received for 3\nI0308 11:40:00.465162 3771 log.go:172] (0xc0009dd970) Data frame received for 5\nI0308 11:40:00.465199 3771 log.go:172] (0xc00052d4a0) (5) Data frame handling\nI0308 11:40:00.465216 3771 log.go:172] (0xc00052d4a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0308 11:40:00.465230 3771 log.go:172] (0xc0006b46e0) (3) Data frame handling\nI0308 11:40:00.465238 3771 log.go:172] (0xc0006b46e0) (3) Data frame sent\nI0308 11:40:00.465339 3771 log.go:172] (0xc0009dd970) Data frame received for 5\nI0308 11:40:00.465397 3771 log.go:172] (0xc00052d4a0) (5) Data frame handling\nI0308 11:40:00.465664 3771 log.go:172] (0xc0009dd970) Data frame received for 3\nI0308 11:40:00.465682 3771 log.go:172] (0xc0006b46e0) (3) Data frame handling\nI0308 11:40:00.467193 3771 log.go:172] (0xc0009dd970) Data frame received for 1\nI0308 11:40:00.467218 3771 log.go:172] (0xc000a60960) (1) Data frame handling\nI0308 11:40:00.467231 3771 log.go:172] (0xc000a60960) (1) Data frame sent\nI0308 11:40:00.467245 3771 log.go:172] (0xc0009dd970) (0xc000a60960) Stream removed, broadcasting: 1\nI0308 11:40:00.467598 3771 log.go:172] (0xc0009dd970) Go away received\nI0308 11:40:00.467633 3771 log.go:172] (0xc0009dd970) (0xc000a60960) Stream removed, broadcasting: 1\nI0308 11:40:00.467651 3771 log.go:172] (0xc0009dd970) (0xc0006b46e0) Stream removed, broadcasting: 3\nI0308 11:40:00.467667 3771 log.go:172] (0xc0009dd970) (0xc00052d4a0) Stream removed, broadcasting: 5\n" Mar 8 11:40:00.470: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 8 11:40:00.470: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 8 11:40:00.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2835 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 8 11:40:00.691: INFO: stderr: "I0308 11:40:00.612414 3786 log.go:172] (0xc000918630) (0xc0005eb360) Create stream\nI0308 11:40:00.612463 3786 log.go:172] (0xc000918630) (0xc0005eb360) Stream added, broadcasting: 1\nI0308 11:40:00.615482 3786 log.go:172] (0xc000918630) Reply frame received for 1\nI0308 11:40:00.615523 3786 log.go:172] (0xc000918630) (0xc0008f6000) Create stream\nI0308 11:40:00.615536 3786 log.go:172] (0xc000918630) (0xc0008f6000) Stream added, broadcasting: 3\nI0308 11:40:00.616275 3786 log.go:172] (0xc000918630) Reply frame received for 3\nI0308 11:40:00.616303 3786 log.go:172] (0xc000918630) (0xc000996000) Create stream\nI0308 11:40:00.616313 3786 log.go:172] (0xc000918630) (0xc000996000) Stream added, broadcasting: 5\nI0308 11:40:00.617104 3786 log.go:172] (0xc000918630) 
Reply frame received for 5\nI0308 11:40:00.685952 3786 log.go:172] (0xc000918630) Data frame received for 5\nI0308 11:40:00.686001 3786 log.go:172] (0xc000918630) Data frame received for 3\nI0308 11:40:00.686028 3786 log.go:172] (0xc0008f6000) (3) Data frame handling\nI0308 11:40:00.686047 3786 log.go:172] (0xc0008f6000) (3) Data frame sent\nI0308 11:40:00.686057 3786 log.go:172] (0xc000918630) Data frame received for 3\nI0308 11:40:00.686065 3786 log.go:172] (0xc0008f6000) (3) Data frame handling\nI0308 11:40:00.686094 3786 log.go:172] (0xc000996000) (5) Data frame handling\nI0308 11:40:00.686109 3786 log.go:172] (0xc000996000) (5) Data frame sent\nI0308 11:40:00.686144 3786 log.go:172] (0xc000918630) Data frame received for 5\nI0308 11:40:00.686153 3786 log.go:172] (0xc000996000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0308 11:40:00.687886 3786 log.go:172] (0xc000918630) Data frame received for 1\nI0308 11:40:00.687909 3786 log.go:172] (0xc0005eb360) (1) Data frame handling\nI0308 11:40:00.687923 3786 log.go:172] (0xc0005eb360) (1) Data frame sent\nI0308 11:40:00.688029 3786 log.go:172] (0xc000918630) (0xc0005eb360) Stream removed, broadcasting: 1\nI0308 11:40:00.688114 3786 log.go:172] (0xc000918630) Go away received\nI0308 11:40:00.688332 3786 log.go:172] (0xc000918630) (0xc0005eb360) Stream removed, broadcasting: 1\nI0308 11:40:00.688351 3786 log.go:172] (0xc000918630) (0xc0008f6000) Stream removed, broadcasting: 3\nI0308 11:40:00.688360 3786 log.go:172] (0xc000918630) (0xc000996000) Stream removed, broadcasting: 5\n" Mar 8 11:40:00.691: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 8 11:40:00.691: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 8 11:40:00.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2835 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 8 11:40:00.907: INFO: stderr: "I0308 11:40:00.838706 3809 log.go:172] (0xc000994000) (0xc0006b9cc0) Create stream\nI0308 11:40:00.838753 3809 log.go:172] (0xc000994000) (0xc0006b9cc0) Stream added, broadcasting: 1\nI0308 11:40:00.841677 3809 log.go:172] (0xc000994000) Reply frame received for 1\nI0308 11:40:00.841704 3809 log.go:172] (0xc000994000) (0xc0006b9d60) Create stream\nI0308 11:40:00.841712 3809 log.go:172] (0xc000994000) (0xc0006b9d60) Stream added, broadcasting: 3\nI0308 11:40:00.842456 3809 log.go:172] (0xc000994000) Reply frame received for 3\nI0308 11:40:00.842483 3809 log.go:172] (0xc000994000) (0xc00061c780) Create stream\nI0308 11:40:00.842494 3809 log.go:172] (0xc000994000) (0xc00061c780) Stream added, broadcasting: 5\nI0308 11:40:00.843186 3809 log.go:172] (0xc000994000) Reply frame received for 5\nI0308 11:40:00.902149 3809 log.go:172] (0xc000994000) Data frame received for 5\nI0308 11:40:00.902170 3809 log.go:172] (0xc00061c780) (5) Data frame handling\nI0308 11:40:00.902189 3809 log.go:172] (0xc00061c780) (5) Data frame sent\nI0308 11:40:00.902196 3809 log.go:172] (0xc000994000) Data frame received for 5\nI0308 11:40:00.902202 3809 log.go:172] (0xc00061c780) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0308 11:40:00.902658 3809 log.go:172] (0xc000994000) Data frame received for 
3\nI0308 11:40:00.902686 3809 log.go:172] (0xc0006b9d60) (3) Data frame handling\nI0308 11:40:00.902719 3809 log.go:172] (0xc0006b9d60) (3) Data frame sent\nI0308 11:40:00.902732 3809 log.go:172] (0xc000994000) Data frame received for 3\nI0308 11:40:00.902742 3809 log.go:172] (0xc0006b9d60) (3) Data frame handling\nI0308 11:40:00.904310 3809 log.go:172] (0xc000994000) Data frame received for 1\nI0308 11:40:00.904350 3809 log.go:172] (0xc0006b9cc0) (1) Data frame handling\nI0308 11:40:00.904383 3809 log.go:172] (0xc0006b9cc0) (1) Data frame sent\nI0308 11:40:00.904480 3809 log.go:172] (0xc000994000) (0xc0006b9cc0) Stream removed, broadcasting: 1\nI0308 11:40:00.904534 3809 log.go:172] (0xc000994000) Go away received\nI0308 11:40:00.904841 3809 log.go:172] (0xc000994000) (0xc0006b9cc0) Stream removed, broadcasting: 1\nI0308 11:40:00.904864 3809 log.go:172] (0xc000994000) (0xc0006b9d60) Stream removed, broadcasting: 3\nI0308 11:40:00.904889 3809 log.go:172] (0xc000994000) (0xc00061c780) Stream removed, broadcasting: 5\n" Mar 8 11:40:00.907: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 8 11:40:00.907: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 8 11:40:00.911: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Mar 8 11:40:10.914: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 8 11:40:10.914: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 8 11:40:10.914: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Mar 8 11:40:10.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2835 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 8 11:40:11.115: INFO: stderr: "I0308 11:40:11.059308 3829 log.go:172] (0xc000800000) (0xc0006f6280) Create stream\nI0308 11:40:11.059364 3829 log.go:172] (0xc000800000) (0xc0006f6280) Stream added, broadcasting: 1\nI0308 11:40:11.061233 3829 log.go:172] (0xc000800000) Reply frame received for 1\nI0308 11:40:11.061301 3829 log.go:172] (0xc000800000) (0xc0008d8000) Create stream\nI0308 11:40:11.061312 3829 log.go:172] (0xc000800000) (0xc0008d8000) Stream added, broadcasting: 3\nI0308 11:40:11.062098 3829 log.go:172] (0xc000800000) Reply frame received for 3\nI0308 11:40:11.062146 3829 log.go:172] (0xc000800000) (0xc0008f4000) Create stream\nI0308 11:40:11.062158 3829 log.go:172] (0xc000800000) (0xc0008f4000) Stream added, broadcasting: 5\nI0308 11:40:11.062854 3829 log.go:172] (0xc000800000) Reply frame received for 5\nI0308 11:40:11.112267 3829 log.go:172] (0xc000800000) Data frame received for 3\nI0308 11:40:11.112288 3829 log.go:172] (0xc0008d8000) (3) Data frame handling\nI0308 11:40:11.112297 3829 log.go:172] (0xc0008d8000) (3) Data frame sent\nI0308 11:40:11.112303 3829 log.go:172] (0xc000800000) Data frame received for 3\nI0308 11:40:11.112308 3829 log.go:172] (0xc0008d8000) (3) Data frame handling\nI0308 11:40:11.112327 3829 log.go:172] (0xc000800000) Data frame received for 5\nI0308 11:40:11.112333 3829 log.go:172] (0xc0008f4000) (5) Data frame handling\nI0308 11:40:11.112339 3829 log.go:172] (0xc0008f4000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0308 11:40:11.112639 3829 log.go:172] 
(0xc000800000) Data frame received for 5\nI0308 11:40:11.112665 3829 log.go:172] (0xc0008f4000) (5) Data frame handling\nI0308 11:40:11.113758 3829 log.go:172] (0xc000800000) Data frame received for 1\nI0308 11:40:11.113770 3829 log.go:172] (0xc0006f6280) (1) Data frame handling\nI0308 11:40:11.113782 3829 log.go:172] (0xc0006f6280) (1) Data frame sent\nI0308 11:40:11.113790 3829 log.go:172] (0xc000800000) (0xc0006f6280) Stream removed, broadcasting: 1\nI0308 11:40:11.113850 3829 log.go:172] (0xc000800000) Go away received\nI0308 11:40:11.114029 3829 log.go:172] (0xc000800000) (0xc0006f6280) Stream removed, broadcasting: 1\nI0308 11:40:11.114044 3829 log.go:172] (0xc000800000) (0xc0008d8000) Stream removed, broadcasting: 3\nI0308 11:40:11.114050 3829 log.go:172] (0xc000800000) (0xc0008f4000) Stream removed, broadcasting: 5\n" Mar 8 11:40:11.115: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 8 11:40:11.116: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 8 11:40:11.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2835 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 8 11:40:11.351: INFO: stderr: "I0308 11:40:11.237770 3848 log.go:172] (0xc000a76bb0) (0xc00061ff40) Create stream\nI0308 11:40:11.237806 3848 log.go:172] (0xc000a76bb0) (0xc00061ff40) Stream added, broadcasting: 1\nI0308 11:40:11.239443 3848 log.go:172] (0xc000a76bb0) Reply frame received for 1\nI0308 11:40:11.239693 3848 log.go:172] (0xc000a76bb0) (0xc000ab6000) Create stream\nI0308 11:40:11.239720 3848 log.go:172] (0xc000a76bb0) (0xc000ab6000) Stream added, broadcasting: 3\nI0308 11:40:11.242187 3848 log.go:172] (0xc000a76bb0) Reply frame received for 3\nI0308 11:40:11.242214 3848 log.go:172] (0xc000a76bb0) (0xc0005b8780) Create stream\nI0308 11:40:11.242222 3848 log.go:172] (0xc000a76bb0) (0xc0005b8780) Stream added, broadcasting: 5\nI0308 11:40:11.242962 3848 log.go:172] (0xc000a76bb0) Reply frame received for 5\nI0308 11:40:11.316769 3848 log.go:172] (0xc000a76bb0) Data frame received for 5\nI0308 11:40:11.316786 3848 log.go:172] (0xc0005b8780) (5) Data frame handling\nI0308 11:40:11.316799 3848 log.go:172] (0xc0005b8780) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0308 11:40:11.346973 3848 log.go:172] (0xc000a76bb0) Data frame received for 3\nI0308 11:40:11.346995 3848 log.go:172] (0xc000ab6000) (3) Data frame handling\nI0308 11:40:11.347018 3848 log.go:172] (0xc000ab6000) (3) Data frame sent\nI0308 11:40:11.347215 3848 log.go:172] (0xc000a76bb0) Data frame received for 5\nI0308 11:40:11.347234 3848 log.go:172] (0xc0005b8780) (5) Data frame handling\nI0308 11:40:11.347504 3848 log.go:172] (0xc000a76bb0) Data frame received for 3\nI0308 11:40:11.347523 3848 log.go:172] (0xc000ab6000) (3) Data frame handling\nI0308 11:40:11.348983 3848 log.go:172] (0xc000a76bb0) Data frame received for 1\nI0308 11:40:11.349006 3848 log.go:172] (0xc00061ff40) (1) Data frame handling\nI0308 11:40:11.349022 3848 log.go:172] (0xc00061ff40) (1) Data frame sent\nI0308 11:40:11.349038 3848 log.go:172] (0xc000a76bb0) (0xc00061ff40) Stream removed, broadcasting: 1\nI0308 11:40:11.349080 3848 log.go:172] (0xc000a76bb0) Go away received\nI0308 11:40:11.349430 3848 log.go:172] (0xc000a76bb0) (0xc00061ff40) Stream removed, broadcasting: 1\nI0308 11:40:11.349446 3848 log.go:172] (0xc000a76bb0) 
(0xc000ab6000) Stream removed, broadcasting: 3\nI0308 11:40:11.349456 3848 log.go:172] (0xc000a76bb0) (0xc0005b8780) Stream removed, broadcasting: 5\n" Mar 8 11:40:11.352: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 8 11:40:11.352: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 8 11:40:11.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2835 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 8 11:40:11.567: INFO: stderr: "I0308 11:40:11.487462 3870 log.go:172] (0xc000105290) (0xc0008dc000) Create stream\nI0308 11:40:11.487529 3870 log.go:172] (0xc000105290) (0xc0008dc000) Stream added, broadcasting: 1\nI0308 11:40:11.489395 3870 log.go:172] (0xc000105290) Reply frame received for 1\nI0308 11:40:11.489425 3870 log.go:172] (0xc000105290) (0xc000657ae0) Create stream\nI0308 11:40:11.489435 3870 log.go:172] (0xc000105290) (0xc000657ae0) Stream added, broadcasting: 3\nI0308 11:40:11.490247 3870 log.go:172] (0xc000105290) Reply frame received for 3\nI0308 11:40:11.490282 3870 log.go:172] (0xc000105290) (0xc000657cc0) Create stream\nI0308 11:40:11.490296 3870 log.go:172] (0xc000105290) (0xc000657cc0) Stream added, broadcasting: 5\nI0308 11:40:11.491065 3870 log.go:172] (0xc000105290) Reply frame received for 5\nI0308 11:40:11.540298 3870 log.go:172] (0xc000105290) Data frame received for 5\nI0308 11:40:11.540319 3870 log.go:172] (0xc000657cc0) (5) Data frame handling\nI0308 11:40:11.540331 3870 log.go:172] (0xc000657cc0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0308 11:40:11.562716 3870 log.go:172] (0xc000105290) Data frame received for 3\nI0308 11:40:11.562733 3870 log.go:172] (0xc000657ae0) (3) Data frame handling\nI0308 11:40:11.562741 3870 log.go:172] (0xc000657ae0) (3) Data frame sent\nI0308 11:40:11.562908 3870 log.go:172] (0xc000105290) Data frame received for 5\nI0308 11:40:11.562934 3870 log.go:172] (0xc000657cc0) (5) Data frame handling\nI0308 11:40:11.563191 3870 log.go:172] (0xc000105290) Data frame received for 3\nI0308 11:40:11.563212 3870 log.go:172] (0xc000657ae0) (3) Data frame handling\nI0308 11:40:11.564535 3870 log.go:172] (0xc000105290) Data frame received for 1\nI0308 11:40:11.564556 3870 log.go:172] (0xc0008dc000) (1) Data frame handling\nI0308 11:40:11.564565 3870 log.go:172] (0xc0008dc000) (1) Data frame sent\nI0308 11:40:11.564577 3870 log.go:172] (0xc000105290) (0xc0008dc000) Stream removed, broadcasting: 1\nI0308 11:40:11.564673 3870 log.go:172] (0xc000105290) Go away received\nI0308 11:40:11.564954 3870 log.go:172] (0xc000105290) (0xc0008dc000) Stream removed, broadcasting: 1\nI0308 11:40:11.564972 3870 log.go:172] (0xc000105290) (0xc000657ae0) Stream removed, broadcasting: 3\nI0308 11:40:11.564981 3870 log.go:172] (0xc000105290) (0xc000657cc0) Stream removed, broadcasting: 5\n" Mar 8 11:40:11.567: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 8 11:40:11.567: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 8 11:40:11.567: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 11:40:11.571: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Mar 8 11:40:21.578: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false 
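Every mv in the exec streams above toggles readiness without touching the StatefulSet itself: the pods run httpd with a readiness probe on index.html, so moving the file out of the docroot flips Ready to false and moving it back restores it. Burst scaling corresponds to parallel pod management, which lets the controller create and delete pods without waiting for each one to become Ready. A sketch of the relevant spec (the probe shape is an assumption, not copied from the test source) plus the manual toggle:

  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: ss
  spec:
    serviceName: test
    podManagementPolicy: Parallel   # burst scaling: don't serialize on per-pod readiness
    replicas: 3
    selector:
      matchLabels:
        app: ss
    template:
      metadata:
        labels:
          app: ss
      spec:
        containers:
        - name: webserver
          image: docker.io/library/httpd:2.4.38-alpine
          readinessProbe:           # assumed probe; the e2e framework builds its own
            httpGet:
              path: /index.html
              port: 80
            periodSeconds: 1

  kubectl -n statefulset-2835 exec ss-0 -- /bin/sh -c 'mv /usr/local/apache2/htdocs/index.html /tmp/'  # Ready -> false
  kubectl -n statefulset-2835 exec ss-0 -- /bin/sh -c 'mv /tmp/index.html /usr/local/apache2/htdocs/'  # Ready -> true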
Mar 8 11:40:21.578: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 8 11:40:21.578: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 8 11:40:21.593: INFO: POD NODE PHASE GRACE CONDITIONS Mar 8 11:40:21.593: INFO: ss-0 kind-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:29 +0000 UTC }] Mar 8 11:40:21.593: INFO: ss-1 kind-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:50 +0000 UTC }] Mar 8 11:40:21.593: INFO: ss-2 kind-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:50 +0000 UTC }] Mar 8 11:40:21.593: INFO: Mar 8 11:40:21.593: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 8 11:40:22.600: INFO: POD NODE PHASE GRACE CONDITIONS Mar 8 11:40:22.600: INFO: ss-0 kind-control-plane Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:29 +0000 UTC }] Mar 8 11:40:22.600: INFO: ss-1 kind-control-plane Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:50 +0000 UTC }] Mar 8 11:40:22.600: INFO: ss-2 kind-control-plane Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:50 +0000 UTC }] Mar 8 11:40:22.600: INFO: Mar 8 11:40:22.600: INFO: StatefulSet ss has not 
reached scale 0, at 3 Mar 8 11:40:23.604: INFO: POD NODE PHASE GRACE CONDITIONS Mar 8 11:40:23.604: INFO: ss-0 kind-control-plane Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:29 +0000 UTC }] Mar 8 11:40:23.604: INFO: ss-1 kind-control-plane Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:50 +0000 UTC }] Mar 8 11:40:23.604: INFO: ss-2 kind-control-plane Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:50 +0000 UTC }] Mar 8 11:40:23.604: INFO: Mar 8 11:40:23.604: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 8 11:40:24.612: INFO: POD NODE PHASE GRACE CONDITIONS Mar 8 11:40:24.612: INFO: ss-0 kind-control-plane Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:29 +0000 UTC }] Mar 8 11:40:24.612: INFO: ss-1 kind-control-plane Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:50 +0000 UTC }] Mar 8 11:40:24.612: INFO: ss-2 kind-control-plane Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:50 +0000 UTC }] Mar 8 11:40:24.612: INFO: Mar 8 11:40:24.612: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 8 11:40:25.616: INFO: POD NODE PHASE GRACE CONDITIONS Mar 8 11:40:25.616: INFO: ss-0 kind-control-plane Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 
2020-03-08 11:39:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:29 +0000 UTC }] Mar 8 11:40:25.616: INFO: ss-1 kind-control-plane Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:50 +0000 UTC }] Mar 8 11:40:25.617: INFO: ss-2 kind-control-plane Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:50 +0000 UTC }] Mar 8 11:40:25.617: INFO: Mar 8 11:40:25.617: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 8 11:40:26.621: INFO: POD NODE PHASE GRACE CONDITIONS Mar 8 11:40:26.621: INFO: ss-0 kind-control-plane Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:29 +0000 UTC }] Mar 8 11:40:26.621: INFO: ss-1 kind-control-plane Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:50 +0000 UTC }] Mar 8 11:40:26.621: INFO: ss-2 kind-control-plane Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:50 +0000 UTC }] Mar 8 11:40:26.621: INFO: Mar 8 11:40:26.621: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 8 11:40:27.625: INFO: POD NODE PHASE GRACE CONDITIONS Mar 8 11:40:27.625: INFO: ss-0 kind-control-plane Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady 
False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:29 +0000 UTC }] Mar 8 11:40:27.625: INFO: ss-1 kind-control-plane Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:50 +0000 UTC }] Mar 8 11:40:27.626: INFO: ss-2 kind-control-plane Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:50 +0000 UTC }] Mar 8 11:40:27.626: INFO: Mar 8 11:40:27.626: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 8 11:40:28.635: INFO: POD NODE PHASE GRACE CONDITIONS Mar 8 11:40:28.635: INFO: ss-0 kind-control-plane Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:29 +0000 UTC }] Mar 8 11:40:28.635: INFO: ss-1 kind-control-plane Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:50 +0000 UTC }] Mar 8 11:40:28.635: INFO: ss-2 kind-control-plane Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:40:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 11:39:50 +0000 UTC }] Mar 8 11:40:28.635: INFO: Mar 8 11:40:28.635: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 8 11:40:29.638: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.950260872s Mar 8 11:40:30.642: INFO: Verifying statefulset ss doesn't scale past 0 for another 946.600632ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-2835 Mar 8 11:40:31.646: INFO: Scaling statefulset ss to 0 Mar 8 11:40:31.662: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Mar 8 11:40:31.665: INFO: Deleting all statefulsets in ns statefulset-2835
Mar 8 11:40:31.668: INFO: Scaling statefulset ss to 0
Mar 8 11:40:31.675: INFO: Waiting for statefulset status.replicas updated to 0
Mar 8 11:40:31.678: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 11:40:31.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2835" for this suite.
• [SLOW TEST:61.959 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":275,"skipped":4517,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSS
------------------------------
[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 11:40:31.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-364.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-364.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-364.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-364.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-364.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-364.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
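In the two STEP lines above, $$ is the framework's escaping for a literal $, and each script is printed on one line. Unescaped and reflowed for readability, the wheezy variant is the shell below; the jessie variant is identical except for the jessie_ prefix on the /results files:

# Repeat the probes once per second, at most 600 times.
for i in $(seq 1 600); do
  # Resolve the querier pod's full service DNS name and its bare hostname via getent.
  test -n "$(getent hosts dns-querier-2.dns-test-service-2.dns-364.svc.cluster.local)" \
    && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-364.svc.cluster.local
  test -n "$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2
  # Build the pod A record name (dashed pod IP) from this pod's own IP.
  podARec=$(hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-364.pod.cluster.local"}')
  # Query the A record over UDP, then over TCP.
  check="$(dig +notcp +noall +answer +search ${podARec} A)" && test -n "$check" \
    && echo OK > /results/wheezy_udp@PodARecord
  check="$(dig +tcp +noall +answer +search ${podARec} A)" && test -n "$check" \
    && echo OK > /results/wheezy_tcp@PodARecord
  sleep 1
done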
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 8 11:40:37.833: INFO: DNS probes using dns-364/dns-test-2fcc8675-3f83-47c1-83af-cb7e42aa059b succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 11:40:37.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-364" for this suite.
• [SLOW TEST:6.292 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":276,"skipped":4520,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 11:40:37.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating api versions
Mar 8 11:40:38.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Mar 8 11:40:38.211: INFO: stderr: ""
Mar 8 11:40:38.211: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
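The assertion behind this spec reduces to the bare v1 group/version appearing in that stdout list. A one-line equivalent check, assuming the same kubeconfig:

# grep -x requires a whole-line match, so group-qualified versions such as
# apps/v1 or batch/v1 do not satisfy the check; -q suppresses output.
kubectl --kubeconfig=/root/.kube/config api-versions | grep -qx v1 && echo "v1 available"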
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 11:40:38.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3776" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":277,"skipped":4531,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSS
Mar 8 11:40:38.219: INFO: Running AfterSuite actions on all nodes
Mar 8 11:40:38.219: INFO: Running AfterSuite actions on node 1
Mar 8 11:40:38.219: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":277,"skipped":4536,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}

Summarizing 1 Failure:

[Fail] [sig-apps] Daemon set [Serial] [It] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:417

Ran 278 of 4814 Specs in 3738.764 seconds
FAIL! -- 277 Passed | 1 Failed | 0 Pending | 4536 Skipped
--- FAIL: TestE2E (3738.88s)
FAIL
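The daemon set rollback spec is the only failure; it can be retried in isolation through Ginkgo's focus filter rather than repeating the whole ~62-minute run. A sketch, assuming an e2e.test binary built from the same _output tree as this suite:

# --ginkgo.focus is a regular expression matched against the full spec text,
# so the [Serial] tag must be escaped.
./e2e.test --kubeconfig=/root/.kube/config \
  --ginkgo.focus='Daemon set \[Serial\] should rollback without unnecessary restarts'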