I0804 10:31:15.504510 7 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready I0804 10:31:15.504705 7 e2e.go:124] Starting e2e run "bab4f067-a7ad-46ba-b07c-2e01c836795f" on Ginkgo node 1 {"msg":"Test Suite starting","total":275,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1596537074 - Will randomize all specs Will run 275 of 4992 specs Aug 4 10:31:15.557: INFO: >>> kubeConfig: /root/.kube/config Aug 4 10:31:15.559: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Aug 4 10:31:15.580: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Aug 4 10:31:15.611: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Aug 4 10:31:15.611: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Aug 4 10:31:15.611: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Aug 4 10:31:15.620: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) Aug 4 10:31:15.620: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Aug 4 10:31:15.620: INFO: e2e test version: v1.18.5 Aug 4 10:31:15.622: INFO: kube-apiserver version: v1.18.4 Aug 4 10:31:15.622: INFO: >>> kubeConfig: /root/.kube/config Aug 4 10:31:15.627: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:31:15.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected Aug 4 10:31:15.696: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-map-041578f4-f22c-4d06-9387-2da4203c3fd8 STEP: Creating a pod to test consume secrets Aug 4 10:31:15.707: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-dfc6537d-3a94-46f2-b245-6b6d0a1e37cc" in namespace "projected-5791" to be "Succeeded or Failed" Aug 4 10:31:15.719: INFO: Pod "pod-projected-secrets-dfc6537d-3a94-46f2-b245-6b6d0a1e37cc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.11549ms Aug 4 10:31:17.723: INFO: Pod "pod-projected-secrets-dfc6537d-3a94-46f2-b245-6b6d0a1e37cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016433483s Aug 4 10:31:19.728: INFO: Pod "pod-projected-secrets-dfc6537d-3a94-46f2-b245-6b6d0a1e37cc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020959986s Aug 4 10:31:21.732: INFO: Pod "pod-projected-secrets-dfc6537d-3a94-46f2-b245-6b6d0a1e37cc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.025746959s STEP: Saw pod success Aug 4 10:31:21.732: INFO: Pod "pod-projected-secrets-dfc6537d-3a94-46f2-b245-6b6d0a1e37cc" satisfied condition "Succeeded or Failed" Aug 4 10:31:21.736: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-dfc6537d-3a94-46f2-b245-6b6d0a1e37cc container projected-secret-volume-test: STEP: delete the pod Aug 4 10:31:21.826: INFO: Waiting for pod pod-projected-secrets-dfc6537d-3a94-46f2-b245-6b6d0a1e37cc to disappear Aug 4 10:31:21.835: INFO: Pod pod-projected-secrets-dfc6537d-3a94-46f2-b245-6b6d0a1e37cc no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:31:21.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5791" for this suite. • [SLOW TEST:6.218 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":1,"skipped":44,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:31:21.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
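For reference, the projected-Secret consumption verified by the first passed spec above can be reproduced by hand. This is a minimal sketch, not the test's own manifest; the names (demo-secret, demo-projected-secret) are illustrative and a reachable cluster via the current kubeconfig is assumed:

# Illustrative names; assumes a working kubeconfig and the default namespace.
kubectl create secret generic demo-secret --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: demo-projected-secret
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.32
    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/new-path-data-1"]
    volumeMounts:
    - name: projected-secret
      mountPath: /etc/projected
      readOnly: true
  volumes:
  - name: projected-secret
    projected:
      sources:
      - secret:
          name: demo-secret
          items:
          - key: data-1            # secret key to project
            path: new-path-data-1  # file name inside the mount (the "mapping")
            mode: 0400             # the "Item Mode"; octal here, decimal 256 in JSON
EOF
kubectl logs demo-projected-secret   # should list the file with -r-------- and print value-1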
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Aug 4 10:31:29.994: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 4 10:31:30.041: INFO: Pod pod-with-prestop-exec-hook still exists Aug 4 10:31:32.041: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 4 10:31:32.046: INFO: Pod pod-with-prestop-exec-hook still exists Aug 4 10:31:34.041: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 4 10:31:34.045: INFO: Pod pod-with-prestop-exec-hook still exists Aug 4 10:31:36.041: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 4 10:31:36.045: INFO: Pod pod-with-prestop-exec-hook still exists Aug 4 10:31:38.041: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 4 10:31:38.046: INFO: Pod pod-with-prestop-exec-hook still exists Aug 4 10:31:40.041: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 4 10:31:40.045: INFO: Pod pod-with-prestop-exec-hook still exists Aug 4 10:31:42.041: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 4 10:31:42.045: INFO: Pod pod-with-prestop-exec-hook still exists Aug 4 10:31:44.041: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 4 10:31:44.045: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:31:44.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7305" for this suite. 
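The delay being timed above (the pod lingers for several seconds after deletion while the hook and grace period run) comes from a preStop container lifecycle hook. A minimal sketch of the same idea, with illustrative names and durations:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: demo-prestop
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: main
    image: busybox:1.32
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          # Runs inside the container before it is stopped; deletion waits for it,
          # bounded by terminationGracePeriodSeconds.
          command: ["sh", "-c", "echo prestop ran >> /tmp/prestop; sleep 5"]
EOF
kubectl wait --for=condition=Ready pod/demo-prestop
kubectl delete pod demo-prestop   # observe the delay while the hook runs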
• [SLOW TEST:22.211 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":275,"completed":2,"skipped":66,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:31:44.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:31:44.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2883" for this suite. 
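The discovery documents that this spec walks can be fetched directly from the API server with kubectl's raw mode; the paths below are the same ones named in the STEP lines, and only a working kubeconfig is assumed:

# Root discovery document: should list the apiextensions.k8s.io group and its v1 version.
kubectl get --raw /apis
# Group document: served and preferred versions for apiextensions.k8s.io.
kubectl get --raw /apis/apiextensions.k8s.io
# Group/version document: should list the customresourcedefinitions resource.
kubectl get --raw /apis/apiextensions.k8s.io/v1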
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":275,"completed":3,"skipped":116,"failed":0} SSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:31:44.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-6478 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-6478 I0804 10:31:44.412145 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-6478, replica count: 2 I0804 10:31:47.462705 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0804 10:31:50.463037 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 4 10:31:50.463: INFO: Creating new exec pod Aug 4 10:31:55.615: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-6478 execpod6jgd4 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Aug 4 10:31:58.319: INFO: stderr: "I0804 10:31:58.228132 29 log.go:172] (0xc0005b0160) (0xc000d661e0) Create stream\nI0804 10:31:58.228207 29 log.go:172] (0xc0005b0160) (0xc000d661e0) Stream added, broadcasting: 1\nI0804 10:31:58.235304 29 log.go:172] (0xc0005b0160) Reply frame received for 1\nI0804 10:31:58.235379 29 log.go:172] (0xc0005b0160) (0xc000d320a0) Create stream\nI0804 10:31:58.235404 29 log.go:172] (0xc0005b0160) (0xc000d320a0) Stream added, broadcasting: 3\nI0804 10:31:58.237067 29 log.go:172] (0xc0005b0160) Reply frame received for 3\nI0804 10:31:58.237103 29 log.go:172] (0xc0005b0160) (0xc000cee0a0) Create stream\nI0804 10:31:58.237114 29 log.go:172] (0xc0005b0160) (0xc000cee0a0) Stream added, broadcasting: 5\nI0804 10:31:58.237981 29 log.go:172] (0xc0005b0160) Reply frame received for 5\nI0804 10:31:58.295385 29 log.go:172] (0xc0005b0160) Data frame received for 5\nI0804 10:31:58.295421 29 log.go:172] (0xc000cee0a0) (5) Data frame handling\nI0804 10:31:58.295445 29 log.go:172] (0xc000cee0a0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0804 10:31:58.308602 29 log.go:172] (0xc0005b0160) Data frame received for 5\nI0804 10:31:58.308625 29 log.go:172] (0xc000cee0a0) (5) Data frame handling\nI0804 10:31:58.308648 29 log.go:172] (0xc000cee0a0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] 
succeeded!\nI0804 10:31:58.309022 29 log.go:172] (0xc0005b0160) Data frame received for 5\nI0804 10:31:58.309061 29 log.go:172] (0xc000cee0a0) (5) Data frame handling\nI0804 10:31:58.309276 29 log.go:172] (0xc0005b0160) Data frame received for 3\nI0804 10:31:58.309325 29 log.go:172] (0xc000d320a0) (3) Data frame handling\nI0804 10:31:58.311224 29 log.go:172] (0xc0005b0160) Data frame received for 1\nI0804 10:31:58.311267 29 log.go:172] (0xc000d661e0) (1) Data frame handling\nI0804 10:31:58.311291 29 log.go:172] (0xc000d661e0) (1) Data frame sent\nI0804 10:31:58.311311 29 log.go:172] (0xc0005b0160) (0xc000d661e0) Stream removed, broadcasting: 1\nI0804 10:31:58.311497 29 log.go:172] (0xc0005b0160) Go away received\nI0804 10:31:58.311790 29 log.go:172] (0xc0005b0160) (0xc000d661e0) Stream removed, broadcasting: 1\nI0804 10:31:58.311815 29 log.go:172] (0xc0005b0160) (0xc000d320a0) Stream removed, broadcasting: 3\nI0804 10:31:58.311834 29 log.go:172] (0xc0005b0160) (0xc000cee0a0) Stream removed, broadcasting: 5\n" Aug 4 10:31:58.319: INFO: stdout: "" Aug 4 10:31:58.320: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-6478 execpod6jgd4 -- /bin/sh -x -c nc -zv -t -w 2 10.98.101.33 80' Aug 4 10:31:58.522: INFO: stderr: "I0804 10:31:58.440375 63 log.go:172] (0xc00003a6e0) (0xc000707180) Create stream\nI0804 10:31:58.440432 63 log.go:172] (0xc00003a6e0) (0xc000707180) Stream added, broadcasting: 1\nI0804 10:31:58.442580 63 log.go:172] (0xc00003a6e0) Reply frame received for 1\nI0804 10:31:58.442621 63 log.go:172] (0xc00003a6e0) (0xc000296000) Create stream\nI0804 10:31:58.442640 63 log.go:172] (0xc00003a6e0) (0xc000296000) Stream added, broadcasting: 3\nI0804 10:31:58.445027 63 log.go:172] (0xc00003a6e0) Reply frame received for 3\nI0804 10:31:58.445071 63 log.go:172] (0xc00003a6e0) (0xc000360000) Create stream\nI0804 10:31:58.445092 63 log.go:172] (0xc00003a6e0) (0xc000360000) Stream added, broadcasting: 5\nI0804 10:31:58.445851 63 log.go:172] (0xc00003a6e0) Reply frame received for 5\nI0804 10:31:58.516067 63 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0804 10:31:58.516090 63 log.go:172] (0xc000296000) (3) Data frame handling\nI0804 10:31:58.516116 63 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0804 10:31:58.516132 63 log.go:172] (0xc000360000) (5) Data frame handling\nI0804 10:31:58.516145 63 log.go:172] (0xc000360000) (5) Data frame sent\nI0804 10:31:58.516155 63 log.go:172] (0xc00003a6e0) Data frame received for 5\n+ nc -zv -t -w 2 10.98.101.33 80\nConnection to 10.98.101.33 80 port [tcp/http] succeeded!\nI0804 10:31:58.516180 63 log.go:172] (0xc000360000) (5) Data frame handling\nI0804 10:31:58.517394 63 log.go:172] (0xc00003a6e0) Data frame received for 1\nI0804 10:31:58.517413 63 log.go:172] (0xc000707180) (1) Data frame handling\nI0804 10:31:58.517424 63 log.go:172] (0xc000707180) (1) Data frame sent\nI0804 10:31:58.517438 63 log.go:172] (0xc00003a6e0) (0xc000707180) Stream removed, broadcasting: 1\nI0804 10:31:58.517469 63 log.go:172] (0xc00003a6e0) Go away received\nI0804 10:31:58.517796 63 log.go:172] (0xc00003a6e0) (0xc000707180) Stream removed, broadcasting: 1\nI0804 10:31:58.517818 63 log.go:172] (0xc00003a6e0) (0xc000296000) Stream removed, broadcasting: 3\nI0804 10:31:58.517830 63 log.go:172] (0xc00003a6e0) (0xc000360000) Stream removed, broadcasting: 5\n" Aug 4 10:31:58.523: INFO: stdout: "" Aug 4 10:31:58.523: INFO: Cleaning up the ExternalName to ClusterIP test service 
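The type change exercised by this Services spec can be reproduced with kubectl. A sketch with illustrative names (demo-extname, app=demo), assuming some pods labelled app=demo serve port 80; the final check mirrors the test's nc probe from an exec pod:

kubectl create service externalname demo-extname --external-name=example.com
# Switch the service from a CNAME to a selector-backed ClusterIP service.
# JSON merge patch: setting externalName to null removes it; a ClusterIP is then allocated.
kubectl patch service demo-extname --type=merge -p '{
  "spec": {
    "type": "ClusterIP",
    "externalName": null,
    "selector": {"app": "demo"},
    "ports": [{"port": 80, "targetPort": 80}]
  }
}'
# Verify TCP reachability by service name, the same way the test does.
kubectl run netcheck --rm -i --restart=Never --image=busybox:1.32 -- \
  nc -zv -w 2 demo-extname 80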
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:31:58.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6478" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:14.415 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":275,"completed":4,"skipped":122,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:31:58.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD Aug 4 10:31:58.646: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:32:15.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-836" for this suite. 
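What "published spec" means in the CRD OpenAPI spec above can be checked from the client side: every served CRD version is merged into the aggregated OpenAPI document, which kubectl explain renders per version. A sketch, assuming a hypothetical CRD foos.example.com serving versions v1 and v2 (none of these names come from the run above):

# Which versions the CRD currently serves (spec.versions[].served):
kubectl get crd foos.example.com \
  -o jsonpath='{range .spec.versions[*]}{.name}={.served}{"\n"}{end}'
# Schema as published to OpenAPI for a given version; after a rename,
# the new version name should explain cleanly and the old one should fail.
kubectl explain foos --api-version=example.com/v2
kubectl explain foos --api-version=example.com/v1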
• [SLOW TEST:17.120 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":275,"completed":5,"skipped":127,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:32:15.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name secret-emptykey-test-25881b7e-50f3-415d-b5e6-8e5cdb30425c [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:32:15.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2751" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":275,"completed":6,"skipped":135,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:32:15.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-233.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-233.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-233.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@dns-test-service-2.dns-233.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-233.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-233.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-233.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-233.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-233.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-233.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 4 10:32:23.926: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:23.930: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:23.933: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:23.936: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:23.967: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:23.970: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:23.973: INFO: Unable to read 
jessie_udp@dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:23.975: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:23.981: INFO: Lookups using dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local wheezy_udp@dns-test-service-2.dns-233.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-233.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local jessie_udp@dns-test-service-2.dns-233.svc.cluster.local jessie_tcp@dns-test-service-2.dns-233.svc.cluster.local] Aug 4 10:32:28.986: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:28.990: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:28.994: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:28.997: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:29.007: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:29.011: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:29.014: INFO: Unable to read jessie_udp@dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:29.017: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:29.023: INFO: Lookups using dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local 
wheezy_udp@dns-test-service-2.dns-233.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-233.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local jessie_udp@dns-test-service-2.dns-233.svc.cluster.local jessie_tcp@dns-test-service-2.dns-233.svc.cluster.local] Aug 4 10:32:33.986: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:33.989: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:33.993: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:33.996: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:34.005: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:34.007: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:34.010: INFO: Unable to read jessie_udp@dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:34.013: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:34.019: INFO: Lookups using dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local wheezy_udp@dns-test-service-2.dns-233.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-233.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local jessie_udp@dns-test-service-2.dns-233.svc.cluster.local jessie_tcp@dns-test-service-2.dns-233.svc.cluster.local] Aug 4 10:32:38.986: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:38.989: INFO: Unable to read 
wheezy_tcp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:38.992: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:39.000: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:39.009: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:39.012: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:39.014: INFO: Unable to read jessie_udp@dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:39.017: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:39.023: INFO: Lookups using dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local wheezy_udp@dns-test-service-2.dns-233.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-233.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local jessie_udp@dns-test-service-2.dns-233.svc.cluster.local jessie_tcp@dns-test-service-2.dns-233.svc.cluster.local] Aug 4 10:32:43.986: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:43.988: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:43.991: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:43.994: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods 
dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:44.003: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:44.006: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:44.009: INFO: Unable to read jessie_udp@dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:44.011: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:44.017: INFO: Lookups using dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local wheezy_udp@dns-test-service-2.dns-233.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-233.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local jessie_udp@dns-test-service-2.dns-233.svc.cluster.local jessie_tcp@dns-test-service-2.dns-233.svc.cluster.local] Aug 4 10:32:48.986: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:48.990: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:48.994: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:48.998: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:49.008: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:49.011: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:49.015: INFO: Unable to read jessie_udp@dns-test-service-2.dns-233.svc.cluster.local from pod 
dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:49.018: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-233.svc.cluster.local from pod dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe: the server could not find the requested resource (get pods dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe) Aug 4 10:32:49.024: INFO: Lookups using dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local wheezy_udp@dns-test-service-2.dns-233.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-233.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-233.svc.cluster.local jessie_udp@dns-test-service-2.dns-233.svc.cluster.local jessie_tcp@dns-test-service-2.dns-233.svc.cluster.local] Aug 4 10:32:54.026: INFO: DNS probes using dns-233/dns-test-cceee3cd-d51b-4320-b003-b3738548a9fe succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:32:54.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-233" for this suite. • [SLOW TEST:38.980 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":275,"completed":7,"skipped":163,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:32:54.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 4 10:32:55.454: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 4 10:32:57.822: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732133975, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732133975, loc:(*time.Location)(0x7b220e0)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732133975, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732133975, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 4 10:33:00.844: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:33:00.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5420" for this suite. STEP: Destroying namespace "webhook-5420-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.689 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":275,"completed":8,"skipped":167,"failed":0} SSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:33:01.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Aug 4 10:33:01.607: INFO: Waiting up to 5m0s for pod "downward-api-c026b611-4e83-4785-afa5-3ac6c89ccfe2" in namespace "downward-api-6333" to be "Succeeded or Failed" Aug 4 10:33:01.610: INFO: Pod "downward-api-c026b611-4e83-4785-afa5-3ac6c89ccfe2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.223303ms Aug 4 10:33:03.616: INFO: Pod "downward-api-c026b611-4e83-4785-afa5-3ac6c89ccfe2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008542397s Aug 4 10:33:05.620: INFO: Pod "downward-api-c026b611-4e83-4785-afa5-3ac6c89ccfe2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012901737s Aug 4 10:33:07.624: INFO: Pod "downward-api-c026b611-4e83-4785-afa5-3ac6c89ccfe2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017202618s STEP: Saw pod success Aug 4 10:33:07.624: INFO: Pod "downward-api-c026b611-4e83-4785-afa5-3ac6c89ccfe2" satisfied condition "Succeeded or Failed" Aug 4 10:33:07.628: INFO: Trying to get logs from node kali-worker pod downward-api-c026b611-4e83-4785-afa5-3ac6c89ccfe2 container dapi-container: STEP: delete the pod Aug 4 10:33:07.823: INFO: Waiting for pod downward-api-c026b611-4e83-4785-afa5-3ac6c89ccfe2 to disappear Aug 4 10:33:07.859: INFO: Pod downward-api-c026b611-4e83-4785-afa5-3ac6c89ccfe2 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:33:07.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6333" for this suite. • [SLOW TEST:6.463 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":275,"completed":9,"skipped":177,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:33:07.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Aug 4 10:33:12.188: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:33:12.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8836" for this suite. 
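The "DONE" match above is driven by the container's terminationMessagePath: the kubelet mounts that file into the container and copies its contents into the terminated container status. A minimal sketch with illustrative names, running as a non-root user and using a non-default path as in the spec:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: demo-termination-msg
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000              # non-root, matching the [LinuxOnly] variant
  containers:
  - name: writer
    image: busybox:1.32
    command: ["sh", "-c", "echo -n DONE > /dev/termination-custom-log"]
    terminationMessagePath: /dev/termination-custom-log
    terminationMessagePolicy: File
EOF
# Once the pod reports Succeeded, the message is surfaced in the container status:
kubectl get pod demo-termination-msg \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'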
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":275,"completed":10,"skipped":202,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:33:12.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating pod Aug 4 10:33:16.722: INFO: Pod pod-hostip-0f95ac7b-2957-4e13-8bc0-3636b6716ff6 has hostIP: 172.18.0.13 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:33:16.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8895" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":275,"completed":11,"skipped":222,"failed":0} SSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:33:16.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 4 10:33:17.099: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-ec68b743-365b-4d31-b2ff-78069a96d22c" in namespace "security-context-test-9065" to be "Succeeded or Failed" Aug 4 10:33:17.121: INFO: Pod "alpine-nnp-false-ec68b743-365b-4d31-b2ff-78069a96d22c": Phase="Pending", Reason="", readiness=false. Elapsed: 21.525368ms Aug 4 10:33:19.136: INFO: Pod "alpine-nnp-false-ec68b743-365b-4d31-b2ff-78069a96d22c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037121071s Aug 4 10:33:21.141: INFO: Pod "alpine-nnp-false-ec68b743-365b-4d31-b2ff-78069a96d22c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.041354381s Aug 4 10:33:21.141: INFO: Pod "alpine-nnp-false-ec68b743-365b-4d31-b2ff-78069a96d22c" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:33:21.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9065" for this suite. •{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":12,"skipped":227,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:33:21.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 4 10:33:21.775: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 4 10:33:23.792: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134001, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134001, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134001, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134001, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 4 10:33:25.796: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134001, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134001, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134001, loc:(*time.Location)(0x7b220e0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134001, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 4 10:33:28.869: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:33:29.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3311" for this suite. STEP: Destroying namespace "webhook-3311-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.940 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":275,"completed":13,"skipped":233,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:33:29.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-e6b1b28e-1dd3-48cd-92f9-5cf762017c33 STEP: Creating a pod to test consume configMaps Aug 4 10:33:29.218: INFO: Waiting up to 5m0s for pod "pod-configmaps-15741ac8-d1ce-40cc-b8c5-15185e6d2e36" in namespace "configmap-1230" to be "Succeeded or Failed" Aug 4 10:33:29.283: INFO: Pod "pod-configmaps-15741ac8-d1ce-40cc-b8c5-15185e6d2e36": Phase="Pending", Reason="", readiness=false. 
Elapsed: 64.60791ms Aug 4 10:33:31.304: INFO: Pod "pod-configmaps-15741ac8-d1ce-40cc-b8c5-15185e6d2e36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086019631s Aug 4 10:33:33.441: INFO: Pod "pod-configmaps-15741ac8-d1ce-40cc-b8c5-15185e6d2e36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.222327595s STEP: Saw pod success Aug 4 10:33:33.441: INFO: Pod "pod-configmaps-15741ac8-d1ce-40cc-b8c5-15185e6d2e36" satisfied condition "Succeeded or Failed" Aug 4 10:33:33.518: INFO: Trying to get logs from node kali-worker pod pod-configmaps-15741ac8-d1ce-40cc-b8c5-15185e6d2e36 container configmap-volume-test: STEP: delete the pod Aug 4 10:33:33.638: INFO: Waiting for pod pod-configmaps-15741ac8-d1ce-40cc-b8c5-15185e6d2e36 to disappear Aug 4 10:33:33.692: INFO: Pod pod-configmaps-15741ac8-d1ce-40cc-b8c5-15185e6d2e36 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:33:33.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1230" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":14,"skipped":234,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:33:33.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Aug 4 10:33:33.826: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2474 /api/v1/namespaces/watch-2474/configmaps/e2e-watch-test-configmap-a 944c90d2-e6c8-4d0d-b250-d4d6cda1594d 6660273 0 2020-08-04 10:33:33 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-04 10:33:33 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Aug 4 10:33:33.826: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2474 /api/v1/namespaces/watch-2474/configmaps/e2e-watch-test-configmap-a 944c90d2-e6c8-4d0d-b250-d4d6cda1594d 6660273 0 2020-08-04 10:33:33 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-04 10:33:33 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 
101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Aug 4 10:33:43.835: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2474 /api/v1/namespaces/watch-2474/configmaps/e2e-watch-test-configmap-a 944c90d2-e6c8-4d0d-b250-d4d6cda1594d 6660327 0 2020-08-04 10:33:33 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-04 10:33:43 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 4 10:33:43.836: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2474 /api/v1/namespaces/watch-2474/configmaps/e2e-watch-test-configmap-a 944c90d2-e6c8-4d0d-b250-d4d6cda1594d 6660327 0 2020-08-04 10:33:33 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-04 10:33:43 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Aug 4 10:33:53.846: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2474 /api/v1/namespaces/watch-2474/configmaps/e2e-watch-test-configmap-a 944c90d2-e6c8-4d0d-b250-d4d6cda1594d 6660373 0 2020-08-04 10:33:33 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-04 10:33:53 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 4 10:33:53.846: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2474 /api/v1/namespaces/watch-2474/configmaps/e2e-watch-test-configmap-a 944c90d2-e6c8-4d0d-b250-d4d6cda1594d 6660373 0 2020-08-04 10:33:33 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-04 10:33:53 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 
109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Aug 4 10:34:03.865: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2474 /api/v1/namespaces/watch-2474/configmaps/e2e-watch-test-configmap-a 944c90d2-e6c8-4d0d-b250-d4d6cda1594d 6660442 0 2020-08-04 10:33:33 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-04 10:33:53 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 4 10:34:03.865: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2474 /api/v1/namespaces/watch-2474/configmaps/e2e-watch-test-configmap-a 944c90d2-e6c8-4d0d-b250-d4d6cda1594d 6660442 0 2020-08-04 10:33:33 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-04 10:33:53 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Aug 4 10:34:13.871: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2474 /api/v1/namespaces/watch-2474/configmaps/e2e-watch-test-configmap-b d1c973af-0d9c-42b7-b0f4-e69b81d1d8de 6660486 0 2020-08-04 10:34:13 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-08-04 10:34:13 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Aug 4 10:34:13.871: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2474 /api/v1/namespaces/watch-2474/configmaps/e2e-watch-test-configmap-b d1c973af-0d9c-42b7-b0f4-e69b81d1d8de 6660486 0 2020-08-04 10:34:13 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-08-04 10:34:13 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Aug 4 10:34:23.880: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2474 
/api/v1/namespaces/watch-2474/configmaps/e2e-watch-test-configmap-b d1c973af-0d9c-42b7-b0f4-e69b81d1d8de 6660539 0 2020-08-04 10:34:13 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-08-04 10:34:13 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Aug 4 10:34:23.880: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2474 /api/v1/namespaces/watch-2474/configmaps/e2e-watch-test-configmap-b d1c973af-0d9c-42b7-b0f4-e69b81d1d8de 6660539 0 2020-08-04 10:34:13 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-08-04 10:34:13 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:34:33.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2474" for this suite. • [SLOW TEST:60.191 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":275,"completed":15,"skipped":277,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:34:33.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Aug 4 10:34:34.001: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 4 10:34:34.011: INFO: Waiting for terminating namespaces to be deleted... 
Aug 4 10:34:34.013: INFO: Logging pods the kubelet thinks is on node kali-worker before test Aug 4 10:34:34.018: INFO: rally-19e4df10-30wkw9yu-glqpf from c-rally-19e4df10-7fs771wk started at 2020-08-01 11:12:55 +0000 UTC (1 container statuses recorded) Aug 4 10:34:34.018: INFO: Container rally-19e4df10-30wkw9yu ready: true, restart count 0 Aug 4 10:34:34.019: INFO: kube-proxy-qwsfx from kube-system started at 2020-07-10 10:28:31 +0000 UTC (1 container statuses recorded) Aug 4 10:34:34.019: INFO: Container kube-proxy ready: true, restart count 0 Aug 4 10:34:34.019: INFO: kindnet-njbgt from kube-system started at 2020-07-10 10:28:30 +0000 UTC (1 container statuses recorded) Aug 4 10:34:34.019: INFO: Container kindnet-cni ready: true, restart count 1 Aug 4 10:34:34.019: INFO: rally-824618b1-6cukkjuh-lb7rq from c-rally-824618b1-4lzsfcdd started at 2020-08-01 10:57:26 +0000 UTC (1 container statuses recorded) Aug 4 10:34:34.019: INFO: Container rally-824618b1-6cukkjuh ready: true, restart count 3 Aug 4 10:34:34.019: INFO: Logging pods the kubelet thinks is on node kali-worker2 before test Aug 4 10:34:34.024: INFO: rally-824618b1-6cukkjuh-m84l4 from c-rally-824618b1-4lzsfcdd started at 2020-08-01 10:57:24 +0000 UTC (1 container statuses recorded) Aug 4 10:34:34.024: INFO: Container rally-824618b1-6cukkjuh ready: true, restart count 3 Aug 4 10:34:34.024: INFO: rally-aecc557f-k18gvfvt-cc9q7 from c-rally-aecc557f-oxrdl2f9 started at 2020-08-04 10:34:31 +0000 UTC (1 container statuses recorded) Aug 4 10:34:34.024: INFO: Container rally-aecc557f-k18gvfvt ready: false, restart count 0 Aug 4 10:34:34.024: INFO: rally-19e4df10-30wkw9yu-qbmr7 from c-rally-19e4df10-7fs771wk started at 2020-08-01 11:12:55 +0000 UTC (1 container statuses recorded) Aug 4 10:34:34.024: INFO: Container rally-19e4df10-30wkw9yu ready: true, restart count 0 Aug 4 10:34:34.024: INFO: kube-proxy-vk6jr from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded) Aug 4 10:34:34.024: INFO: Container kube-proxy ready: true, restart count 0 Aug 4 10:34:34.024: INFO: kindnet-pk4xb from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded) Aug 4 10:34:34.024: INFO: Container kindnet-cni ready: true, restart count 1 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-d7ae69c2-2486-46ab-aaa2-5bf39f6b8a88 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-d7ae69c2-2486-46ab-aaa2-5bf39f6b8a88 off the node kali-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-d7ae69c2-2486-46ab-aaa2-5bf39f6b8a88 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:39:44.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3160" for this suite. 
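------------------------------
The hostPort-conflict scenario exercised above reduces to two pod specs that differ only in hostIP. A minimal sketch using the values the log does show (node label kubernetes.io/e2e-d7ae69c2-2486-46ab-aaa2-5bf39f6b8a88=95, hostPort 54322, hostIP 0.0.0.0 vs 127.0.0.1); the container name, image, and containerPort are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: pod4
spec:
  nodeSelector:
    kubernetes.io/e2e-d7ae69c2-2486-46ab-aaa2-5bf39f6b8a88: "95"
  containers:
  - name: pause                        # assumed container name and image
    image: k8s.gcr.io/pause:3.2
    ports:
    - containerPort: 8080
      hostPort: 54322                  # hostIP omitted, so the binding is 0.0.0.0
---
apiVersion: v1
kind: Pod
metadata:
  name: pod5
spec:
  nodeSelector:
    kubernetes.io/e2e-d7ae69c2-2486-46ab-aaa2-5bf39f6b8a88: "95"
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
    ports:
    - containerPort: 8080
      hostPort: 54322
      hostIP: 127.0.0.1                # overlaps the 0.0.0.0 binding on the same node and protocol

Because 0.0.0.0 covers every host address, pod5 cannot be placed on the node that already holds pod4's port binding, which is the scheduling conflict the test waits on before removing the label.
------------------------------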
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:310.465 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":275,"completed":16,"skipped":317,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:39:44.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-ef22efa8-a8c2-4c2f-9193-74d080869eda STEP: Creating a pod to test consume configMaps Aug 4 10:39:44.473: INFO: Waiting up to 5m0s for pod "pod-configmaps-904a75d4-0adc-41d4-840a-e72840a59972" in namespace "configmap-6559" to be "Succeeded or Failed" Aug 4 10:39:44.529: INFO: Pod "pod-configmaps-904a75d4-0adc-41d4-840a-e72840a59972": Phase="Pending", Reason="", readiness=false. Elapsed: 55.664594ms Aug 4 10:39:46.533: INFO: Pod "pod-configmaps-904a75d4-0adc-41d4-840a-e72840a59972": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059362688s Aug 4 10:39:48.539: INFO: Pod "pod-configmaps-904a75d4-0adc-41d4-840a-e72840a59972": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06601412s STEP: Saw pod success Aug 4 10:39:48.539: INFO: Pod "pod-configmaps-904a75d4-0adc-41d4-840a-e72840a59972" satisfied condition "Succeeded or Failed" Aug 4 10:39:48.542: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-904a75d4-0adc-41d4-840a-e72840a59972 container configmap-volume-test: STEP: delete the pod Aug 4 10:39:48.604: INFO: Waiting for pod pod-configmaps-904a75d4-0adc-41d4-840a-e72840a59972 to disappear Aug 4 10:39:48.630: INFO: Pod pod-configmaps-904a75d4-0adc-41d4-840a-e72840a59972 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:39:48.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6559" for this suite. 
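------------------------------
The ConfigMap-volume test above follows a consume-and-exit pattern: create a ConfigMap, mount it into a short-lived pod, read a key back, and expect the pod to reach Succeeded. A sketch of the objects involved; the object and container names are the ones in the log, while the data key, image, and command are assumptions:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-ef22efa8-a8c2-4c2f-9193-74d080869eda
data:
  data-1: value-1                      # assumed key/value; the log does not print the contents
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-904a75d4-0adc-41d4-840a-e72840a59972
spec:
  restartPolicy: Never                 # the pod should finish and report Succeeded, not keep running
  containers:
  - name: configmap-volume-test
    image: busybox                     # illustrative; the suite uses its own test image
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-ef22efa8-a8c2-4c2f-9193-74d080869eda
------------------------------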
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":17,"skipped":337,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:39:48.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:39:54.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4955" for this suite. • [SLOW TEST:6.170 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:188 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":18,"skipped":359,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:39:54.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name projected-secret-test-9784c53b-820d-45e9-bf48-8d7be7ef77f0 STEP: Creating a pod to test consume secrets Aug 4 10:39:54.977: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-394497ef-f089-4ebb-b3cb-ebeaf24aaff7" in namespace "projected-8896" to be "Succeeded or Failed" Aug 4 10:39:54.983: INFO: Pod "pod-projected-secrets-394497ef-f089-4ebb-b3cb-ebeaf24aaff7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.148497ms Aug 4 10:39:56.988: INFO: Pod "pod-projected-secrets-394497ef-f089-4ebb-b3cb-ebeaf24aaff7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.010502583s Aug 4 10:39:58.992: INFO: Pod "pod-projected-secrets-394497ef-f089-4ebb-b3cb-ebeaf24aaff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01502879s STEP: Saw pod success Aug 4 10:39:58.992: INFO: Pod "pod-projected-secrets-394497ef-f089-4ebb-b3cb-ebeaf24aaff7" satisfied condition "Succeeded or Failed" Aug 4 10:39:58.995: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-394497ef-f089-4ebb-b3cb-ebeaf24aaff7 container secret-volume-test: STEP: delete the pod Aug 4 10:39:59.072: INFO: Waiting for pod pod-projected-secrets-394497ef-f089-4ebb-b3cb-ebeaf24aaff7 to disappear Aug 4 10:39:59.098: INFO: Pod pod-projected-secrets-394497ef-f089-4ebb-b3cb-ebeaf24aaff7 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:39:59.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8896" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":19,"skipped":368,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:39:59.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Aug 4 10:39:59.178: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e526ab4f-1f74-4854-a682-7d82b4334c7e" in namespace "projected-6470" to be "Succeeded or Failed" Aug 4 10:39:59.230: INFO: Pod "downwardapi-volume-e526ab4f-1f74-4854-a682-7d82b4334c7e": Phase="Pending", Reason="", readiness=false. Elapsed: 51.941714ms Aug 4 10:40:01.234: INFO: Pod "downwardapi-volume-e526ab4f-1f74-4854-a682-7d82b4334c7e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056150621s Aug 4 10:40:03.238: INFO: Pod "downwardapi-volume-e526ab4f-1f74-4854-a682-7d82b4334c7e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.060144888s STEP: Saw pod success Aug 4 10:40:03.238: INFO: Pod "downwardapi-volume-e526ab4f-1f74-4854-a682-7d82b4334c7e" satisfied condition "Succeeded or Failed" Aug 4 10:40:03.241: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-e526ab4f-1f74-4854-a682-7d82b4334c7e container client-container: STEP: delete the pod Aug 4 10:40:03.399: INFO: Waiting for pod downwardapi-volume-e526ab4f-1f74-4854-a682-7d82b4334c7e to disappear Aug 4 10:40:03.454: INFO: Pod downwardapi-volume-e526ab4f-1f74-4854-a682-7d82b4334c7e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:40:03.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6470" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":275,"completed":20,"skipped":369,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:40:03.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Aug 4 10:40:03.727: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9713 /api/v1/namespaces/watch-9713/configmaps/e2e-watch-test-resource-version 97bf5166-04a8-46dc-9c5b-4c14f0f92ae9 6662101 0 2020-08-04 10:40:03 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-08-04 10:40:03 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 4 10:40:03.727: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9713 /api/v1/namespaces/watch-9713/configmaps/e2e-watch-test-resource-version 97bf5166-04a8-46dc-9c5b-4c14f0f92ae9 6662102 0 2020-08-04 10:40:03 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-08-04 10:40:03 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 
108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:40:03.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9713" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":275,"completed":21,"skipped":374,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:40:03.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:41:03.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6252" for this suite. 
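------------------------------
The probe test above relies on a pod whose readiness probe can never pass, so the pod is expected to stay Running but never Ready, and never restart, for the whole observation window (roughly the 60 seconds the spec takes). A minimal sketch, with the pod name, container name, image, and probe timings all assumed:

apiVersion: v1
kind: Pod
metadata:
  name: readiness-never-ready          # assumed; the log does not print the pod name
spec:
  containers:
  - name: readiness
    image: busybox                     # illustrative image
    command: ["sh", "-c", "sleep 600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]        # always fails, so the Ready condition never turns True
      initialDelaySeconds: 5
      periodSeconds: 5
    # no livenessProbe is defined, so the kubelet has no reason to restart the container
------------------------------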
• [SLOW TEST:60.104 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":275,"completed":22,"skipped":400,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:41:03.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Aug 4 10:41:03.913: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fe77428b-23f5-436b-af1c-70f853cbffa8" in namespace "downward-api-811" to be "Succeeded or Failed" Aug 4 10:41:03.929: INFO: Pod "downwardapi-volume-fe77428b-23f5-436b-af1c-70f853cbffa8": Phase="Pending", Reason="", readiness=false. Elapsed: 15.620592ms Aug 4 10:41:05.932: INFO: Pod "downwardapi-volume-fe77428b-23f5-436b-af1c-70f853cbffa8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019045023s Aug 4 10:41:07.936: INFO: Pod "downwardapi-volume-fe77428b-23f5-436b-af1c-70f853cbffa8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022520137s STEP: Saw pod success Aug 4 10:41:07.936: INFO: Pod "downwardapi-volume-fe77428b-23f5-436b-af1c-70f853cbffa8" satisfied condition "Succeeded or Failed" Aug 4 10:41:07.939: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-fe77428b-23f5-436b-af1c-70f853cbffa8 container client-container: STEP: delete the pod Aug 4 10:41:08.011: INFO: Waiting for pod downwardapi-volume-fe77428b-23f5-436b-af1c-70f853cbffa8 to disappear Aug 4 10:41:08.022: INFO: Pod downwardapi-volume-fe77428b-23f5-436b-af1c-70f853cbffa8 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:41:08.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-811" for this suite. 
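------------------------------
The Downward API volume test above exposes the container's own cpu limit as a file and has the container print it. A sketch with the pod and container names from the log; the resource values, divisor, image, and command are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-fe77428b-23f5-436b-af1c-70f853cbffa8
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                     # illustrative
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "1"                       # assumed values; only the exposed field is known from the log
        memory: 128Mi
      requests:
        cpu: 250m
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m                  # the file holds the limit in divisor units, e.g. 1000 for a 1-CPU limit
------------------------------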
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":23,"skipped":414,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:41:08.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-574aa3f3-abea-442e-8d68-aaef7c55d759 STEP: Creating a pod to test consume configMaps Aug 4 10:41:08.159: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6f74284a-7893-447c-8370-566e4b5481cd" in namespace "projected-1752" to be "Succeeded or Failed" Aug 4 10:41:08.178: INFO: Pod "pod-projected-configmaps-6f74284a-7893-447c-8370-566e4b5481cd": Phase="Pending", Reason="", readiness=false. Elapsed: 18.565389ms Aug 4 10:41:10.203: INFO: Pod "pod-projected-configmaps-6f74284a-7893-447c-8370-566e4b5481cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043231179s Aug 4 10:41:12.285: INFO: Pod "pod-projected-configmaps-6f74284a-7893-447c-8370-566e4b5481cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.125323342s STEP: Saw pod success Aug 4 10:41:12.285: INFO: Pod "pod-projected-configmaps-6f74284a-7893-447c-8370-566e4b5481cd" satisfied condition "Succeeded or Failed" Aug 4 10:41:12.288: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-6f74284a-7893-447c-8370-566e4b5481cd container projected-configmap-volume-test: STEP: delete the pod Aug 4 10:41:12.366: INFO: Waiting for pod pod-projected-configmaps-6f74284a-7893-447c-8370-566e4b5481cd to disappear Aug 4 10:41:12.421: INFO: Pod pod-projected-configmaps-6f74284a-7893-447c-8370-566e4b5481cd no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:41:12.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1752" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":24,"skipped":442,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:41:12.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-4474e7dc-4d3e-479d-be3c-e29659da5c41 STEP: Creating a pod to test consume configMaps Aug 4 10:41:12.517: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5447c0ae-1f2a-4fac-b41e-18c8140095d1" in namespace "projected-4941" to be "Succeeded or Failed" Aug 4 10:41:12.566: INFO: Pod "pod-projected-configmaps-5447c0ae-1f2a-4fac-b41e-18c8140095d1": Phase="Pending", Reason="", readiness=false. Elapsed: 48.88886ms Aug 4 10:41:14.626: INFO: Pod "pod-projected-configmaps-5447c0ae-1f2a-4fac-b41e-18c8140095d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109413076s Aug 4 10:41:16.631: INFO: Pod "pod-projected-configmaps-5447c0ae-1f2a-4fac-b41e-18c8140095d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.113798665s STEP: Saw pod success Aug 4 10:41:16.631: INFO: Pod "pod-projected-configmaps-5447c0ae-1f2a-4fac-b41e-18c8140095d1" satisfied condition "Succeeded or Failed" Aug 4 10:41:16.634: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-5447c0ae-1f2a-4fac-b41e-18c8140095d1 container projected-configmap-volume-test: STEP: delete the pod Aug 4 10:41:16.686: INFO: Waiting for pod pod-projected-configmaps-5447c0ae-1f2a-4fac-b41e-18c8140095d1 to disappear Aug 4 10:41:16.696: INFO: Pod pod-projected-configmaps-5447c0ae-1f2a-4fac-b41e-18c8140095d1 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:41:16.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4941" for this suite. 
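------------------------------
The two projected-ConfigMap tests above share one shape; the defaultMode variant only adds an explicit file mode on the projected volume. A sketch using the object and container names from the defaultMode run, with the mode value, data key, image, and command assumed:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-5447c0ae-1f2a-4fac-b41e-18c8140095d1
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox                     # illustrative
    command: ["sh", "-c", "ls -l /etc/projected-configmap-volume && cat /etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      defaultMode: 0400                # assumed mode (octal; YAML parses it as 256); the log only says defaultMode is set
      sources:
      - configMap:
          name: projected-configmap-test-volume-4474e7dc-4d3e-479d-be3c-e29659da5c41
------------------------------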
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":25,"skipped":458,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:41:16.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-434d8235-b31f-49b2-8f7c-c8ce65aa7917 STEP: Creating a pod to test consume secrets Aug 4 10:41:16.855: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3dc9d8fe-3f1d-416b-ad61-d7821bc7a613" in namespace "projected-4613" to be "Succeeded or Failed" Aug 4 10:41:16.870: INFO: Pod "pod-projected-secrets-3dc9d8fe-3f1d-416b-ad61-d7821bc7a613": Phase="Pending", Reason="", readiness=false. Elapsed: 14.866941ms Aug 4 10:41:18.875: INFO: Pod "pod-projected-secrets-3dc9d8fe-3f1d-416b-ad61-d7821bc7a613": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019578395s Aug 4 10:41:20.879: INFO: Pod "pod-projected-secrets-3dc9d8fe-3f1d-416b-ad61-d7821bc7a613": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024102856s STEP: Saw pod success Aug 4 10:41:20.879: INFO: Pod "pod-projected-secrets-3dc9d8fe-3f1d-416b-ad61-d7821bc7a613" satisfied condition "Succeeded or Failed" Aug 4 10:41:20.883: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-3dc9d8fe-3f1d-416b-ad61-d7821bc7a613 container projected-secret-volume-test: STEP: delete the pod Aug 4 10:41:20.932: INFO: Waiting for pod pod-projected-secrets-3dc9d8fe-3f1d-416b-ad61-d7821bc7a613 to disappear Aug 4 10:41:20.951: INFO: Pod pod-projected-secrets-3dc9d8fe-3f1d-416b-ad61-d7821bc7a613 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:41:20.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4613" for this suite. 
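------------------------------
The projected-Secret variant above is the same pattern with a secret source; the practical differences are that the data is base64-encoded in the Secret object and the mount is read-only. Sketch with the object and container names from the log and the key, value, image, and command assumed:

apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-test-434d8235-b31f-49b2-8f7c-c8ce65aa7917
data:
  data-1: dmFsdWUtMQ==                 # base64 of the assumed value "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-3dc9d8fe-3f1d-416b-ad61-d7821bc7a613
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox                     # illustrative
    command: ["sh", "-c", "cat /etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-434d8235-b31f-49b2-8f7c-c8ce65aa7917
------------------------------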
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":26,"skipped":486,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:41:20.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:41:25.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7145" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":275,"completed":27,"skipped":536,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:41:25.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a replication controller Aug 4 10:41:25.285: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6800' Aug 4 10:41:25.644: INFO: stderr: "" Aug 4 10:41:25.644: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 4 10:41:25.644: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6800' Aug 4 10:41:25.817: INFO: stderr: "" Aug 4 10:41:25.817: INFO: stdout: "update-demo-nautilus-fwljr update-demo-nautilus-lzm8n " Aug 4 10:41:25.817: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fwljr -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6800' Aug 4 10:41:25.911: INFO: stderr: "" Aug 4 10:41:25.911: INFO: stdout: "" Aug 4 10:41:25.911: INFO: update-demo-nautilus-fwljr is created but not running Aug 4 10:41:30.911: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6800' Aug 4 10:41:31.018: INFO: stderr: "" Aug 4 10:41:31.018: INFO: stdout: "update-demo-nautilus-fwljr update-demo-nautilus-lzm8n " Aug 4 10:41:31.018: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fwljr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6800' Aug 4 10:41:31.179: INFO: stderr: "" Aug 4 10:41:31.179: INFO: stdout: "true" Aug 4 10:41:31.180: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fwljr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6800' Aug 4 10:41:31.304: INFO: stderr: "" Aug 4 10:41:31.304: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 4 10:41:31.304: INFO: validating pod update-demo-nautilus-fwljr Aug 4 10:41:31.308: INFO: got data: { "image": "nautilus.jpg" } Aug 4 10:41:31.308: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 4 10:41:31.308: INFO: update-demo-nautilus-fwljr is verified up and running Aug 4 10:41:31.308: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lzm8n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6800' Aug 4 10:41:31.407: INFO: stderr: "" Aug 4 10:41:31.407: INFO: stdout: "true" Aug 4 10:41:31.407: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lzm8n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6800' Aug 4 10:41:31.497: INFO: stderr: "" Aug 4 10:41:31.497: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 4 10:41:31.497: INFO: validating pod update-demo-nautilus-lzm8n Aug 4 10:41:31.505: INFO: got data: { "image": "nautilus.jpg" } Aug 4 10:41:31.505: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 4 10:41:31.505: INFO: update-demo-nautilus-lzm8n is verified up and running STEP: using delete to clean up resources Aug 4 10:41:31.505: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6800' Aug 4 10:41:31.631: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Aug 4 10:41:31.631: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Aug 4 10:41:31.631: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6800' Aug 4 10:41:31.728: INFO: stderr: "No resources found in kubectl-6800 namespace.\n" Aug 4 10:41:31.728: INFO: stdout: "" Aug 4 10:41:31.728: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6800 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 4 10:41:31.824: INFO: stderr: "" Aug 4 10:41:31.824: INFO: stdout: "update-demo-nautilus-fwljr\nupdate-demo-nautilus-lzm8n\n" Aug 4 10:41:32.325: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6800' Aug 4 10:41:32.468: INFO: stderr: "No resources found in kubectl-6800 namespace.\n" Aug 4 10:41:32.468: INFO: stdout: "" Aug 4 10:41:32.468: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6800 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 4 10:41:32.654: INFO: stderr: "" Aug 4 10:41:32.654: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:41:32.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6800" for this suite. 
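------------------------------
The Update Demo run above pipes a ReplicationController manifest to kubectl create -f -, polls the two pods with Go templates until the update-demo container is running and serving the expected image data, then force-deletes everything and polls until no resources remain. The manifest itself is not printed; a sketch consistent with what the output does show (label name=update-demo, container name update-demo, image gcr.io/kubernetes-e2e-test-images/nautilus:1.0, two replicas), with the containerPort assumed:

apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
        ports:
        - containerPort: 80            # assumed port; not shown in the log
------------------------------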
• [SLOW TEST:7.451 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":275,"completed":28,"skipped":557,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:41:32.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 4 10:41:33.828: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 4 10:41:35.867: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134493, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134493, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134493, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134493, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 4 10:41:37.889: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134493, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134493, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134493, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134493, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 4 10:41:40.938: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Aug 4 10:41:40.960: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:41:41.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-391" for this suite. STEP: Destroying namespace "webhook-391-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.506 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":275,"completed":29,"skipped":569,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:41:41.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-6305 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating stateful set ss in namespace statefulset-6305 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6305 Aug 4 10:41:41.803: INFO: Found 0 stateful pods, waiting for 1 Aug 4 10:41:51.807: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Aug 4 10:41:51.810: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6305 ss-0 -- /bin/sh -x -c mv -v 
/usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 4 10:41:52.078: INFO: stderr: "I0804 10:41:51.941498 347 log.go:172] (0xc00003a630) (0xc0004b6b40) Create stream\nI0804 10:41:51.941555 347 log.go:172] (0xc00003a630) (0xc0004b6b40) Stream added, broadcasting: 1\nI0804 10:41:51.943644 347 log.go:172] (0xc00003a630) Reply frame received for 1\nI0804 10:41:51.943687 347 log.go:172] (0xc00003a630) (0xc0006cf2c0) Create stream\nI0804 10:41:51.943698 347 log.go:172] (0xc00003a630) (0xc0006cf2c0) Stream added, broadcasting: 3\nI0804 10:41:51.944534 347 log.go:172] (0xc00003a630) Reply frame received for 3\nI0804 10:41:51.944563 347 log.go:172] (0xc00003a630) (0xc0009d8000) Create stream\nI0804 10:41:51.944573 347 log.go:172] (0xc00003a630) (0xc0009d8000) Stream added, broadcasting: 5\nI0804 10:41:51.945282 347 log.go:172] (0xc00003a630) Reply frame received for 5\nI0804 10:41:52.011084 347 log.go:172] (0xc00003a630) Data frame received for 5\nI0804 10:41:52.011145 347 log.go:172] (0xc0009d8000) (5) Data frame handling\nI0804 10:41:52.011172 347 log.go:172] (0xc0009d8000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0804 10:41:52.069961 347 log.go:172] (0xc00003a630) Data frame received for 5\nI0804 10:41:52.070010 347 log.go:172] (0xc0009d8000) (5) Data frame handling\nI0804 10:41:52.070044 347 log.go:172] (0xc00003a630) Data frame received for 3\nI0804 10:41:52.070059 347 log.go:172] (0xc0006cf2c0) (3) Data frame handling\nI0804 10:41:52.070077 347 log.go:172] (0xc0006cf2c0) (3) Data frame sent\nI0804 10:41:52.070094 347 log.go:172] (0xc00003a630) Data frame received for 3\nI0804 10:41:52.070111 347 log.go:172] (0xc0006cf2c0) (3) Data frame handling\nI0804 10:41:52.071980 347 log.go:172] (0xc00003a630) Data frame received for 1\nI0804 10:41:52.072007 347 log.go:172] (0xc0004b6b40) (1) Data frame handling\nI0804 10:41:52.072033 347 log.go:172] (0xc0004b6b40) (1) Data frame sent\nI0804 10:41:52.072065 347 log.go:172] (0xc00003a630) (0xc0004b6b40) Stream removed, broadcasting: 1\nI0804 10:41:52.072113 347 log.go:172] (0xc00003a630) Go away received\nI0804 10:41:52.072539 347 log.go:172] (0xc00003a630) (0xc0004b6b40) Stream removed, broadcasting: 1\nI0804 10:41:52.072562 347 log.go:172] (0xc00003a630) (0xc0006cf2c0) Stream removed, broadcasting: 3\nI0804 10:41:52.072574 347 log.go:172] (0xc00003a630) (0xc0009d8000) Stream removed, broadcasting: 5\n" Aug 4 10:41:52.078: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 4 10:41:52.078: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 4 10:41:52.082: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Aug 4 10:42:02.086: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 4 10:42:02.086: INFO: Waiting for statefulset status.replicas updated to 0 Aug 4 10:42:02.106: INFO: POD NODE PHASE GRACE CONDITIONS Aug 4 10:42:02.106: INFO: ss-0 kali-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:41:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:41:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:41:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:41:41 +0000 UTC }] Aug 4 10:42:02.106: 
INFO: Aug 4 10:42:02.106: INFO: StatefulSet ss has not reached scale 3, at 1 Aug 4 10:42:03.112: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.991698797s Aug 4 10:42:04.382: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.986004425s Aug 4 10:42:05.387: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.71581284s Aug 4 10:42:06.392: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.710364411s Aug 4 10:42:07.398: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.705502412s Aug 4 10:42:08.427: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.700221693s Aug 4 10:42:09.433: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.67047883s Aug 4 10:42:10.442: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.665251259s Aug 4 10:42:11.447: INFO: Verifying statefulset ss doesn't scale past 3 for another 656.061931ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6305 Aug 4 10:42:12.471: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6305 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 4 10:42:15.677: INFO: stderr: "I0804 10:42:15.563633 370 log.go:172] (0xc0008fe370) (0xc0008ec1e0) Create stream\nI0804 10:42:15.563706 370 log.go:172] (0xc0008fe370) (0xc0008ec1e0) Stream added, broadcasting: 1\nI0804 10:42:15.566543 370 log.go:172] (0xc0008fe370) Reply frame received for 1\nI0804 10:42:15.566589 370 log.go:172] (0xc0008fe370) (0xc000594000) Create stream\nI0804 10:42:15.566616 370 log.go:172] (0xc0008fe370) (0xc000594000) Stream added, broadcasting: 3\nI0804 10:42:15.567778 370 log.go:172] (0xc0008fe370) Reply frame received for 3\nI0804 10:42:15.567817 370 log.go:172] (0xc0008fe370) (0xc0005c2000) Create stream\nI0804 10:42:15.567832 370 log.go:172] (0xc0008fe370) (0xc0005c2000) Stream added, broadcasting: 5\nI0804 10:42:15.569073 370 log.go:172] (0xc0008fe370) Reply frame received for 5\nI0804 10:42:15.663610 370 log.go:172] (0xc0008fe370) Data frame received for 5\nI0804 10:42:15.663648 370 log.go:172] (0xc0005c2000) (5) Data frame handling\nI0804 10:42:15.663663 370 log.go:172] (0xc0005c2000) (5) Data frame sent\nI0804 10:42:15.663675 370 log.go:172] (0xc0008fe370) Data frame received for 5\nI0804 10:42:15.663685 370 log.go:172] (0xc0005c2000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0804 10:42:15.663739 370 log.go:172] (0xc0008fe370) Data frame received for 3\nI0804 10:42:15.663788 370 log.go:172] (0xc000594000) (3) Data frame handling\nI0804 10:42:15.663812 370 log.go:172] (0xc000594000) (3) Data frame sent\nI0804 10:42:15.663839 370 log.go:172] (0xc0008fe370) Data frame received for 3\nI0804 10:42:15.663850 370 log.go:172] (0xc000594000) (3) Data frame handling\nI0804 10:42:15.669823 370 log.go:172] (0xc0008fe370) Data frame received for 1\nI0804 10:42:15.669839 370 log.go:172] (0xc0008ec1e0) (1) Data frame handling\nI0804 10:42:15.669848 370 log.go:172] (0xc0008ec1e0) (1) Data frame sent\nI0804 10:42:15.669858 370 log.go:172] (0xc0008fe370) (0xc0008ec1e0) Stream removed, broadcasting: 1\nI0804 10:42:15.669916 370 log.go:172] (0xc0008fe370) Go away received\nI0804 10:42:15.670141 370 log.go:172] (0xc0008fe370) (0xc0008ec1e0) Stream removed, broadcasting: 1\nI0804 10:42:15.670155 370 log.go:172] (0xc0008fe370) (0xc000594000) Stream 
removed, broadcasting: 3\nI0804 10:42:15.670163 370 log.go:172] (0xc0008fe370) (0xc0005c2000) Stream removed, broadcasting: 5\n" Aug 4 10:42:15.677: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 4 10:42:15.677: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 4 10:42:15.677: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6305 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 4 10:42:15.889: INFO: stderr: "I0804 10:42:15.823083 403 log.go:172] (0xc00059a840) (0xc000633400) Create stream\nI0804 10:42:15.823148 403 log.go:172] (0xc00059a840) (0xc000633400) Stream added, broadcasting: 1\nI0804 10:42:15.825958 403 log.go:172] (0xc00059a840) Reply frame received for 1\nI0804 10:42:15.826008 403 log.go:172] (0xc00059a840) (0xc000970000) Create stream\nI0804 10:42:15.826025 403 log.go:172] (0xc00059a840) (0xc000970000) Stream added, broadcasting: 3\nI0804 10:42:15.827089 403 log.go:172] (0xc00059a840) Reply frame received for 3\nI0804 10:42:15.827124 403 log.go:172] (0xc00059a840) (0xc0009700a0) Create stream\nI0804 10:42:15.827151 403 log.go:172] (0xc00059a840) (0xc0009700a0) Stream added, broadcasting: 5\nI0804 10:42:15.828093 403 log.go:172] (0xc00059a840) Reply frame received for 5\nI0804 10:42:15.881476 403 log.go:172] (0xc00059a840) Data frame received for 3\nI0804 10:42:15.881500 403 log.go:172] (0xc000970000) (3) Data frame handling\nI0804 10:42:15.881509 403 log.go:172] (0xc000970000) (3) Data frame sent\nI0804 10:42:15.881527 403 log.go:172] (0xc00059a840) Data frame received for 5\nI0804 10:42:15.881546 403 log.go:172] (0xc0009700a0) (5) Data frame handling\nI0804 10:42:15.881556 403 log.go:172] (0xc0009700a0) (5) Data frame sent\nI0804 10:42:15.881567 403 log.go:172] (0xc00059a840) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0804 10:42:15.881576 403 log.go:172] (0xc0009700a0) (5) Data frame handling\nI0804 10:42:15.881617 403 log.go:172] (0xc00059a840) Data frame received for 3\nI0804 10:42:15.881631 403 log.go:172] (0xc000970000) (3) Data frame handling\nI0804 10:42:15.883345 403 log.go:172] (0xc00059a840) Data frame received for 1\nI0804 10:42:15.883371 403 log.go:172] (0xc000633400) (1) Data frame handling\nI0804 10:42:15.883392 403 log.go:172] (0xc000633400) (1) Data frame sent\nI0804 10:42:15.883415 403 log.go:172] (0xc00059a840) (0xc000633400) Stream removed, broadcasting: 1\nI0804 10:42:15.883432 403 log.go:172] (0xc00059a840) Go away received\nI0804 10:42:15.883866 403 log.go:172] (0xc00059a840) (0xc000633400) Stream removed, broadcasting: 1\nI0804 10:42:15.883885 403 log.go:172] (0xc00059a840) (0xc000970000) Stream removed, broadcasting: 3\nI0804 10:42:15.883894 403 log.go:172] (0xc00059a840) (0xc0009700a0) Stream removed, broadcasting: 5\n" Aug 4 10:42:15.889: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 4 10:42:15.889: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 4 10:42:15.889: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6305 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true' Aug 4 10:42:16.097: INFO: stderr: "I0804 10:42:16.026083 424 log.go:172] (0xc000a2fb80) (0xc000adc8c0) Create stream\nI0804 10:42:16.026135 424 log.go:172] (0xc000a2fb80) (0xc000adc8c0) Stream added, broadcasting: 1\nI0804 10:42:16.031740 424 log.go:172] (0xc000a2fb80) Reply frame received for 1\nI0804 10:42:16.031802 424 log.go:172] (0xc000a2fb80) (0xc0006a1680) Create stream\nI0804 10:42:16.031817 424 log.go:172] (0xc000a2fb80) (0xc0006a1680) Stream added, broadcasting: 3\nI0804 10:42:16.032915 424 log.go:172] (0xc000a2fb80) Reply frame received for 3\nI0804 10:42:16.032966 424 log.go:172] (0xc000a2fb80) (0xc00058caa0) Create stream\nI0804 10:42:16.032980 424 log.go:172] (0xc000a2fb80) (0xc00058caa0) Stream added, broadcasting: 5\nI0804 10:42:16.034008 424 log.go:172] (0xc000a2fb80) Reply frame received for 5\nI0804 10:42:16.090092 424 log.go:172] (0xc000a2fb80) Data frame received for 3\nI0804 10:42:16.090150 424 log.go:172] (0xc0006a1680) (3) Data frame handling\nI0804 10:42:16.090180 424 log.go:172] (0xc0006a1680) (3) Data frame sent\nI0804 10:42:16.090200 424 log.go:172] (0xc000a2fb80) Data frame received for 3\nI0804 10:42:16.090219 424 log.go:172] (0xc0006a1680) (3) Data frame handling\nI0804 10:42:16.090258 424 log.go:172] (0xc000a2fb80) Data frame received for 5\nI0804 10:42:16.090282 424 log.go:172] (0xc00058caa0) (5) Data frame handling\nI0804 10:42:16.090323 424 log.go:172] (0xc00058caa0) (5) Data frame sent\nI0804 10:42:16.090360 424 log.go:172] (0xc000a2fb80) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0804 10:42:16.090391 424 log.go:172] (0xc00058caa0) (5) Data frame handling\nI0804 10:42:16.092102 424 log.go:172] (0xc000a2fb80) Data frame received for 1\nI0804 10:42:16.092158 424 log.go:172] (0xc000adc8c0) (1) Data frame handling\nI0804 10:42:16.092204 424 log.go:172] (0xc000adc8c0) (1) Data frame sent\nI0804 10:42:16.092238 424 log.go:172] (0xc000a2fb80) (0xc000adc8c0) Stream removed, broadcasting: 1\nI0804 10:42:16.092336 424 log.go:172] (0xc000a2fb80) Go away received\nI0804 10:42:16.092691 424 log.go:172] (0xc000a2fb80) (0xc000adc8c0) Stream removed, broadcasting: 1\nI0804 10:42:16.092712 424 log.go:172] (0xc000a2fb80) (0xc0006a1680) Stream removed, broadcasting: 3\nI0804 10:42:16.092835 424 log.go:172] (0xc000a2fb80) (0xc00058caa0) Stream removed, broadcasting: 5\n" Aug 4 10:42:16.097: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 4 10:42:16.097: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 4 10:42:16.101: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Aug 4 10:42:26.107: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Aug 4 10:42:26.107: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Aug 4 10:42:26.107: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Aug 4 10:42:26.111: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6305 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 4 10:42:26.329: INFO: stderr: "I0804 10:42:26.242276 443 log.go:172] 
(0xc0000e1080) (0xc0008b61e0) Create stream\nI0804 10:42:26.242326 443 log.go:172] (0xc0000e1080) (0xc0008b61e0) Stream added, broadcasting: 1\nI0804 10:42:26.244715 443 log.go:172] (0xc0000e1080) Reply frame received for 1\nI0804 10:42:26.244819 443 log.go:172] (0xc0000e1080) (0xc0005d9220) Create stream\nI0804 10:42:26.244831 443 log.go:172] (0xc0000e1080) (0xc0005d9220) Stream added, broadcasting: 3\nI0804 10:42:26.245673 443 log.go:172] (0xc0000e1080) Reply frame received for 3\nI0804 10:42:26.245728 443 log.go:172] (0xc0000e1080) (0xc0008b6320) Create stream\nI0804 10:42:26.245747 443 log.go:172] (0xc0000e1080) (0xc0008b6320) Stream added, broadcasting: 5\nI0804 10:42:26.246501 443 log.go:172] (0xc0000e1080) Reply frame received for 5\nI0804 10:42:26.322168 443 log.go:172] (0xc0000e1080) Data frame received for 3\nI0804 10:42:26.322208 443 log.go:172] (0xc0005d9220) (3) Data frame handling\nI0804 10:42:26.322229 443 log.go:172] (0xc0005d9220) (3) Data frame sent\nI0804 10:42:26.322242 443 log.go:172] (0xc0000e1080) Data frame received for 3\nI0804 10:42:26.322249 443 log.go:172] (0xc0005d9220) (3) Data frame handling\nI0804 10:42:26.322301 443 log.go:172] (0xc0000e1080) Data frame received for 5\nI0804 10:42:26.322348 443 log.go:172] (0xc0008b6320) (5) Data frame handling\nI0804 10:42:26.322367 443 log.go:172] (0xc0008b6320) (5) Data frame sent\nI0804 10:42:26.322376 443 log.go:172] (0xc0000e1080) Data frame received for 5\nI0804 10:42:26.322386 443 log.go:172] (0xc0008b6320) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0804 10:42:26.324074 443 log.go:172] (0xc0000e1080) Data frame received for 1\nI0804 10:42:26.324104 443 log.go:172] (0xc0008b61e0) (1) Data frame handling\nI0804 10:42:26.324138 443 log.go:172] (0xc0008b61e0) (1) Data frame sent\nI0804 10:42:26.324234 443 log.go:172] (0xc0000e1080) (0xc0008b61e0) Stream removed, broadcasting: 1\nI0804 10:42:26.324313 443 log.go:172] (0xc0000e1080) Go away received\nI0804 10:42:26.324638 443 log.go:172] (0xc0000e1080) (0xc0008b61e0) Stream removed, broadcasting: 1\nI0804 10:42:26.324665 443 log.go:172] (0xc0000e1080) (0xc0005d9220) Stream removed, broadcasting: 3\nI0804 10:42:26.324685 443 log.go:172] (0xc0000e1080) (0xc0008b6320) Stream removed, broadcasting: 5\n" Aug 4 10:42:26.330: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 4 10:42:26.330: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 4 10:42:26.330: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6305 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 4 10:42:26.569: INFO: stderr: "I0804 10:42:26.454958 467 log.go:172] (0xc000ac2000) (0xc000968000) Create stream\nI0804 10:42:26.455010 467 log.go:172] (0xc000ac2000) (0xc000968000) Stream added, broadcasting: 1\nI0804 10:42:26.457564 467 log.go:172] (0xc000ac2000) Reply frame received for 1\nI0804 10:42:26.457622 467 log.go:172] (0xc000ac2000) (0xc000ab21e0) Create stream\nI0804 10:42:26.457642 467 log.go:172] (0xc000ac2000) (0xc000ab21e0) Stream added, broadcasting: 3\nI0804 10:42:26.458736 467 log.go:172] (0xc000ac2000) Reply frame received for 3\nI0804 10:42:26.458770 467 log.go:172] (0xc000ac2000) (0xc000a28460) Create stream\nI0804 10:42:26.458781 467 log.go:172] (0xc000ac2000) (0xc000a28460) Stream added, broadcasting: 5\nI0804 
10:42:26.459779 467 log.go:172] (0xc000ac2000) Reply frame received for 5\nI0804 10:42:26.523834 467 log.go:172] (0xc000ac2000) Data frame received for 5\nI0804 10:42:26.523857 467 log.go:172] (0xc000a28460) (5) Data frame handling\nI0804 10:42:26.523872 467 log.go:172] (0xc000a28460) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0804 10:42:26.560222 467 log.go:172] (0xc000ac2000) Data frame received for 3\nI0804 10:42:26.560266 467 log.go:172] (0xc000ab21e0) (3) Data frame handling\nI0804 10:42:26.560320 467 log.go:172] (0xc000ab21e0) (3) Data frame sent\nI0804 10:42:26.560663 467 log.go:172] (0xc000ac2000) Data frame received for 3\nI0804 10:42:26.560691 467 log.go:172] (0xc000ab21e0) (3) Data frame handling\nI0804 10:42:26.560851 467 log.go:172] (0xc000ac2000) Data frame received for 5\nI0804 10:42:26.560887 467 log.go:172] (0xc000a28460) (5) Data frame handling\nI0804 10:42:26.562858 467 log.go:172] (0xc000ac2000) Data frame received for 1\nI0804 10:42:26.562876 467 log.go:172] (0xc000968000) (1) Data frame handling\nI0804 10:42:26.562903 467 log.go:172] (0xc000968000) (1) Data frame sent\nI0804 10:42:26.562925 467 log.go:172] (0xc000ac2000) (0xc000968000) Stream removed, broadcasting: 1\nI0804 10:42:26.563053 467 log.go:172] (0xc000ac2000) Go away received\nI0804 10:42:26.563942 467 log.go:172] (0xc000ac2000) (0xc000968000) Stream removed, broadcasting: 1\nI0804 10:42:26.563979 467 log.go:172] (0xc000ac2000) (0xc000ab21e0) Stream removed, broadcasting: 3\nI0804 10:42:26.563994 467 log.go:172] (0xc000ac2000) (0xc000a28460) Stream removed, broadcasting: 5\n" Aug 4 10:42:26.569: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 4 10:42:26.569: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 4 10:42:26.569: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6305 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 4 10:42:26.801: INFO: stderr: "I0804 10:42:26.705679 486 log.go:172] (0xc0008f2000) (0xc00066b7c0) Create stream\nI0804 10:42:26.705733 486 log.go:172] (0xc0008f2000) (0xc00066b7c0) Stream added, broadcasting: 1\nI0804 10:42:26.707395 486 log.go:172] (0xc0008f2000) Reply frame received for 1\nI0804 10:42:26.707447 486 log.go:172] (0xc0008f2000) (0xc0004f6be0) Create stream\nI0804 10:42:26.707463 486 log.go:172] (0xc0008f2000) (0xc0004f6be0) Stream added, broadcasting: 3\nI0804 10:42:26.708345 486 log.go:172] (0xc0008f2000) Reply frame received for 3\nI0804 10:42:26.708376 486 log.go:172] (0xc0008f2000) (0xc00081f400) Create stream\nI0804 10:42:26.708385 486 log.go:172] (0xc0008f2000) (0xc00081f400) Stream added, broadcasting: 5\nI0804 10:42:26.709402 486 log.go:172] (0xc0008f2000) Reply frame received for 5\nI0804 10:42:26.758944 486 log.go:172] (0xc0008f2000) Data frame received for 5\nI0804 10:42:26.758972 486 log.go:172] (0xc00081f400) (5) Data frame handling\nI0804 10:42:26.759007 486 log.go:172] (0xc00081f400) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0804 10:42:26.794635 486 log.go:172] (0xc0008f2000) Data frame received for 5\nI0804 10:42:26.794689 486 log.go:172] (0xc00081f400) (5) Data frame handling\nI0804 10:42:26.794731 486 log.go:172] (0xc0008f2000) Data frame received for 3\nI0804 10:42:26.794755 486 log.go:172] (0xc0004f6be0) (3) Data frame handling\nI0804 
10:42:26.794782 486 log.go:172] (0xc0004f6be0) (3) Data frame sent\nI0804 10:42:26.794801 486 log.go:172] (0xc0008f2000) Data frame received for 3\nI0804 10:42:26.794817 486 log.go:172] (0xc0004f6be0) (3) Data frame handling\nI0804 10:42:26.796869 486 log.go:172] (0xc0008f2000) Data frame received for 1\nI0804 10:42:26.796886 486 log.go:172] (0xc00066b7c0) (1) Data frame handling\nI0804 10:42:26.796892 486 log.go:172] (0xc00066b7c0) (1) Data frame sent\nI0804 10:42:26.796903 486 log.go:172] (0xc0008f2000) (0xc00066b7c0) Stream removed, broadcasting: 1\nI0804 10:42:26.796974 486 log.go:172] (0xc0008f2000) Go away received\nI0804 10:42:26.797169 486 log.go:172] (0xc0008f2000) (0xc00066b7c0) Stream removed, broadcasting: 1\nI0804 10:42:26.797182 486 log.go:172] (0xc0008f2000) (0xc0004f6be0) Stream removed, broadcasting: 3\nI0804 10:42:26.797191 486 log.go:172] (0xc0008f2000) (0xc00081f400) Stream removed, broadcasting: 5\n" Aug 4 10:42:26.801: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 4 10:42:26.801: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 4 10:42:26.801: INFO: Waiting for statefulset status.replicas updated to 0 Aug 4 10:42:26.805: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Aug 4 10:42:36.812: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 4 10:42:36.812: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Aug 4 10:42:36.812: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Aug 4 10:42:36.846: INFO: POD NODE PHASE GRACE CONDITIONS Aug 4 10:42:36.846: INFO: ss-0 kali-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:41:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:41:41 +0000 UTC }] Aug 4 10:42:36.846: INFO: ss-1 kali-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:02 +0000 UTC }] Aug 4 10:42:36.846: INFO: ss-2 kali-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:02 +0000 UTC }] Aug 4 10:42:36.846: INFO: Aug 4 10:42:36.846: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 4 10:42:37.881: INFO: POD NODE PHASE GRACE CONDITIONS Aug 4 10:42:37.881: INFO: ss-0 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:41:41 +0000 UTC } {Ready 
False 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:41:41 +0000 UTC }] Aug 4 10:42:37.881: INFO: ss-1 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:02 +0000 UTC }] Aug 4 10:42:37.881: INFO: ss-2 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:02 +0000 UTC }] Aug 4 10:42:37.881: INFO: Aug 4 10:42:37.881: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 4 10:42:38.885: INFO: POD NODE PHASE GRACE CONDITIONS Aug 4 10:42:38.886: INFO: ss-0 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:41:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:41:41 +0000 UTC }] Aug 4 10:42:38.886: INFO: ss-1 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:02 +0000 UTC }] Aug 4 10:42:38.886: INFO: ss-2 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:02 +0000 UTC }] Aug 4 10:42:38.886: INFO: Aug 4 10:42:38.886: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 4 10:42:39.912: INFO: POD NODE PHASE GRACE CONDITIONS Aug 4 10:42:39.912: INFO: ss-0 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:41:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:26 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:41:41 +0000 UTC }] Aug 4 10:42:39.912: INFO: ss-2 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:02 +0000 UTC }] Aug 4 10:42:39.912: INFO: Aug 4 10:42:39.912: INFO: StatefulSet ss has not reached scale 0, at 2 Aug 4 10:42:40.917: INFO: POD NODE PHASE GRACE CONDITIONS Aug 4 10:42:40.917: INFO: ss-0 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:41:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:41:41 +0000 UTC }] Aug 4 10:42:40.918: INFO: ss-2 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:02 +0000 UTC }] Aug 4 10:42:40.918: INFO: Aug 4 10:42:40.918: INFO: StatefulSet ss has not reached scale 0, at 2 Aug 4 10:42:41.932: INFO: POD NODE PHASE GRACE CONDITIONS Aug 4 10:42:41.932: INFO: ss-0 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:41:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:41:41 +0000 UTC }] Aug 4 10:42:41.932: INFO: ss-2 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:02 +0000 UTC }] Aug 4 10:42:41.932: INFO: Aug 4 10:42:41.932: INFO: StatefulSet ss has not reached scale 0, at 2 Aug 4 10:42:42.937: INFO: POD NODE PHASE GRACE CONDITIONS Aug 4 10:42:42.937: INFO: ss-0 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:41:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled 
True 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:41:41 +0000 UTC }] Aug 4 10:42:42.937: INFO: ss-2 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-04 10:42:02 +0000 UTC }] Aug 4 10:42:42.937: INFO: Aug 4 10:42:42.937: INFO: StatefulSet ss has not reached scale 0, at 2 Aug 4 10:42:43.940: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.88134282s Aug 4 10:42:44.944: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.877546401s Aug 4 10:42:45.947: INFO: Verifying statefulset ss doesn't scale past 0 for another 874.243547ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-6305 Aug 4 10:42:46.952: INFO: Scaling statefulset ss to 0 Aug 4 10:42:46.964: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Aug 4 10:42:46.967: INFO: Deleting all statefulset in ns statefulset-6305 Aug 4 10:42:46.969: INFO: Scaling statefulset ss to 0 Aug 4 10:42:46.977: INFO: Waiting for statefulset status.replicas updated to 0 Aug 4 10:42:46.979: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:42:46.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6305" for this suite. 
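For context, the burst-scaling behaviour exercised above depends on the StatefulSet using parallel pod management: readiness is deliberately broken by moving index.html out of the Apache document root (defeating the webserver's readiness probe), and scale-up and scale-down still proceed while pods are unready. A minimal sketch of a comparable setup follows; the image, probe path, labels and ports are assumptions inferred from the log, not the conformance suite's exact manifest.

# Illustrative sketch only -- image, probe and labels are assumptions, not the suite's manifest.
kubectl apply -n statefulset-6305 -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: test                      # headless service backing the StatefulSet, as created above
spec:
  clusterIP: None
  selector:
    app: ss
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  replicas: 1
  podManagementPolicy: Parallel   # "burst" scaling: create/delete pods without waiting for ordering
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: webserver
        image: httpd:2.4
        readinessProbe:
          httpGet:
            path: /index.html
            port: 80
EOF

# Break readiness the same way the test does: hide the file the probe serves.
kubectl exec -n statefulset-6305 ss-0 -- /bin/sh -c 'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'

# With Parallel pod management, scaling is not blocked by the unready pod.
kubectl scale statefulset ss -n statefulset-6305 --replicas=3
kubectl scale statefulset ss -n statefulset-6305 --replicas=0

With the default OrderedReady policy the same scale-up would stall on the unready ss-0, which is exactly the behaviour the "Burst scaling should run to completion even with unhealthy pods" assertions are distinguishing.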
• [SLOW TEST:65.833 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":275,"completed":30,"skipped":583,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:42:47.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 4 10:42:47.735: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 4 10:42:49.747: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134567, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134567, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134567, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134567, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 4 10:42:51.752: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134567, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134567, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134567, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134567, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 4 10:42:54.860: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:43:05.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1599" for this suite. STEP: Destroying namespace "webhook-1599-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.149 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":275,"completed":31,"skipped":616,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:43:05.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 4 10:43:05.302: INFO: Waiting up to 5m0s for pod 
"busybox-privileged-false-4dd6bf04-7618-4fd6-af74-44591336278c" in namespace "security-context-test-6951" to be "Succeeded or Failed" Aug 4 10:43:05.375: INFO: Pod "busybox-privileged-false-4dd6bf04-7618-4fd6-af74-44591336278c": Phase="Pending", Reason="", readiness=false. Elapsed: 72.76536ms Aug 4 10:43:07.477: INFO: Pod "busybox-privileged-false-4dd6bf04-7618-4fd6-af74-44591336278c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.174607872s Aug 4 10:43:09.481: INFO: Pod "busybox-privileged-false-4dd6bf04-7618-4fd6-af74-44591336278c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.179215203s Aug 4 10:43:09.482: INFO: Pod "busybox-privileged-false-4dd6bf04-7618-4fd6-af74-44591336278c" satisfied condition "Succeeded or Failed" Aug 4 10:43:09.496: INFO: Got logs for pod "busybox-privileged-false-4dd6bf04-7618-4fd6-af74-44591336278c": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:43:09.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6951" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":32,"skipped":648,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:43:09.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 4 10:43:10.911: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 4 10:43:12.927: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134590, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134590, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134591, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134590, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 4 10:43:14.949: INFO: deployment 
status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134590, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134590, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134591, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134590, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 4 10:43:17.998: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:43:18.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2715" for this suite. STEP: Destroying namespace "webhook-2715-markers" for this suite. 
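The listing test above drives the admissionregistration API with a label selector rather than individual names: several validating webhooks that reject a non-compliant ConfigMap are created under one shared label, enumerated, and then removed as a collection, after which the previously rejected ConfigMap is admitted. A rough kubectl equivalent is sketched below; the label key/value and the ConfigMap payload are assumed values, not the ones the suite generates.

# Illustrative sketch only -- the label and ConfigMap data are assumptions.
# List every validating webhook configuration created under the shared label.
kubectl get validatingwebhookconfigurations -l e2e-list-test-webhooks=example-run

# While those webhooks exist, a ConfigMap matching their deny rule is rejected
# at admission time (expected to fail here).
kubectl create configmap non-compliant-cm --from-literal=webhook-e2e-test=webhook-disallow

# Deleting the collection by label removes the restriction in one call.
kubectl delete validatingwebhookconfigurations -l e2e-list-test-webhooks=example-run

# The same ConfigMap creation is now admitted.
kubectl create configmap non-compliant-cm --from-literal=webhook-e2e-test=webhook-disallow

Collection deletion via the label selector is what "Deleting the collection of validation webhooks" refers to in the steps above.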
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.194 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":275,"completed":33,"skipped":653,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:43:18.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 4 10:43:19.742: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 4 10:43:21.980: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134599, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134599, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134600, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134599, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 4 10:43:25.256: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Aug 4 10:43:29.551: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config attach --namespace=webhook-2541 to-be-attached-pod -i -c=container1' Aug 4 10:43:29.711: INFO: rc: 1 [AfterEach] 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:43:29.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2541" for this suite. STEP: Destroying namespace "webhook-2541-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.805 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":275,"completed":34,"skipped":658,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:43:30.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-3772f5ad-8cdf-4b24-b5bd-d9dcdff3e20f STEP: Creating a pod to test consume secrets Aug 4 10:43:31.207: INFO: Waiting up to 5m0s for pod "pod-secrets-283ae9b6-eb59-4689-aceb-1e9aebac1d5a" in namespace "secrets-1595" to be "Succeeded or Failed" Aug 4 10:43:31.210: INFO: Pod "pod-secrets-283ae9b6-eb59-4689-aceb-1e9aebac1d5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.853495ms Aug 4 10:43:33.343: INFO: Pod "pod-secrets-283ae9b6-eb59-4689-aceb-1e9aebac1d5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135291208s Aug 4 10:43:35.345: INFO: Pod "pod-secrets-283ae9b6-eb59-4689-aceb-1e9aebac1d5a": Phase="Running", Reason="", readiness=true. Elapsed: 4.137895389s Aug 4 10:43:37.375: INFO: Pod "pod-secrets-283ae9b6-eb59-4689-aceb-1e9aebac1d5a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.167590583s STEP: Saw pod success Aug 4 10:43:37.375: INFO: Pod "pod-secrets-283ae9b6-eb59-4689-aceb-1e9aebac1d5a" satisfied condition "Succeeded or Failed" Aug 4 10:43:37.378: INFO: Trying to get logs from node kali-worker pod pod-secrets-283ae9b6-eb59-4689-aceb-1e9aebac1d5a container secret-volume-test: STEP: delete the pod Aug 4 10:43:37.424: INFO: Waiting for pod pod-secrets-283ae9b6-eb59-4689-aceb-1e9aebac1d5a to disappear Aug 4 10:43:37.433: INFO: Pod pod-secrets-283ae9b6-eb59-4689-aceb-1e9aebac1d5a no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:43:37.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1595" for this suite. • [SLOW TEST:6.939 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":35,"skipped":704,"failed":0} SSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:43:37.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Aug 4 10:43:37.514: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 4 10:43:37.560: INFO: Waiting for terminating namespaces to be deleted... 
Aug 4 10:43:37.563: INFO: Logging pods the kubelet thinks is on node kali-worker before test Aug 4 10:43:37.569: INFO: kindnet-njbgt from kube-system started at 2020-07-10 10:28:30 +0000 UTC (1 container statuses recorded) Aug 4 10:43:37.569: INFO: Container kindnet-cni ready: true, restart count 1 Aug 4 10:43:37.569: INFO: kube-proxy-qwsfx from kube-system started at 2020-07-10 10:28:31 +0000 UTC (1 container statuses recorded) Aug 4 10:43:37.569: INFO: Container kube-proxy ready: true, restart count 0 Aug 4 10:43:37.569: INFO: rally-78930d2e-0152qohr-6bc8d96dbb-wrddw from c-rally-78930d2e-faqr1eh0 started at 2020-08-04 10:43:21 +0000 UTC (1 container statuses recorded) Aug 4 10:43:37.569: INFO: Container rally-78930d2e-0152qohr ready: true, restart count 0 Aug 4 10:43:37.569: INFO: rally-824618b1-6cukkjuh-lb7rq from c-rally-824618b1-4lzsfcdd started at 2020-08-01 10:57:26 +0000 UTC (1 container statuses recorded) Aug 4 10:43:37.569: INFO: Container rally-824618b1-6cukkjuh ready: true, restart count 3 Aug 4 10:43:37.569: INFO: rally-19e4df10-30wkw9yu-glqpf from c-rally-19e4df10-7fs771wk started at 2020-08-01 11:12:55 +0000 UTC (1 container statuses recorded) Aug 4 10:43:37.569: INFO: Container rally-19e4df10-30wkw9yu ready: true, restart count 0 Aug 4 10:43:37.569: INFO: Logging pods the kubelet thinks is on node kali-worker2 before test Aug 4 10:43:37.574: INFO: rally-824618b1-6cukkjuh-m84l4 from c-rally-824618b1-4lzsfcdd started at 2020-08-01 10:57:24 +0000 UTC (1 container statuses recorded) Aug 4 10:43:37.574: INFO: Container rally-824618b1-6cukkjuh ready: true, restart count 3 Aug 4 10:43:37.574: INFO: rally-78930d2e-0152qohr-5bcc8df967-5fjs6 from c-rally-78930d2e-faqr1eh0 started at 2020-08-04 10:43:26 +0000 UTC (1 container statuses recorded) Aug 4 10:43:37.574: INFO: Container rally-78930d2e-0152qohr ready: true, restart count 0 Aug 4 10:43:37.574: INFO: kube-proxy-vk6jr from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded) Aug 4 10:43:37.574: INFO: Container kube-proxy ready: true, restart count 0 Aug 4 10:43:37.574: INFO: kindnet-pk4xb from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded) Aug 4 10:43:37.574: INFO: Container kindnet-cni ready: true, restart count 1 Aug 4 10:43:37.574: INFO: rally-19e4df10-30wkw9yu-qbmr7 from c-rally-19e4df10-7fs771wk started at 2020-08-01 11:12:55 +0000 UTC (1 container statuses recorded) Aug 4 10:43:37.574: INFO: Container rally-19e4df10-30wkw9yu ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
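The steps that follow create three pods that all request hostPort 54321 but differ in hostIP or protocol, so they can land on the same node without a port conflict. A rough sketch of the first two specs (the port and loopback addresses are from the log; pod names, image and containerPort are assumptions):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: hostport-pod1
spec:
  containers:
  - name: sleeper
    image: busybox
    command: ["sleep", "3600"]
    ports:
    - containerPort: 8080
      hostPort: 54321       # same host port on both pods...
      hostIP: 127.0.0.1
      protocol: TCP
---
apiVersion: v1
kind: Pod
metadata:
  name: hostport-pod2
spec:
  containers:
  - name: sleeper
    image: busybox
    command: ["sleep", "3600"]
    ports:
    - containerPort: 8080
      hostPort: 54321
      hostIP: 127.0.0.2     # ...but a different host IP, so no conflict
      protocol: TCP
EOF
# a third pod may reuse hostIP 127.0.0.2 and port 54321 as long as it asks for protocol: UDP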
STEP: verifying the node has the label kubernetes.io/e2e-2e741ee2-6513-48ec-8a2a-d38b622cdff7 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-2e741ee2-6513-48ec-8a2a-d38b622cdff7 off the node kali-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-2e741ee2-6513-48ec-8a2a-d38b622cdff7 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:43:53.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7300" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:16.450 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":275,"completed":36,"skipped":707,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:43:53.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:44:05.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3799" for this suite. • [SLOW TEST:11.261 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":275,"completed":37,"skipped":724,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:44:05.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Aug 4 10:44:05.221: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 4 10:44:05.231: INFO: Waiting for terminating namespaces to be deleted... Aug 4 10:44:05.234: INFO: Logging pods the kubelet thinks is on node kali-worker before test Aug 4 10:44:05.240: INFO: rally-19e4df10-30wkw9yu-glqpf from c-rally-19e4df10-7fs771wk started at 2020-08-01 11:12:55 +0000 UTC (1 container statuses recorded) Aug 4 10:44:05.241: INFO: Container rally-19e4df10-30wkw9yu ready: true, restart count 0 Aug 4 10:44:05.241: INFO: rally-78930d2e-0152qohr-6bc8d96dbb-wrddw from c-rally-78930d2e-faqr1eh0 started at 2020-08-04 10:43:21 +0000 UTC (1 container statuses recorded) Aug 4 10:44:05.241: INFO: Container rally-78930d2e-0152qohr ready: false, restart count 0 Aug 4 10:44:05.241: INFO: kindnet-njbgt from kube-system started at 2020-07-10 10:28:30 +0000 UTC (1 container statuses recorded) Aug 4 10:44:05.241: INFO: Container kindnet-cni ready: true, restart count 1 Aug 4 10:44:05.241: INFO: kube-proxy-qwsfx from kube-system started at 2020-07-10 10:28:31 +0000 UTC (1 container statuses recorded) Aug 4 10:44:05.241: INFO: Container kube-proxy ready: true, restart count 0 Aug 4 10:44:05.241: INFO: rally-824618b1-6cukkjuh-lb7rq from c-rally-824618b1-4lzsfcdd started at 2020-08-01 10:57:26 +0000 UTC (1 container statuses recorded) Aug 4 10:44:05.241: INFO: Container rally-824618b1-6cukkjuh ready: true, restart count 3 Aug 4 10:44:05.241: INFO: Logging pods the kubelet thinks is on node kali-worker2 before test Aug 4 10:44:05.249: INFO: rally-19e4df10-30wkw9yu-qbmr7 from c-rally-19e4df10-7fs771wk started at 2020-08-01 11:12:55 +0000 UTC (1 container statuses recorded) Aug 4 10:44:05.249: INFO: Container rally-19e4df10-30wkw9yu ready: true, restart count 0 Aug 4 10:44:05.249: INFO: kube-proxy-vk6jr from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded) Aug 4 10:44:05.249: INFO: Container kube-proxy ready: true, restart count 0 Aug 4 10:44:05.249: INFO: kindnet-pk4xb from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded) Aug 4 10:44:05.249: INFO: Container kindnet-cni ready: true, restart count 1 Aug 4 10:44:05.249: INFO: pod3 from sched-pred-7300 started at 2020-08-04 10:43:49 +0000 UTC (1 container statuses recorded) Aug 4 10:44:05.249: INFO: Container pod3 ready: false, restart count 0 Aug 4 
10:44:05.249: INFO: rally-824618b1-6cukkjuh-m84l4 from c-rally-824618b1-4lzsfcdd started at 2020-08-01 10:57:24 +0000 UTC (1 container statuses recorded) Aug 4 10:44:05.249: INFO: Container rally-824618b1-6cukkjuh ready: true, restart count 3 Aug 4 10:44:05.249: INFO: pod2 from sched-pred-7300 started at 2020-08-04 10:43:45 +0000 UTC (1 container statuses recorded) Aug 4 10:44:05.249: INFO: Container pod2 ready: false, restart count 0 Aug 4 10:44:05.249: INFO: pod1 from sched-pred-7300 started at 2020-08-04 10:43:41 +0000 UTC (1 container statuses recorded) Aug 4 10:44:05.249: INFO: Container pod1 ready: false, restart count 0 Aug 4 10:44:05.249: INFO: rally-78930d2e-0152qohr-5bcc8df967-5fjs6 from c-rally-78930d2e-faqr1eh0 started at 2020-08-04 10:43:26 +0000 UTC (1 container statuses recorded) Aug 4 10:44:05.249: INFO: Container rally-78930d2e-0152qohr ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-9432951f-0b29-4a41-9723-36e98f7d63db 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-9432951f-0b29-4a41-9723-36e98f7d63db off the node kali-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-9432951f-0b29-4a41-9723-36e98f7d63db [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:44:13.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8968" for this suite. 
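The NodeSelector check above applies a random label to kali-worker and relaunches the pod with a matching nodeSelector. The same round trip done by hand, with a shorter label key (the key, pod name and image below are assumptions):

kubectl label node kali-worker example.com/e2e-demo=42
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-demo
spec:
  nodeSelector:
    example.com/e2e-demo: "42"   # only nodes carrying this label are eligible
  containers:
  - name: sleeper
    image: busybox
    command: ["sleep", "3600"]
EOF
# remove the label again afterwards, as the test does in its cleanup step
kubectl label node kali-worker example.com/e2e-demo-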
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:8.359 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":275,"completed":38,"skipped":765,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:44:13.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:44:17.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-434" for this suite. 
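The Kubelet check just above runs a busybox command that always fails and asserts that the container status reports a terminated reason. A minimal sketch of the same assertion (pod name and image are assumptions; for a non-zero exit the reason is normally "Error"):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: bin-false-demo
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: busybox
    command: ["/bin/false"]    # exits non-zero immediately
EOF
# once the container has exited, its terminated state carries a reason
kubectl get pod bin-false-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'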
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":275,"completed":39,"skipped":807,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:44:17.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 4 10:44:17.772: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Aug 4 10:44:20.715: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6105 create -f -' Aug 4 10:44:24.421: INFO: stderr: "" Aug 4 10:44:24.421: INFO: stdout: "e2e-test-crd-publish-openapi-7816-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Aug 4 10:44:24.421: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6105 delete e2e-test-crd-publish-openapi-7816-crds test-cr' Aug 4 10:44:24.538: INFO: stderr: "" Aug 4 10:44:24.538: INFO: stdout: "e2e-test-crd-publish-openapi-7816-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Aug 4 10:44:24.538: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6105 apply -f -' Aug 4 10:44:24.780: INFO: stderr: "" Aug 4 10:44:24.780: INFO: stdout: "e2e-test-crd-publish-openapi-7816-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Aug 4 10:44:24.780: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6105 delete e2e-test-crd-publish-openapi-7816-crds test-cr' Aug 4 10:44:24.883: INFO: stderr: "" Aug 4 10:44:24.884: INFO: stdout: "e2e-test-crd-publish-openapi-7816-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Aug 4 10:44:24.884: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7816-crds' Aug 4 10:44:25.101: INFO: stderr: "" Aug 4 10:44:25.101: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7816-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:44:28.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6105" for this suite. 
• [SLOW TEST:10.350 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":275,"completed":40,"skipped":845,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:44:28.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 4 10:44:28.669: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 4 10:44:30.679: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134668, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134668, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134668, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134668, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 4 10:44:32.729: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134668, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134668, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134668, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134668, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 4 10:44:35.714: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 4 10:44:35.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9801-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:44:37.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2599" for this suite. STEP: Destroying namespace "webhook-2599-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.091 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":275,"completed":41,"skipped":845,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:44:37.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1206 STEP: creating the pod Aug 4 10:44:37.259: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5009' Aug 4 10:44:38.039: INFO: stderr: "" Aug 4 10:44:38.039: INFO: stdout: "pod/pause created\n" Aug 4 10:44:38.039: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Aug 4 10:44:38.039: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-5009" to be "running and ready" Aug 4 10:44:38.117: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. 
Elapsed: 77.966898ms Aug 4 10:44:40.136: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097412458s Aug 4 10:44:42.139: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.100070354s Aug 4 10:44:42.139: INFO: Pod "pause" satisfied condition "running and ready" Aug 4 10:44:42.139: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: adding the label testing-label with value testing-label-value to a pod Aug 4 10:44:42.139: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-5009' Aug 4 10:44:42.267: INFO: stderr: "" Aug 4 10:44:42.267: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Aug 4 10:44:42.267: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5009' Aug 4 10:44:42.428: INFO: stderr: "" Aug 4 10:44:42.428: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Aug 4 10:44:42.428: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-5009' Aug 4 10:44:42.590: INFO: stderr: "" Aug 4 10:44:42.590: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Aug 4 10:44:42.590: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5009' Aug 4 10:44:42.682: INFO: stderr: "" Aug 4 10:44:42.682: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1213 STEP: using delete to clean up resources Aug 4 10:44:42.682: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5009' Aug 4 10:44:42.878: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Aug 4 10:44:42.878: INFO: stdout: "pod \"pause\" force deleted\n" Aug 4 10:44:42.878: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-5009' Aug 4 10:44:43.100: INFO: stderr: "No resources found in kubectl-5009 namespace.\n" Aug 4 10:44:43.100: INFO: stdout: "" Aug 4 10:44:43.100: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-5009 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 4 10:44:43.196: INFO: stderr: "" Aug 4 10:44:43.196: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:44:43.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5009" for this suite. • [SLOW TEST:6.079 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1203 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":275,"completed":42,"skipped":902,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:44:43.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-303 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 4 10:44:43.833: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Aug 4 10:44:44.762: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 4 10:44:47.196: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 4 10:44:48.933: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 4 10:44:50.766: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 4 10:44:52.766: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 4 10:44:54.766: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 4 10:44:56.765: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 4 10:44:58.766: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 4 10:45:00.767: INFO: The status of Pod netserver-0 is Running 
(Ready = true) Aug 4 10:45:00.773: INFO: The status of Pod netserver-1 is Running (Ready = false) Aug 4 10:45:02.777: INFO: The status of Pod netserver-1 is Running (Ready = false) Aug 4 10:45:04.777: INFO: The status of Pod netserver-1 is Running (Ready = false) Aug 4 10:45:06.777: INFO: The status of Pod netserver-1 is Running (Ready = false) Aug 4 10:45:08.777: INFO: The status of Pod netserver-1 is Running (Ready = false) Aug 4 10:45:10.789: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Aug 4 10:45:14.818: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.175:8080/dial?request=hostname&protocol=udp&host=10.244.2.174&port=8081&tries=1'] Namespace:pod-network-test-303 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 4 10:45:14.818: INFO: >>> kubeConfig: /root/.kube/config I0804 10:45:14.850700 7 log.go:172] (0xc00285b1e0) (0xc001315cc0) Create stream I0804 10:45:14.850728 7 log.go:172] (0xc00285b1e0) (0xc001315cc0) Stream added, broadcasting: 1 I0804 10:45:14.853906 7 log.go:172] (0xc00285b1e0) Reply frame received for 1 I0804 10:45:14.853957 7 log.go:172] (0xc00285b1e0) (0xc000f2a280) Create stream I0804 10:45:14.853973 7 log.go:172] (0xc00285b1e0) (0xc000f2a280) Stream added, broadcasting: 3 I0804 10:45:14.855313 7 log.go:172] (0xc00285b1e0) Reply frame received for 3 I0804 10:45:14.855367 7 log.go:172] (0xc00285b1e0) (0xc000f2a320) Create stream I0804 10:45:14.855386 7 log.go:172] (0xc00285b1e0) (0xc000f2a320) Stream added, broadcasting: 5 I0804 10:45:14.856946 7 log.go:172] (0xc00285b1e0) Reply frame received for 5 I0804 10:45:14.926894 7 log.go:172] (0xc00285b1e0) Data frame received for 3 I0804 10:45:14.926939 7 log.go:172] (0xc000f2a280) (3) Data frame handling I0804 10:45:14.926961 7 log.go:172] (0xc000f2a280) (3) Data frame sent I0804 10:45:14.927516 7 log.go:172] (0xc00285b1e0) Data frame received for 5 I0804 10:45:14.927557 7 log.go:172] (0xc000f2a320) (5) Data frame handling I0804 10:45:14.927630 7 log.go:172] (0xc00285b1e0) Data frame received for 3 I0804 10:45:14.927640 7 log.go:172] (0xc000f2a280) (3) Data frame handling I0804 10:45:14.929531 7 log.go:172] (0xc00285b1e0) Data frame received for 1 I0804 10:45:14.929555 7 log.go:172] (0xc001315cc0) (1) Data frame handling I0804 10:45:14.929575 7 log.go:172] (0xc001315cc0) (1) Data frame sent I0804 10:45:14.929589 7 log.go:172] (0xc00285b1e0) (0xc001315cc0) Stream removed, broadcasting: 1 I0804 10:45:14.929655 7 log.go:172] (0xc00285b1e0) Go away received I0804 10:45:14.929935 7 log.go:172] (0xc00285b1e0) (0xc001315cc0) Stream removed, broadcasting: 1 I0804 10:45:14.929955 7 log.go:172] (0xc00285b1e0) (0xc000f2a280) Stream removed, broadcasting: 3 I0804 10:45:14.929964 7 log.go:172] (0xc00285b1e0) (0xc000f2a320) Stream removed, broadcasting: 5 Aug 4 10:45:14.929: INFO: Waiting for responses: map[] Aug 4 10:45:14.932: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.175:8080/dial?request=hostname&protocol=udp&host=10.244.1.83&port=8081&tries=1'] Namespace:pod-network-test-303 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 4 10:45:14.932: INFO: >>> kubeConfig: /root/.kube/config I0804 10:45:14.962982 7 log.go:172] (0xc0029e53f0) (0xc001533720) Create stream I0804 10:45:14.963022 7 log.go:172] (0xc0029e53f0) (0xc001533720) Stream added, broadcasting: 1 I0804 10:45:14.966717 7 log.go:172] 
(0xc0029e53f0) Reply frame received for 1 I0804 10:45:14.966763 7 log.go:172] (0xc0029e53f0) (0xc001533860) Create stream I0804 10:45:14.966778 7 log.go:172] (0xc0029e53f0) (0xc001533860) Stream added, broadcasting: 3 I0804 10:45:14.967676 7 log.go:172] (0xc0029e53f0) Reply frame received for 3 I0804 10:45:14.967718 7 log.go:172] (0xc0029e53f0) (0xc001533900) Create stream I0804 10:45:14.967731 7 log.go:172] (0xc0029e53f0) (0xc001533900) Stream added, broadcasting: 5 I0804 10:45:14.968610 7 log.go:172] (0xc0029e53f0) Reply frame received for 5 I0804 10:45:15.025397 7 log.go:172] (0xc0029e53f0) Data frame received for 3 I0804 10:45:15.025422 7 log.go:172] (0xc001533860) (3) Data frame handling I0804 10:45:15.025442 7 log.go:172] (0xc001533860) (3) Data frame sent I0804 10:45:15.025456 7 log.go:172] (0xc0029e53f0) Data frame received for 3 I0804 10:45:15.025464 7 log.go:172] (0xc001533860) (3) Data frame handling I0804 10:45:15.025750 7 log.go:172] (0xc0029e53f0) Data frame received for 5 I0804 10:45:15.025763 7 log.go:172] (0xc001533900) (5) Data frame handling I0804 10:45:15.027068 7 log.go:172] (0xc0029e53f0) Data frame received for 1 I0804 10:45:15.027102 7 log.go:172] (0xc001533720) (1) Data frame handling I0804 10:45:15.027118 7 log.go:172] (0xc001533720) (1) Data frame sent I0804 10:45:15.027135 7 log.go:172] (0xc0029e53f0) (0xc001533720) Stream removed, broadcasting: 1 I0804 10:45:15.027213 7 log.go:172] (0xc0029e53f0) (0xc001533720) Stream removed, broadcasting: 1 I0804 10:45:15.027249 7 log.go:172] (0xc0029e53f0) (0xc001533860) Stream removed, broadcasting: 3 I0804 10:45:15.027262 7 log.go:172] (0xc0029e53f0) (0xc001533900) Stream removed, broadcasting: 5 Aug 4 10:45:15.027: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 I0804 10:45:15.027319 7 log.go:172] (0xc0029e53f0) Go away received Aug 4 10:45:15.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-303" for this suite. 
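The connectivity probe above is a curl, exec'd in the client pod, against the netserver's /dial endpoint with protocol=udp; the pod IPs are specific to this run. Re-running it by hand on a live cluster would look like the following (namespace, pod names and URL taken from the log):

kubectl -n pod-network-test-303 exec test-container-pod -- \
  curl -g -q -s 'http://10.244.2.175:8080/dial?request=hostname&protocol=udp&host=10.244.2.174&port=8081&tries=1'
# a successful probe returns a small JSON document listing the hostnames that answered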
• [SLOW TEST:31.825 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":275,"completed":43,"skipped":920,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:45:15.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Aug 4 10:45:29.772: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-8304 PodName:pod-sharedvolume-86bf4931-0627-4794-897f-28d216b9d565 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 4 10:45:29.772: INFO: >>> kubeConfig: /root/.kube/config I0804 10:45:29.871891 7 log.go:172] (0xc0029e5970) (0xc000329f40) Create stream I0804 10:45:29.871924 7 log.go:172] (0xc0029e5970) (0xc000329f40) Stream added, broadcasting: 1 I0804 10:45:29.873785 7 log.go:172] (0xc0029e5970) Reply frame received for 1 I0804 10:45:29.873823 7 log.go:172] (0xc0029e5970) (0xc000264320) Create stream I0804 10:45:29.873835 7 log.go:172] (0xc0029e5970) (0xc000264320) Stream added, broadcasting: 3 I0804 10:45:29.874445 7 log.go:172] (0xc0029e5970) Reply frame received for 3 I0804 10:45:29.874470 7 log.go:172] (0xc0029e5970) (0xc000bc2be0) Create stream I0804 10:45:29.874476 7 log.go:172] (0xc0029e5970) (0xc000bc2be0) Stream added, broadcasting: 5 I0804 10:45:29.875082 7 log.go:172] (0xc0029e5970) Reply frame received for 5 I0804 10:45:29.924632 7 log.go:172] (0xc0029e5970) Data frame received for 3 I0804 10:45:29.924702 7 log.go:172] (0xc000264320) (3) Data frame handling I0804 10:45:29.924857 7 log.go:172] (0xc000264320) (3) Data frame sent I0804 10:45:29.924895 7 log.go:172] (0xc0029e5970) Data frame received for 5 I0804 10:45:29.924935 7 log.go:172] (0xc000bc2be0) (5) Data frame handling I0804 10:45:29.924986 7 log.go:172] (0xc0029e5970) Data frame received for 3 I0804 10:45:29.925018 7 log.go:172] (0xc000264320) (3) Data frame handling I0804 10:45:29.926896 7 log.go:172] (0xc0029e5970) Data frame received for 1 I0804 10:45:29.926930 7 log.go:172] (0xc000329f40) (1) Data frame handling I0804 10:45:29.926952 7 log.go:172] (0xc000329f40) (1) Data frame sent I0804 10:45:29.926969 7 log.go:172] (0xc0029e5970) (0xc000329f40) Stream removed, broadcasting: 1 I0804 
10:45:29.926988 7 log.go:172] (0xc0029e5970) Go away received I0804 10:45:29.927080 7 log.go:172] (0xc0029e5970) (0xc000329f40) Stream removed, broadcasting: 1 I0804 10:45:29.927101 7 log.go:172] (0xc0029e5970) (0xc000264320) Stream removed, broadcasting: 3 I0804 10:45:29.927112 7 log.go:172] (0xc0029e5970) (0xc000bc2be0) Stream removed, broadcasting: 5 Aug 4 10:45:29.927: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:45:29.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8304" for this suite. • [SLOW TEST:15.296 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":275,"completed":44,"skipped":924,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:45:30.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Aug 4 10:45:41.250: INFO: Successfully updated pod "adopt-release-qpg2t" STEP: Checking that the Job readopts the Pod Aug 4 10:45:41.250: INFO: Waiting up to 15m0s for pod "adopt-release-qpg2t" in namespace "job-5400" to be "adopted" Aug 4 10:45:41.253: INFO: Pod "adopt-release-qpg2t": Phase="Running", Reason="", readiness=true. Elapsed: 2.445187ms Aug 4 10:45:43.257: INFO: Pod "adopt-release-qpg2t": Phase="Running", Reason="", readiness=true. Elapsed: 2.006506233s Aug 4 10:45:43.257: INFO: Pod "adopt-release-qpg2t" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Aug 4 10:45:43.790: INFO: Successfully updated pod "adopt-release-qpg2t" STEP: Checking that the Job releases the Pod Aug 4 10:45:43.791: INFO: Waiting up to 15m0s for pod "adopt-release-qpg2t" in namespace "job-5400" to be "released" Aug 4 10:45:43.822: INFO: Pod "adopt-release-qpg2t": Phase="Running", Reason="", readiness=true. Elapsed: 31.147309ms Aug 4 10:45:46.450: INFO: Pod "adopt-release-qpg2t": Phase="Running", Reason="", readiness=true. Elapsed: 2.659425986s Aug 4 10:45:46.450: INFO: Pod "adopt-release-qpg2t" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:45:46.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-5400" for this suite. 
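The adopt/release step above orphans one of the Job's pods and then strips its labels. Reproducing the orphaning by hand is a JSON patch that drops the pod's ownerReferences; the Job controller re-adopts it because the labels still match its selector (pod and namespace names are from this run and would differ elsewhere):

kubectl -n job-5400 patch pod adopt-release-qpg2t --type=json \
  -p='[{"op":"remove","path":"/metadata/ownerReferences"}]'
# shortly afterwards the controller re-adds itself as owner
kubectl -n job-5400 get pod adopt-release-qpg2t \
  -o jsonpath='{.metadata.ownerReferences[0].name}'
# removing the labels that match the Job's selector instead makes the controller release the pod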
• [SLOW TEST:16.344 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":275,"completed":45,"skipped":946,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:45:46.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:45:47.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-9806" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":275,"completed":46,"skipped":971,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:45:47.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0804 10:46:00.401350 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
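Half of the dependents in this garbage-collector test carry two ownerReferences, one per ReplicationController, which is why they survive the deletion of simpletest-rc-to-be-deleted while its sole-owned pods do not. One way to see both owners on a live run (namespace from the log; the custom-columns expression is just an illustration):

kubectl -n gc-3595 get pods \
  -o custom-columns='POD:.metadata.name,OWNERS:.metadata.ownerReferences[*].name'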
Aug 4 10:46:00.401: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:46:00.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3595" for this suite. • [SLOW TEST:13.569 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":275,"completed":47,"skipped":1023,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:46:00.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-42c7dc17-7355-4387-9a2d-82fa725f5591 STEP: Creating a pod to test consume configMaps Aug 4 10:46:01.466: INFO: Waiting up to 5m0s for pod "pod-configmaps-5c886c3e-f02b-4c50-8559-180335b9b7a8" in namespace "configmap-6974" to be "Succeeded or Failed" Aug 4 10:46:01.530: INFO: Pod "pod-configmaps-5c886c3e-f02b-4c50-8559-180335b9b7a8": Phase="Pending", Reason="", readiness=false. Elapsed: 64.280301ms Aug 4 10:46:03.534: INFO: Pod "pod-configmaps-5c886c3e-f02b-4c50-8559-180335b9b7a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067878455s Aug 4 10:46:05.564: INFO: Pod "pod-configmaps-5c886c3e-f02b-4c50-8559-180335b9b7a8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.098104896s STEP: Saw pod success Aug 4 10:46:05.564: INFO: Pod "pod-configmaps-5c886c3e-f02b-4c50-8559-180335b9b7a8" satisfied condition "Succeeded or Failed" Aug 4 10:46:05.567: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-5c886c3e-f02b-4c50-8559-180335b9b7a8 container configmap-volume-test: STEP: delete the pod Aug 4 10:46:05.676: INFO: Waiting for pod pod-configmaps-5c886c3e-f02b-4c50-8559-180335b9b7a8 to disappear Aug 4 10:46:05.921: INFO: Pod pod-configmaps-5c886c3e-f02b-4c50-8559-180335b9b7a8 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:46:05.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6974" for this suite. • [SLOW TEST:5.421 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":48,"skipped":1032,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:46:06.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:46:51.612: INFO: Waiting up to 3m0s for all (but 0) nodes to 
be ready STEP: Destroying namespace "container-runtime-8240" for this suite. • [SLOW TEST:45.279 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":275,"completed":49,"skipped":1043,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:46:51.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Aug 4 10:46:51.684: INFO: Waiting up to 5m0s for pod "downward-api-5a1301b5-f5b7-474a-8684-05de78fead82" in namespace "downward-api-7377" to be "Succeeded or Failed" Aug 4 10:46:51.700: INFO: Pod "downward-api-5a1301b5-f5b7-474a-8684-05de78fead82": Phase="Pending", Reason="", readiness=false. Elapsed: 16.013723ms Aug 4 10:46:53.724: INFO: Pod "downward-api-5a1301b5-f5b7-474a-8684-05de78fead82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039967905s Aug 4 10:46:55.728: INFO: Pod "downward-api-5a1301b5-f5b7-474a-8684-05de78fead82": Phase="Running", Reason="", readiness=true. Elapsed: 4.044219294s Aug 4 10:46:57.922: INFO: Pod "downward-api-5a1301b5-f5b7-474a-8684-05de78fead82": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.237629322s STEP: Saw pod success Aug 4 10:46:57.922: INFO: Pod "downward-api-5a1301b5-f5b7-474a-8684-05de78fead82" satisfied condition "Succeeded or Failed" Aug 4 10:46:58.131: INFO: Trying to get logs from node kali-worker pod downward-api-5a1301b5-f5b7-474a-8684-05de78fead82 container dapi-container: STEP: delete the pod Aug 4 10:46:59.403: INFO: Waiting for pod downward-api-5a1301b5-f5b7-474a-8684-05de78fead82 to disappear Aug 4 10:46:59.652: INFO: Pod downward-api-5a1301b5-f5b7-474a-8684-05de78fead82 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:46:59.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7377" for this suite. 
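The Downward API test above exposes the container's own resource limits and requests as environment variables via resourceFieldRef. A minimal sketch (the container name dapi-container matches the log; image, variable names and resource values are assumptions):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep -E 'CPU|MEMORY'"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: requests.memory
EOF
kubectl logs downward-api-demo   # after the pod succeeds, the resolved values are in the log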
• [SLOW TEST:8.205 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":275,"completed":50,"skipped":1055,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:46:59.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 4 10:47:00.373: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:47:01.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6983" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":275,"completed":51,"skipped":1073,"failed":0} S ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:47:01.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
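A simple DaemonSet of the kind exercised here can be written by hand as below (name, label and image are assumptions). Because the pod template carries no toleration for the node-role.kubernetes.io/master:NoSchedule taint, the control-plane node is skipped, which is exactly what the following log lines report for kali-control-plane:

cat <<'EOF' | kubectl create -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set-demo
spec:
  selector:
    matchLabels:
      app: daemon-set-demo
  template:
    metadata:
      labels:
        app: daemon-set-demo
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sleep", "3600"]
      # no toleration for node-role.kubernetes.io/master:NoSchedule here,
      # so the tainted control-plane node gets no daemon pod
EOF
kubectl rollout status ds/daemon-set-demo   # waits until a pod is available on every eligible node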
Aug 4 10:47:01.637: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 4 10:47:01.723: INFO: Number of nodes with available pods: 0 Aug 4 10:47:01.723: INFO: Node kali-worker is running more than one daemon pod Aug 4 10:47:02.728: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 4 10:47:02.731: INFO: Number of nodes with available pods: 0 Aug 4 10:47:02.731: INFO: Node kali-worker is running more than one daemon pod Aug 4 10:47:03.731: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 4 10:47:03.735: INFO: Number of nodes with available pods: 0 Aug 4 10:47:03.735: INFO: Node kali-worker is running more than one daemon pod Aug 4 10:47:04.730: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 4 10:47:04.734: INFO: Number of nodes with available pods: 0 Aug 4 10:47:04.734: INFO: Node kali-worker is running more than one daemon pod Aug 4 10:47:05.737: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 4 10:47:05.741: INFO: Number of nodes with available pods: 0 Aug 4 10:47:05.741: INFO: Node kali-worker is running more than one daemon pod Aug 4 10:47:06.737: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 4 10:47:06.741: INFO: Number of nodes with available pods: 0 Aug 4 10:47:06.741: INFO: Node kali-worker is running more than one daemon pod Aug 4 10:47:07.729: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 4 10:47:07.733: INFO: Number of nodes with available pods: 2 Aug 4 10:47:07.733: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
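The "stop a daemon pod, check that it is revived" step that follows amounts to deleting one daemon pod and polling the DaemonSet status until every desired pod is available again. A sketch of that check, assuming a recent client-go (v0.19+ method signatures) and placeholder namespace/pod names rather than the test's generated ones:

// revive_check.go: delete one daemon pod, then wait for the DaemonSet controller
// to recreate it and report full availability.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	ns, podName, dsName := "daemonsets-demo", "daemon-set-xxxxx", "daemon-set"

	// Delete one of the daemon pods; the controller should recreate it.
	if err := cs.CoreV1().Pods(ns).Delete(ctx, podName, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}

	// Poll until every desired daemon pod is available again.
	err = wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
		ds, err := cs.AppsV1().DaemonSets(ns).Get(ctx, dsName, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return ds.Status.NumberAvailable == ds.Status.DesiredNumberScheduled, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("daemon pod was revived")
}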
Aug 4 10:47:07.815: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 4 10:47:07.818: INFO: Number of nodes with available pods: 1 Aug 4 10:47:07.818: INFO: Node kali-worker is running more than one daemon pod Aug 4 10:47:08.823: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 4 10:47:08.826: INFO: Number of nodes with available pods: 1 Aug 4 10:47:08.826: INFO: Node kali-worker is running more than one daemon pod Aug 4 10:47:09.822: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 4 10:47:09.825: INFO: Number of nodes with available pods: 1 Aug 4 10:47:09.825: INFO: Node kali-worker is running more than one daemon pod Aug 4 10:47:10.892: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 4 10:47:11.042: INFO: Number of nodes with available pods: 1 Aug 4 10:47:11.042: INFO: Node kali-worker is running more than one daemon pod Aug 4 10:47:11.839: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 4 10:47:11.851: INFO: Number of nodes with available pods: 1 Aug 4 10:47:11.851: INFO: Node kali-worker is running more than one daemon pod Aug 4 10:47:12.859: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 4 10:47:12.863: INFO: Number of nodes with available pods: 1 Aug 4 10:47:12.863: INFO: Node kali-worker is running more than one daemon pod Aug 4 10:47:13.823: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 4 10:47:13.829: INFO: Number of nodes with available pods: 1 Aug 4 10:47:13.829: INFO: Node kali-worker is running more than one daemon pod Aug 4 10:47:14.839: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 4 10:47:14.842: INFO: Number of nodes with available pods: 1 Aug 4 10:47:14.842: INFO: Node kali-worker is running more than one daemon pod Aug 4 10:47:15.824: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 4 10:47:15.831: INFO: Number of nodes with available pods: 2 Aug 4 10:47:15.831: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3963, will wait for the garbage collector to delete the pods Aug 4 10:47:15.991: INFO: Deleting DaemonSet.extensions daemon-set took: 104.726339ms Aug 4 10:47:16.391: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.262301ms Aug 4 10:47:23.624: INFO: Number of 
nodes with available pods: 0 Aug 4 10:47:23.624: INFO: Number of running nodes: 0, number of available pods: 0 Aug 4 10:47:23.630: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3963/daemonsets","resourceVersion":"6665755"},"items":null} Aug 4 10:47:23.633: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3963/pods","resourceVersion":"6665755"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:47:23.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3963" for this suite. • [SLOW TEST:22.652 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":275,"completed":52,"skipped":1074,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:47:24.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:47:58.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8853" for this suite. STEP: Destroying namespace "nsdeletetest-1540" for this suite. Aug 4 10:47:58.535: INFO: Namespace nsdeletetest-1540 was already deleted STEP: Destroying namespace "nsdeletetest-3484" for this suite. 
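The Namespaces test above relies on namespace deletion cascading to the pods inside it. A sketch of that verification, assuming a recent client-go (v0.19+ method signatures); the kubeconfig path and namespace name are placeholders, not the test's generated ones:

// ns_cleanup_check.go: delete a namespace, wait for it to disappear, then confirm
// no pods remain under that name.
package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, ns := context.TODO(), "nsdeletetest-demo"

	// Ask the API server to delete the namespace; its finalizers remove the pods first.
	if err := cs.CoreV1().Namespaces().Delete(ctx, ns, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}

	// Wait until the namespace object itself is gone.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		_, err := cs.CoreV1().Namespaces().Get(ctx, ns, metav1.GetOptions{})
		if errors.IsNotFound(err) {
			return true, nil
		}
		return false, err
	})
	if err != nil {
		panic(err)
	}

	// Listing pods in the removed namespace should come back empty.
	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pods remaining after delete: %d\n", len(pods.Items))
}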
• [SLOW TEST:34.453 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":275,"completed":53,"skipped":1081,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:47:58.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Aug 4 10:48:05.251: INFO: Successfully updated pod "labelsupdate78bb00d8-7176-493b-96e8-7fb26d8ac42d" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:48:07.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3389" for this suite. 
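The "update labels on modification" test above works because a downwardAPI volume projects pod metadata into a file that the kubelet rewrites when the labels change. A minimal sketch of such a pod, using k8s.io/api types; the name, image and command are illustrative assumptions:

// downward_volume.go: pod whose labels are projected into /etc/podinfo/labels via a
// downwardAPI volume; the file is refreshed after the pod's labels are updated.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "labelsupdate-demo",
			Labels: map[string]string{"key": "value1"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "labels",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}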
• [SLOW TEST:8.806 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":54,"skipped":1098,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:48:07.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-jlpgk in namespace proxy-1588 I0804 10:48:07.648126 7 runners.go:190] Created replication controller with name: proxy-service-jlpgk, namespace: proxy-1588, replica count: 1 I0804 10:48:08.698686 7 runners.go:190] proxy-service-jlpgk Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0804 10:48:09.698897 7 runners.go:190] proxy-service-jlpgk Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0804 10:48:10.699189 7 runners.go:190] proxy-service-jlpgk Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0804 10:48:11.699457 7 runners.go:190] proxy-service-jlpgk Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0804 10:48:12.699670 7 runners.go:190] proxy-service-jlpgk Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0804 10:48:13.699930 7 runners.go:190] proxy-service-jlpgk Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0804 10:48:14.700180 7 runners.go:190] proxy-service-jlpgk Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0804 10:48:15.700623 7 runners.go:190] proxy-service-jlpgk Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 4 10:48:15.704: INFO: setup took 8.181549178s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Aug 4 10:48:15.714: INFO: (0) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:1080/proxy/: test<... 
(200; 9.945126ms) Aug 4 10:48:15.714: INFO: (0) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:160/proxy/: foo (200; 10.01183ms) Aug 4 10:48:15.715: INFO: (0) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l/proxy/: test (200; 10.911464ms) Aug 4 10:48:15.717: INFO: (0) /api/v1/namespaces/proxy-1588/pods/http:proxy-service-jlpgk-kfc2l:1080/proxy/: ... (200; 13.103242ms) Aug 4 10:48:15.717: INFO: (0) /api/v1/namespaces/proxy-1588/pods/http:proxy-service-jlpgk-kfc2l:162/proxy/: bar (200; 12.91402ms) Aug 4 10:48:15.718: INFO: (0) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:162/proxy/: bar (200; 13.314903ms) Aug 4 10:48:15.718: INFO: (0) /api/v1/namespaces/proxy-1588/pods/http:proxy-service-jlpgk-kfc2l:160/proxy/: foo (200; 13.632329ms) Aug 4 10:48:15.718: INFO: (0) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:462/proxy/: tls qux (200; 14.076555ms) Aug 4 10:48:15.718: INFO: (0) /api/v1/namespaces/proxy-1588/services/http:proxy-service-jlpgk:portname2/proxy/: bar (200; 14.354574ms) Aug 4 10:48:15.718: INFO: (0) /api/v1/namespaces/proxy-1588/services/proxy-service-jlpgk:portname2/proxy/: bar (200; 14.401735ms) Aug 4 10:48:15.718: INFO: (0) /api/v1/namespaces/proxy-1588/services/proxy-service-jlpgk:portname1/proxy/: foo (200; 14.350229ms) Aug 4 10:48:15.719: INFO: (0) /api/v1/namespaces/proxy-1588/services/http:proxy-service-jlpgk:portname1/proxy/: foo (200; 14.696701ms) Aug 4 10:48:15.722: INFO: (0) /api/v1/namespaces/proxy-1588/services/https:proxy-service-jlpgk:tlsportname2/proxy/: tls qux (200; 17.451149ms) Aug 4 10:48:15.722: INFO: (0) /api/v1/namespaces/proxy-1588/services/https:proxy-service-jlpgk:tlsportname1/proxy/: tls baz (200; 18.050601ms) Aug 4 10:48:15.722: INFO: (0) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:460/proxy/: tls baz (200; 18.103303ms) Aug 4 10:48:15.722: INFO: (0) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:443/proxy/: ... (200; 4.20861ms) Aug 4 10:48:15.727: INFO: (1) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:1080/proxy/: test<... (200; 4.898613ms) Aug 4 10:48:15.727: INFO: (1) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l/proxy/: test (200; 4.971832ms) Aug 4 10:48:15.728: INFO: (1) /api/v1/namespaces/proxy-1588/pods/http:proxy-service-jlpgk-kfc2l:162/proxy/: bar (200; 5.218823ms) Aug 4 10:48:15.728: INFO: (1) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:462/proxy/: tls qux (200; 5.471414ms) Aug 4 10:48:15.728: INFO: (1) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:443/proxy/: ... 
(200; 2.648557ms) Aug 4 10:48:15.734: INFO: (2) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:162/proxy/: bar (200; 4.276444ms) Aug 4 10:48:15.735: INFO: (2) /api/v1/namespaces/proxy-1588/pods/http:proxy-service-jlpgk-kfc2l:160/proxy/: foo (200; 4.284898ms) Aug 4 10:48:15.735: INFO: (2) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:460/proxy/: tls baz (200; 4.446069ms) Aug 4 10:48:15.735: INFO: (2) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:160/proxy/: foo (200; 4.324591ms) Aug 4 10:48:15.735: INFO: (2) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:462/proxy/: tls qux (200; 4.488131ms) Aug 4 10:48:15.735: INFO: (2) /api/v1/namespaces/proxy-1588/services/http:proxy-service-jlpgk:portname2/proxy/: bar (200; 4.566193ms) Aug 4 10:48:15.735: INFO: (2) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:443/proxy/: test (200; 4.919087ms) Aug 4 10:48:15.736: INFO: (2) /api/v1/namespaces/proxy-1588/services/https:proxy-service-jlpgk:tlsportname2/proxy/: tls qux (200; 6.161735ms) Aug 4 10:48:15.736: INFO: (2) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:1080/proxy/: test<... (200; 6.26157ms) Aug 4 10:48:15.737: INFO: (2) /api/v1/namespaces/proxy-1588/services/http:proxy-service-jlpgk:portname1/proxy/: foo (200; 6.301057ms) Aug 4 10:48:15.737: INFO: (2) /api/v1/namespaces/proxy-1588/services/https:proxy-service-jlpgk:tlsportname1/proxy/: tls baz (200; 6.37132ms) Aug 4 10:48:15.740: INFO: (3) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:160/proxy/: foo (200; 3.104331ms) Aug 4 10:48:15.743: INFO: (3) /api/v1/namespaces/proxy-1588/services/https:proxy-service-jlpgk:tlsportname2/proxy/: tls qux (200; 6.044596ms) Aug 4 10:48:15.743: INFO: (3) /api/v1/namespaces/proxy-1588/pods/http:proxy-service-jlpgk-kfc2l:160/proxy/: foo (200; 6.066785ms) Aug 4 10:48:15.743: INFO: (3) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:162/proxy/: bar (200; 6.038512ms) Aug 4 10:48:15.743: INFO: (3) /api/v1/namespaces/proxy-1588/pods/http:proxy-service-jlpgk-kfc2l:1080/proxy/: ... (200; 6.057791ms) Aug 4 10:48:15.743: INFO: (3) /api/v1/namespaces/proxy-1588/pods/http:proxy-service-jlpgk-kfc2l:162/proxy/: bar (200; 6.038986ms) Aug 4 10:48:15.743: INFO: (3) /api/v1/namespaces/proxy-1588/services/proxy-service-jlpgk:portname1/proxy/: foo (200; 6.660876ms) Aug 4 10:48:15.743: INFO: (3) /api/v1/namespaces/proxy-1588/services/http:proxy-service-jlpgk:portname1/proxy/: foo (200; 6.66849ms) Aug 4 10:48:15.744: INFO: (3) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:1080/proxy/: test<... 
(200; 7.001683ms) Aug 4 10:48:15.744: INFO: (3) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:460/proxy/: tls baz (200; 6.99233ms) Aug 4 10:48:15.744: INFO: (3) /api/v1/namespaces/proxy-1588/services/proxy-service-jlpgk:portname2/proxy/: bar (200; 6.985885ms) Aug 4 10:48:15.744: INFO: (3) /api/v1/namespaces/proxy-1588/services/https:proxy-service-jlpgk:tlsportname1/proxy/: tls baz (200; 7.156748ms) Aug 4 10:48:15.744: INFO: (3) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l/proxy/: test (200; 7.254474ms) Aug 4 10:48:15.744: INFO: (3) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:462/proxy/: tls qux (200; 7.17924ms) Aug 4 10:48:15.744: INFO: (3) /api/v1/namespaces/proxy-1588/services/http:proxy-service-jlpgk:portname2/proxy/: bar (200; 7.227224ms) Aug 4 10:48:15.744: INFO: (3) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:443/proxy/: test<... (200; 10.227901ms) Aug 4 10:48:15.755: INFO: (4) /api/v1/namespaces/proxy-1588/pods/http:proxy-service-jlpgk-kfc2l:1080/proxy/: ... (200; 10.367262ms) Aug 4 10:48:15.755: INFO: (4) /api/v1/namespaces/proxy-1588/services/http:proxy-service-jlpgk:portname1/proxy/: foo (200; 10.700121ms) Aug 4 10:48:15.755: INFO: (4) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:162/proxy/: bar (200; 10.680786ms) Aug 4 10:48:15.755: INFO: (4) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l/proxy/: test (200; 10.747572ms) Aug 4 10:48:15.755: INFO: (4) /api/v1/namespaces/proxy-1588/services/proxy-service-jlpgk:portname1/proxy/: foo (200; 10.526403ms) Aug 4 10:48:15.755: INFO: (4) /api/v1/namespaces/proxy-1588/services/proxy-service-jlpgk:portname2/proxy/: bar (200; 11.186715ms) Aug 4 10:48:15.756: INFO: (4) /api/v1/namespaces/proxy-1588/services/http:proxy-service-jlpgk:portname2/proxy/: bar (200; 11.467ms) Aug 4 10:48:15.756: INFO: (4) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:462/proxy/: tls qux (200; 11.549736ms) Aug 4 10:48:15.756: INFO: (4) /api/v1/namespaces/proxy-1588/services/https:proxy-service-jlpgk:tlsportname2/proxy/: tls qux (200; 11.398562ms) Aug 4 10:48:15.756: INFO: (4) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:443/proxy/: test (200; 4.780042ms) Aug 4 10:48:15.761: INFO: (5) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:1080/proxy/: test<... (200; 4.716434ms) Aug 4 10:48:15.761: INFO: (5) /api/v1/namespaces/proxy-1588/services/http:proxy-service-jlpgk:portname1/proxy/: foo (200; 5.194835ms) Aug 4 10:48:15.761: INFO: (5) /api/v1/namespaces/proxy-1588/pods/http:proxy-service-jlpgk-kfc2l:1080/proxy/: ... 
(200; 5.184126ms) Aug 4 10:48:15.761: INFO: (5) /api/v1/namespaces/proxy-1588/services/proxy-service-jlpgk:portname2/proxy/: bar (200; 5.201057ms) Aug 4 10:48:15.761: INFO: (5) /api/v1/namespaces/proxy-1588/services/https:proxy-service-jlpgk:tlsportname2/proxy/: tls qux (200; 5.126713ms) Aug 4 10:48:15.761: INFO: (5) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:443/proxy/: test (200; 2.453311ms) Aug 4 10:48:15.764: INFO: (6) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:160/proxy/: foo (200; 2.93294ms) Aug 4 10:48:15.764: INFO: (6) /api/v1/namespaces/proxy-1588/pods/http:proxy-service-jlpgk-kfc2l:162/proxy/: bar (200; 2.930829ms) Aug 4 10:48:15.764: INFO: (6) /api/v1/namespaces/proxy-1588/pods/http:proxy-service-jlpgk-kfc2l:160/proxy/: foo (200; 2.994736ms) Aug 4 10:48:15.764: INFO: (6) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:162/proxy/: bar (200; 3.051298ms) Aug 4 10:48:15.765: INFO: (6) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:443/proxy/: ... (200; 3.395543ms) Aug 4 10:48:15.765: INFO: (6) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:1080/proxy/: test<... (200; 3.666631ms) Aug 4 10:48:15.765: INFO: (6) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:460/proxy/: tls baz (200; 3.652543ms) Aug 4 10:48:15.765: INFO: (6) /api/v1/namespaces/proxy-1588/services/proxy-service-jlpgk:portname2/proxy/: bar (200; 3.724045ms) Aug 4 10:48:15.766: INFO: (6) /api/v1/namespaces/proxy-1588/services/http:proxy-service-jlpgk:portname1/proxy/: foo (200; 4.101692ms) Aug 4 10:48:15.766: INFO: (6) /api/v1/namespaces/proxy-1588/services/proxy-service-jlpgk:portname1/proxy/: foo (200; 4.422274ms) Aug 4 10:48:15.766: INFO: (6) /api/v1/namespaces/proxy-1588/services/https:proxy-service-jlpgk:tlsportname1/proxy/: tls baz (200; 4.469798ms) Aug 4 10:48:15.766: INFO: (6) /api/v1/namespaces/proxy-1588/services/http:proxy-service-jlpgk:portname2/proxy/: bar (200; 4.747733ms) Aug 4 10:48:15.766: INFO: (6) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:462/proxy/: tls qux (200; 4.934193ms) Aug 4 10:48:15.766: INFO: (6) /api/v1/namespaces/proxy-1588/services/https:proxy-service-jlpgk:tlsportname2/proxy/: tls qux (200; 5.039126ms) Aug 4 10:48:15.770: INFO: (7) /api/v1/namespaces/proxy-1588/services/https:proxy-service-jlpgk:tlsportname2/proxy/: tls qux (200; 3.26784ms) Aug 4 10:48:15.770: INFO: (7) /api/v1/namespaces/proxy-1588/services/proxy-service-jlpgk:portname1/proxy/: foo (200; 3.57548ms) Aug 4 10:48:15.770: INFO: (7) /api/v1/namespaces/proxy-1588/services/http:proxy-service-jlpgk:portname1/proxy/: foo (200; 3.574449ms) Aug 4 10:48:15.770: INFO: (7) /api/v1/namespaces/proxy-1588/pods/http:proxy-service-jlpgk-kfc2l:162/proxy/: bar (200; 3.584486ms) Aug 4 10:48:15.770: INFO: (7) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:443/proxy/: ... (200; 3.964987ms) Aug 4 10:48:15.771: INFO: (7) /api/v1/namespaces/proxy-1588/services/http:proxy-service-jlpgk:portname2/proxy/: bar (200; 4.333506ms) Aug 4 10:48:15.771: INFO: (7) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:1080/proxy/: test<... 
(200; 4.428525ms) Aug 4 10:48:15.771: INFO: (7) /api/v1/namespaces/proxy-1588/services/https:proxy-service-jlpgk:tlsportname1/proxy/: tls baz (200; 4.421866ms) Aug 4 10:48:15.771: INFO: (7) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l/proxy/: test (200; 4.452815ms) Aug 4 10:48:15.771: INFO: (7) /api/v1/namespaces/proxy-1588/pods/http:proxy-service-jlpgk-kfc2l:160/proxy/: foo (200; 4.544456ms) Aug 4 10:48:15.771: INFO: (7) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:162/proxy/: bar (200; 4.539754ms) Aug 4 10:48:15.771: INFO: (7) /api/v1/namespaces/proxy-1588/services/proxy-service-jlpgk:portname2/proxy/: bar (200; 4.502112ms) Aug 4 10:48:15.771: INFO: (7) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:160/proxy/: foo (200; 4.540582ms) Aug 4 10:48:15.771: INFO: (7) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:462/proxy/: tls qux (200; 4.59346ms) Aug 4 10:48:15.775: INFO: (8) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:162/proxy/: bar (200; 3.315772ms) Aug 4 10:48:15.775: INFO: (8) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:160/proxy/: foo (200; 3.339767ms) Aug 4 10:48:15.775: INFO: (8) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:443/proxy/: test<... (200; 3.955424ms) Aug 4 10:48:15.775: INFO: (8) /api/v1/namespaces/proxy-1588/services/proxy-service-jlpgk:portname2/proxy/: bar (200; 4.075023ms) Aug 4 10:48:15.775: INFO: (8) /api/v1/namespaces/proxy-1588/pods/http:proxy-service-jlpgk-kfc2l:162/proxy/: bar (200; 4.077826ms) Aug 4 10:48:15.776: INFO: (8) /api/v1/namespaces/proxy-1588/services/proxy-service-jlpgk:portname1/proxy/: foo (200; 4.255746ms) Aug 4 10:48:15.776: INFO: (8) /api/v1/namespaces/proxy-1588/pods/http:proxy-service-jlpgk-kfc2l:160/proxy/: foo (200; 4.246418ms) Aug 4 10:48:15.776: INFO: (8) /api/v1/namespaces/proxy-1588/services/http:proxy-service-jlpgk:portname2/proxy/: bar (200; 4.34895ms) Aug 4 10:48:15.776: INFO: (8) /api/v1/namespaces/proxy-1588/services/https:proxy-service-jlpgk:tlsportname2/proxy/: tls qux (200; 4.482031ms) Aug 4 10:48:15.776: INFO: (8) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l/proxy/: test (200; 4.554336ms) Aug 4 10:48:15.776: INFO: (8) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:460/proxy/: tls baz (200; 4.635156ms) Aug 4 10:48:15.776: INFO: (8) /api/v1/namespaces/proxy-1588/pods/http:proxy-service-jlpgk-kfc2l:1080/proxy/: ... (200; 4.632949ms) Aug 4 10:48:15.776: INFO: (8) /api/v1/namespaces/proxy-1588/services/https:proxy-service-jlpgk:tlsportname1/proxy/: tls baz (200; 4.70968ms) Aug 4 10:48:15.776: INFO: (8) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:462/proxy/: tls qux (200; 4.798672ms) Aug 4 10:48:15.779: INFO: (9) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:160/proxy/: foo (200; 2.984218ms) Aug 4 10:48:15.779: INFO: (9) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l/proxy/: test (200; 3.090627ms) Aug 4 10:48:15.779: INFO: (9) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:162/proxy/: bar (200; 3.037596ms) Aug 4 10:48:15.779: INFO: (9) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:443/proxy/: ... 
(200; 3.503812ms) Aug 4 10:48:15.780: INFO: (9) /api/v1/namespaces/proxy-1588/pods/http:proxy-service-jlpgk-kfc2l:160/proxy/: foo (200; 3.811062ms) Aug 4 10:48:15.780: INFO: (9) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:460/proxy/: tls baz (200; 3.750594ms) Aug 4 10:48:15.780: INFO: (9) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:1080/proxy/: test<... (200; 3.817028ms) Aug 4 10:48:15.781: INFO: (9) /api/v1/namespaces/proxy-1588/services/https:proxy-service-jlpgk:tlsportname1/proxy/: tls baz (200; 5.304219ms) Aug 4 10:48:15.781: INFO: (9) /api/v1/namespaces/proxy-1588/services/http:proxy-service-jlpgk:portname1/proxy/: foo (200; 5.399324ms) Aug 4 10:48:15.781: INFO: (9) /api/v1/namespaces/proxy-1588/services/http:proxy-service-jlpgk:portname2/proxy/: bar (200; 5.308643ms) Aug 4 10:48:15.781: INFO: (9) /api/v1/namespaces/proxy-1588/pods/http:proxy-service-jlpgk-kfc2l:162/proxy/: bar (200; 5.39169ms) Aug 4 10:48:15.781: INFO: (9) /api/v1/namespaces/proxy-1588/services/https:proxy-service-jlpgk:tlsportname2/proxy/: tls qux (200; 5.305196ms) Aug 4 10:48:15.781: INFO: (9) /api/v1/namespaces/proxy-1588/services/proxy-service-jlpgk:portname1/proxy/: foo (200; 5.379494ms) Aug 4 10:48:15.781: INFO: (9) /api/v1/namespaces/proxy-1588/services/proxy-service-jlpgk:portname2/proxy/: bar (200; 5.411303ms) Aug 4 10:48:15.782: INFO: (9) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:462/proxy/: tls qux (200; 5.463276ms) Aug 4 10:48:15.788: INFO: (10) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l/proxy/: test (200; 6.611183ms) Aug 4 10:48:15.788: INFO: (10) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:462/proxy/: tls qux (200; 6.615754ms) Aug 4 10:48:15.788: INFO: (10) /api/v1/namespaces/proxy-1588/pods/http:proxy-service-jlpgk-kfc2l:160/proxy/: foo (200; 6.629474ms) Aug 4 10:48:15.789: INFO: (10) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:460/proxy/: tls baz (200; 7.151505ms) Aug 4 10:48:15.789: INFO: (10) /api/v1/namespaces/proxy-1588/pods/http:proxy-service-jlpgk-kfc2l:162/proxy/: bar (200; 7.116443ms) Aug 4 10:48:15.789: INFO: (10) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:1080/proxy/: test<... (200; 7.238147ms) Aug 4 10:48:15.789: INFO: (10) /api/v1/namespaces/proxy-1588/pods/http:proxy-service-jlpgk-kfc2l:1080/proxy/: ... (200; 7.122447ms) Aug 4 10:48:15.789: INFO: (10) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:162/proxy/: bar (200; 7.133731ms) Aug 4 10:48:15.789: INFO: (10) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:443/proxy/: test (200; 6.691372ms) Aug 4 10:48:15.798: INFO: (11) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:160/proxy/: foo (200; 6.893242ms) Aug 4 10:48:15.798: INFO: (11) /api/v1/namespaces/proxy-1588/pods/http:proxy-service-jlpgk-kfc2l:162/proxy/: bar (200; 7.011495ms) Aug 4 10:48:15.798: INFO: (11) /api/v1/namespaces/proxy-1588/pods/http:proxy-service-jlpgk-kfc2l:1080/proxy/: ... (200; 7.092528ms) Aug 4 10:48:15.798: INFO: (11) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:1080/proxy/: test<... 
(200; 7.157436ms) Aug 4 10:48:15.799: INFO: (11) /api/v1/namespaces/proxy-1588/services/https:proxy-service-jlpgk:tlsportname2/proxy/: tls qux (200; 7.973061ms) Aug 4 10:48:15.799: INFO: (11) /api/v1/namespaces/proxy-1588/services/proxy-service-jlpgk:portname1/proxy/: foo (200; 8.111041ms) Aug 4 10:48:15.799: INFO: (11) /api/v1/namespaces/proxy-1588/services/http:proxy-service-jlpgk:portname1/proxy/: foo (200; 8.106813ms) Aug 4 10:48:15.799: INFO: (11) /api/v1/namespaces/proxy-1588/services/https:proxy-service-jlpgk:tlsportname1/proxy/: tls baz (200; 8.284087ms) Aug 4 10:48:15.799: INFO: (11) /api/v1/namespaces/proxy-1588/services/proxy-service-jlpgk:portname2/proxy/: bar (200; 8.367415ms) Aug 4 10:48:15.799: INFO: (11) /api/v1/namespaces/proxy-1588/services/http:proxy-service-jlpgk:portname2/proxy/: bar (200; 8.350096ms) Aug 4 10:48:15.806: INFO: (12) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:162/proxy/: bar (200; 6.618649ms) Aug 4 10:48:15.806: INFO: (12) /api/v1/namespaces/proxy-1588/pods/http:proxy-service-jlpgk-kfc2l:160/proxy/: foo (200; 6.493371ms) Aug 4 10:48:15.806: INFO: (12) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:1080/proxy/: test<... (200; 6.591212ms) Aug 4 10:48:15.806: INFO: (12) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l/proxy/: test (200; 6.571671ms) Aug 4 10:48:15.807: INFO: (12) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:462/proxy/: tls qux (200; 7.316034ms) Aug 4 10:48:15.807: INFO: (12) /api/v1/namespaces/proxy-1588/pods/http:proxy-service-jlpgk-kfc2l:1080/proxy/: ... (200; 7.795424ms) Aug 4 10:48:15.807: INFO: (12) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:443/proxy/: ... (200; 11.067693ms) Aug 4 10:48:15.820: INFO: (13) /api/v1/namespaces/proxy-1588/pods/http:proxy-service-jlpgk-kfc2l:160/proxy/: foo (200; 11.144533ms) Aug 4 10:48:15.820: INFO: (13) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:162/proxy/: bar (200; 11.107852ms) Aug 4 10:48:15.820: INFO: (13) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:443/proxy/: test (200; 11.268825ms) Aug 4 10:48:15.820: INFO: (13) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:1080/proxy/: test<... 
(200; 11.434087ms) Aug 4 10:48:15.822: INFO: (13) /api/v1/namespaces/proxy-1588/services/proxy-service-jlpgk:portname1/proxy/: foo (200; 13.027424ms) Aug 4 10:48:15.822: INFO: (13) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:160/proxy/: foo (200; 13.143284ms) Aug 4 10:48:15.822: INFO: (13) /api/v1/namespaces/proxy-1588/services/proxy-service-jlpgk:portname2/proxy/: bar (200; 13.143825ms) Aug 4 10:48:15.822: INFO: (13) /api/v1/namespaces/proxy-1588/services/http:proxy-service-jlpgk:portname2/proxy/: bar (200; 13.196568ms) Aug 4 10:48:15.822: INFO: (13) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:460/proxy/: tls baz (200; 13.297182ms) Aug 4 10:48:15.822: INFO: (13) /api/v1/namespaces/proxy-1588/services/https:proxy-service-jlpgk:tlsportname2/proxy/: tls qux (200; 13.374194ms) Aug 4 10:48:15.822: INFO: (13) /api/v1/namespaces/proxy-1588/services/http:proxy-service-jlpgk:portname1/proxy/: foo (200; 13.288969ms) Aug 4 10:48:15.822: INFO: (13) /api/v1/namespaces/proxy-1588/services/https:proxy-service-jlpgk:tlsportname1/proxy/: tls baz (200; 13.304488ms) Aug 4 10:48:15.822: INFO: (13) /api/v1/namespaces/proxy-1588/pods/http:proxy-service-jlpgk-kfc2l:162/proxy/: bar (200; 13.309878ms) Aug 4 10:48:15.827: INFO: (14) /api/v1/namespaces/proxy-1588/services/http:proxy-service-jlpgk:portname2/proxy/: bar (200; 5.366846ms) Aug 4 10:48:15.827: INFO: (14) /api/v1/namespaces/proxy-1588/services/http:proxy-service-jlpgk:portname1/proxy/: foo (200; 5.387476ms) Aug 4 10:48:15.828: INFO: (14) /api/v1/namespaces/proxy-1588/services/https:proxy-service-jlpgk:tlsportname1/proxy/: tls baz (200; 5.493722ms) Aug 4 10:48:15.828: INFO: (14) /api/v1/namespaces/proxy-1588/pods/http:proxy-service-jlpgk-kfc2l:160/proxy/: foo (200; 5.403481ms) Aug 4 10:48:15.828: INFO: (14) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:162/proxy/: bar (200; 5.527221ms) Aug 4 10:48:15.828: INFO: (14) /api/v1/namespaces/proxy-1588/pods/http:proxy-service-jlpgk-kfc2l:1080/proxy/: ... (200; 6.086503ms) Aug 4 10:48:15.828: INFO: (14) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:460/proxy/: tls baz (200; 6.048889ms) Aug 4 10:48:15.831: INFO: (14) /api/v1/namespaces/proxy-1588/services/proxy-service-jlpgk:portname1/proxy/: foo (200; 8.548633ms) Aug 4 10:48:15.831: INFO: (14) /api/v1/namespaces/proxy-1588/services/proxy-service-jlpgk:portname2/proxy/: bar (200; 8.782064ms) Aug 4 10:48:15.831: INFO: (14) /api/v1/namespaces/proxy-1588/pods/http:proxy-service-jlpgk-kfc2l:162/proxy/: bar (200; 8.948218ms) Aug 4 10:48:15.831: INFO: (14) /api/v1/namespaces/proxy-1588/services/https:proxy-service-jlpgk:tlsportname2/proxy/: tls qux (200; 8.899714ms) Aug 4 10:48:15.831: INFO: (14) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l/proxy/: test (200; 8.979836ms) Aug 4 10:48:15.831: INFO: (14) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:1080/proxy/: test<... (200; 8.877758ms) Aug 4 10:48:15.831: INFO: (14) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:160/proxy/: foo (200; 8.94874ms) Aug 4 10:48:15.831: INFO: (14) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:443/proxy/: test<... (200; 3.678832ms) Aug 4 10:48:15.835: INFO: (15) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:462/proxy/: tls qux (200; 3.705691ms) Aug 4 10:48:15.835: INFO: (15) /api/v1/namespaces/proxy-1588/pods/http:proxy-service-jlpgk-kfc2l:1080/proxy/: ... 
(200; 3.956049ms) Aug 4 10:48:15.836: INFO: (15) /api/v1/namespaces/proxy-1588/services/proxy-service-jlpgk:portname2/proxy/: bar (200; 4.516845ms) Aug 4 10:48:15.836: INFO: (15) /api/v1/namespaces/proxy-1588/services/https:proxy-service-jlpgk:tlsportname2/proxy/: tls qux (200; 4.643812ms) Aug 4 10:48:15.836: INFO: (15) /api/v1/namespaces/proxy-1588/pods/http:proxy-service-jlpgk-kfc2l:160/proxy/: foo (200; 4.549372ms) Aug 4 10:48:15.836: INFO: (15) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l/proxy/: test (200; 4.554928ms) Aug 4 10:48:15.836: INFO: (15) /api/v1/namespaces/proxy-1588/services/http:proxy-service-jlpgk:portname2/proxy/: bar (200; 4.612407ms) Aug 4 10:48:15.836: INFO: (15) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:162/proxy/: bar (200; 4.568504ms) Aug 4 10:48:15.836: INFO: (15) /api/v1/namespaces/proxy-1588/services/https:proxy-service-jlpgk:tlsportname1/proxy/: tls baz (200; 4.614959ms) Aug 4 10:48:15.836: INFO: (15) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:160/proxy/: foo (200; 4.599304ms) Aug 4 10:48:15.836: INFO: (15) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:460/proxy/: tls baz (200; 4.814799ms) Aug 4 10:48:15.836: INFO: (15) /api/v1/namespaces/proxy-1588/services/proxy-service-jlpgk:portname1/proxy/: foo (200; 4.762645ms) Aug 4 10:48:15.836: INFO: (15) /api/v1/namespaces/proxy-1588/services/http:proxy-service-jlpgk:portname1/proxy/: foo (200; 4.79878ms) Aug 4 10:48:15.836: INFO: (15) /api/v1/namespaces/proxy-1588/pods/http:proxy-service-jlpgk-kfc2l:162/proxy/: bar (200; 4.773918ms) Aug 4 10:48:15.840: INFO: (16) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:443/proxy/: test<... (200; 3.778266ms) Aug 4 10:48:15.840: INFO: (16) /api/v1/namespaces/proxy-1588/services/https:proxy-service-jlpgk:tlsportname1/proxy/: tls baz (200; 3.990865ms) Aug 4 10:48:15.841: INFO: (16) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:462/proxy/: tls qux (200; 4.430536ms) Aug 4 10:48:15.841: INFO: (16) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l/proxy/: test (200; 4.442321ms) Aug 4 10:48:15.841: INFO: (16) /api/v1/namespaces/proxy-1588/pods/http:proxy-service-jlpgk-kfc2l:1080/proxy/: ... 
(200; 5.084142ms) Aug 4 10:48:15.842: INFO: (16) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:160/proxy/: foo (200; 5.336403ms) Aug 4 10:48:15.842: INFO: (16) /api/v1/namespaces/proxy-1588/services/https:proxy-service-jlpgk:tlsportname2/proxy/: tls qux (200; 5.380114ms) Aug 4 10:48:15.842: INFO: (16) /api/v1/namespaces/proxy-1588/pods/http:proxy-service-jlpgk-kfc2l:160/proxy/: foo (200; 5.290719ms) Aug 4 10:48:15.842: INFO: (16) /api/v1/namespaces/proxy-1588/pods/http:proxy-service-jlpgk-kfc2l:162/proxy/: bar (200; 5.483176ms) Aug 4 10:48:15.842: INFO: (16) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:460/proxy/: tls baz (200; 5.488372ms) Aug 4 10:48:15.842: INFO: (16) /api/v1/namespaces/proxy-1588/services/proxy-service-jlpgk:portname1/proxy/: foo (200; 5.445371ms) Aug 4 10:48:15.842: INFO: (16) /api/v1/namespaces/proxy-1588/services/proxy-service-jlpgk:portname2/proxy/: bar (200; 5.513587ms) Aug 4 10:48:15.842: INFO: (16) /api/v1/namespaces/proxy-1588/services/http:proxy-service-jlpgk:portname2/proxy/: bar (200; 5.669046ms) Aug 4 10:48:15.842: INFO: (16) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:162/proxy/: bar (200; 5.529065ms) Aug 4 10:48:15.842: INFO: (16) /api/v1/namespaces/proxy-1588/services/http:proxy-service-jlpgk:portname1/proxy/: foo (200; 5.780979ms) Aug 4 10:48:15.846: INFO: (17) /api/v1/namespaces/proxy-1588/pods/http:proxy-service-jlpgk-kfc2l:160/proxy/: foo (200; 3.902112ms) Aug 4 10:48:15.846: INFO: (17) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l/proxy/: test (200; 4.043726ms) Aug 4 10:48:15.846: INFO: (17) /api/v1/namespaces/proxy-1588/pods/http:proxy-service-jlpgk-kfc2l:162/proxy/: bar (200; 4.358764ms) Aug 4 10:48:15.846: INFO: (17) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:443/proxy/: ... (200; 4.490559ms) Aug 4 10:48:15.847: INFO: (17) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:460/proxy/: tls baz (200; 4.984716ms) Aug 4 10:48:15.848: INFO: (17) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:162/proxy/: bar (200; 5.62131ms) Aug 4 10:48:15.848: INFO: (17) /api/v1/namespaces/proxy-1588/services/proxy-service-jlpgk:portname1/proxy/: foo (200; 5.352746ms) Aug 4 10:48:15.848: INFO: (17) /api/v1/namespaces/proxy-1588/services/proxy-service-jlpgk:portname2/proxy/: bar (200; 5.354889ms) Aug 4 10:48:15.848: INFO: (17) /api/v1/namespaces/proxy-1588/services/https:proxy-service-jlpgk:tlsportname2/proxy/: tls qux (200; 5.480075ms) Aug 4 10:48:15.848: INFO: (17) /api/v1/namespaces/proxy-1588/services/https:proxy-service-jlpgk:tlsportname1/proxy/: tls baz (200; 5.815624ms) Aug 4 10:48:15.848: INFO: (17) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:1080/proxy/: test<... 
(200; 5.442796ms) Aug 4 10:48:15.848: INFO: (17) /api/v1/namespaces/proxy-1588/services/http:proxy-service-jlpgk:portname1/proxy/: foo (200; 5.562879ms) Aug 4 10:48:15.848: INFO: (17) /api/v1/namespaces/proxy-1588/services/http:proxy-service-jlpgk:portname2/proxy/: bar (200; 5.701423ms) Aug 4 10:48:15.849: INFO: (17) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:462/proxy/: tls qux (200; 6.490742ms) Aug 4 10:48:15.853: INFO: (18) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:462/proxy/: tls qux (200; 4.074148ms) Aug 4 10:48:15.853: INFO: (18) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:162/proxy/: bar (200; 4.107908ms) Aug 4 10:48:15.853: INFO: (18) /api/v1/namespaces/proxy-1588/services/proxy-service-jlpgk:portname2/proxy/: bar (200; 4.047464ms) Aug 4 10:48:15.853: INFO: (18) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l/proxy/: test (200; 4.111596ms) Aug 4 10:48:15.853: INFO: (18) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:443/proxy/: test<... (200; 4.135943ms) Aug 4 10:48:15.853: INFO: (18) /api/v1/namespaces/proxy-1588/services/proxy-service-jlpgk:portname1/proxy/: foo (200; 4.270761ms) Aug 4 10:48:15.853: INFO: (18) /api/v1/namespaces/proxy-1588/services/http:proxy-service-jlpgk:portname2/proxy/: bar (200; 4.404343ms) Aug 4 10:48:15.853: INFO: (18) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:160/proxy/: foo (200; 4.319683ms) Aug 4 10:48:15.853: INFO: (18) /api/v1/namespaces/proxy-1588/pods/http:proxy-service-jlpgk-kfc2l:1080/proxy/: ... (200; 4.425557ms) Aug 4 10:48:15.853: INFO: (18) /api/v1/namespaces/proxy-1588/pods/http:proxy-service-jlpgk-kfc2l:162/proxy/: bar (200; 4.345768ms) Aug 4 10:48:15.853: INFO: (18) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:460/proxy/: tls baz (200; 4.417161ms) Aug 4 10:48:15.853: INFO: (18) /api/v1/namespaces/proxy-1588/pods/http:proxy-service-jlpgk-kfc2l:160/proxy/: foo (200; 4.767361ms) Aug 4 10:48:15.854: INFO: (18) /api/v1/namespaces/proxy-1588/services/http:proxy-service-jlpgk:portname1/proxy/: foo (200; 4.8629ms) Aug 4 10:48:15.854: INFO: (18) /api/v1/namespaces/proxy-1588/services/https:proxy-service-jlpgk:tlsportname1/proxy/: tls baz (200; 4.955045ms) Aug 4 10:48:15.854: INFO: (18) /api/v1/namespaces/proxy-1588/services/https:proxy-service-jlpgk:tlsportname2/proxy/: tls qux (200; 4.999921ms) Aug 4 10:48:15.856: INFO: (19) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l/proxy/: test (200; 2.119292ms) Aug 4 10:48:15.857: INFO: (19) /api/v1/namespaces/proxy-1588/pods/http:proxy-service-jlpgk-kfc2l:160/proxy/: foo (200; 2.751907ms) Aug 4 10:48:15.857: INFO: (19) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:160/proxy/: foo (200; 3.057716ms) Aug 4 10:48:15.857: INFO: (19) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:1080/proxy/: test<... (200; 3.133958ms) Aug 4 10:48:15.857: INFO: (19) /api/v1/namespaces/proxy-1588/pods/http:proxy-service-jlpgk-kfc2l:162/proxy/: bar (200; 2.960997ms) Aug 4 10:48:15.857: INFO: (19) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:443/proxy/: ... 
(200; 4.866814ms) Aug 4 10:48:15.859: INFO: (19) /api/v1/namespaces/proxy-1588/pods/proxy-service-jlpgk-kfc2l:162/proxy/: bar (200; 4.858983ms) Aug 4 10:48:15.859: INFO: (19) /api/v1/namespaces/proxy-1588/services/http:proxy-service-jlpgk:portname1/proxy/: foo (200; 4.898628ms) Aug 4 10:48:15.859: INFO: (19) /api/v1/namespaces/proxy-1588/services/http:proxy-service-jlpgk:portname2/proxy/: bar (200; 4.896405ms) Aug 4 10:48:15.859: INFO: (19) /api/v1/namespaces/proxy-1588/services/https:proxy-service-jlpgk:tlsportname1/proxy/: tls baz (200; 4.902868ms) Aug 4 10:48:15.859: INFO: (19) /api/v1/namespaces/proxy-1588/pods/https:proxy-service-jlpgk-kfc2l:462/proxy/: tls qux (200; 4.903286ms) Aug 4 10:48:15.859: INFO: (19) /api/v1/namespaces/proxy-1588/services/proxy-service-jlpgk:portname2/proxy/: bar (200; 5.250431ms) STEP: deleting ReplicationController proxy-service-jlpgk in namespace proxy-1588, will wait for the garbage collector to delete the pods Aug 4 10:48:15.918: INFO: Deleting ReplicationController proxy-service-jlpgk took: 6.750852ms Aug 4 10:48:16.218: INFO: Terminating ReplicationController proxy-service-jlpgk pods took: 300.239213ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:48:23.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-1588" for this suite. • [SLOW TEST:16.180 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":275,"completed":55,"skipped":1133,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:48:23.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Aug 4 10:48:23.650: INFO: Waiting up to 5m0s for pod "downward-api-d77cfb32-96b8-4474-a899-9746dd1ee682" in namespace "downward-api-1158" to be "Succeeded or Failed" Aug 4 10:48:23.666: INFO: Pod "downward-api-d77cfb32-96b8-4474-a899-9746dd1ee682": Phase="Pending", Reason="", readiness=false. Elapsed: 16.749055ms Aug 4 10:48:25.671: INFO: Pod "downward-api-d77cfb32-96b8-4474-a899-9746dd1ee682": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020983097s Aug 4 10:48:27.779: INFO: Pod "downward-api-d77cfb32-96b8-4474-a899-9746dd1ee682": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.12947529s Aug 4 10:48:30.066: INFO: Pod "downward-api-d77cfb32-96b8-4474-a899-9746dd1ee682": Phase="Running", Reason="", readiness=true. Elapsed: 6.416580349s Aug 4 10:48:32.070: INFO: Pod "downward-api-d77cfb32-96b8-4474-a899-9746dd1ee682": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.420215981s STEP: Saw pod success Aug 4 10:48:32.070: INFO: Pod "downward-api-d77cfb32-96b8-4474-a899-9746dd1ee682" satisfied condition "Succeeded or Failed" Aug 4 10:48:32.073: INFO: Trying to get logs from node kali-worker pod downward-api-d77cfb32-96b8-4474-a899-9746dd1ee682 container dapi-container: STEP: delete the pod Aug 4 10:48:32.112: INFO: Waiting for pod downward-api-d77cfb32-96b8-4474-a899-9746dd1ee682 to disappear Aug 4 10:48:32.167: INFO: Pod downward-api-d77cfb32-96b8-4474-a899-9746dd1ee682 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:48:32.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1158" for this suite. • [SLOW TEST:8.649 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":275,"completed":56,"skipped":1160,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:48:32.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 4 10:48:33.390: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 4 10:48:35.714: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134913, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134913, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134913, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63732134913, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 4 10:48:37.718: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134913, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134913, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134913, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134913, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 4 10:48:40.762: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:48:40.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4254" for this suite. STEP: Destroying namespace "webhook-4254-markers" for this suite. 
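The discovery checks performed above (finding the admissionregistration.k8s.io/v1 group/version and its mutatingwebhookconfigurations / validatingwebhookconfigurations resources) can be reproduced with the discovery client. A sketch assuming a recent client-go; the kubeconfig path is a placeholder:

// webhook_discovery.go: verify that admissionregistration.k8s.io/v1 advertises the
// webhook configuration resources in its discovery document.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Equivalent of fetching the /apis/admissionregistration.k8s.io/v1 discovery document.
	resources, err := cs.Discovery().ServerResourcesForGroupVersion("admissionregistration.k8s.io/v1")
	if err != nil {
		panic(err)
	}

	found := map[string]bool{}
	for _, r := range resources.APIResources {
		found[r.Name] = true
	}
	for _, want := range []string{"mutatingwebhookconfigurations", "validatingwebhookconfigurations"} {
		fmt.Printf("%s present in discovery: %v\n", want, found[want])
	}
}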
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.673 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":275,"completed":57,"skipped":1182,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:48:40.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Aug 4 10:48:41.782: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Aug 4 10:48:43.794: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134921, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134921, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134922, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134921, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 4 10:48:45.797: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134921, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134921, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134922, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732134921, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 4 10:48:48.899: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 4 10:48:48.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:48:50.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-8311" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:9.337 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":275,"completed":58,"skipped":1209,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:48:50.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-4c712ec2-7a3d-4cd2-b9be-35cd7e8999d4 STEP: Creating a pod to test consume configMaps Aug 4 10:48:50.336: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-59989f25-eae0-4f10-a039-47831d227c08" in namespace "projected-8119" to be "Succeeded or Failed" Aug 4 10:48:50.349: INFO: Pod "pod-projected-configmaps-59989f25-eae0-4f10-a039-47831d227c08": Phase="Pending", Reason="", readiness=false. 
Elapsed: 13.338108ms Aug 4 10:48:52.426: INFO: Pod "pod-projected-configmaps-59989f25-eae0-4f10-a039-47831d227c08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090073906s Aug 4 10:48:54.431: INFO: Pod "pod-projected-configmaps-59989f25-eae0-4f10-a039-47831d227c08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.094969694s STEP: Saw pod success Aug 4 10:48:54.431: INFO: Pod "pod-projected-configmaps-59989f25-eae0-4f10-a039-47831d227c08" satisfied condition "Succeeded or Failed" Aug 4 10:48:54.434: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-59989f25-eae0-4f10-a039-47831d227c08 container projected-configmap-volume-test: STEP: delete the pod Aug 4 10:48:54.463: INFO: Waiting for pod pod-projected-configmaps-59989f25-eae0-4f10-a039-47831d227c08 to disappear Aug 4 10:48:54.533: INFO: Pod pod-projected-configmaps-59989f25-eae0-4f10-a039-47831d227c08 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:48:54.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8119" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":59,"skipped":1225,"failed":0} ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:48:54.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-908333df-230f-4fee-ae2c-455f452fd332 STEP: Creating a pod to test consume secrets Aug 4 10:48:54.809: INFO: Waiting up to 5m0s for pod "pod-secrets-a82480d8-3599-4da5-a731-8386708c639a" in namespace "secrets-9995" to be "Succeeded or Failed" Aug 4 10:48:54.965: INFO: Pod "pod-secrets-a82480d8-3599-4da5-a731-8386708c639a": Phase="Pending", Reason="", readiness=false. Elapsed: 156.243576ms Aug 4 10:48:56.969: INFO: Pod "pod-secrets-a82480d8-3599-4da5-a731-8386708c639a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.159927723s Aug 4 10:48:58.974: INFO: Pod "pod-secrets-a82480d8-3599-4da5-a731-8386708c639a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.164542752s STEP: Saw pod success Aug 4 10:48:58.974: INFO: Pod "pod-secrets-a82480d8-3599-4da5-a731-8386708c639a" satisfied condition "Succeeded or Failed" Aug 4 10:48:58.977: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-a82480d8-3599-4da5-a731-8386708c639a container secret-volume-test: STEP: delete the pod Aug 4 10:48:59.012: INFO: Waiting for pod pod-secrets-a82480d8-3599-4da5-a731-8386708c639a to disappear Aug 4 10:48:59.026: INFO: Pod pod-secrets-a82480d8-3599-4da5-a731-8386708c639a no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:48:59.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9995" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":60,"skipped":1225,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:48:59.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-c063198f-af8a-4ff1-adf9-a6307c8d364e STEP: Creating a pod to test consume configMaps Aug 4 10:48:59.224: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-091d5795-1069-4724-9e3a-b2f8b28a6658" in namespace "projected-2921" to be "Succeeded or Failed" Aug 4 10:48:59.251: INFO: Pod "pod-projected-configmaps-091d5795-1069-4724-9e3a-b2f8b28a6658": Phase="Pending", Reason="", readiness=false. Elapsed: 27.100346ms Aug 4 10:49:01.342: INFO: Pod "pod-projected-configmaps-091d5795-1069-4724-9e3a-b2f8b28a6658": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118789849s Aug 4 10:49:03.606: INFO: Pod "pod-projected-configmaps-091d5795-1069-4724-9e3a-b2f8b28a6658": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.382361586s STEP: Saw pod success Aug 4 10:49:03.606: INFO: Pod "pod-projected-configmaps-091d5795-1069-4724-9e3a-b2f8b28a6658" satisfied condition "Succeeded or Failed" Aug 4 10:49:03.609: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-091d5795-1069-4724-9e3a-b2f8b28a6658 container projected-configmap-volume-test: STEP: delete the pod Aug 4 10:49:04.092: INFO: Waiting for pod pod-projected-configmaps-091d5795-1069-4724-9e3a-b2f8b28a6658 to disappear Aug 4 10:49:04.156: INFO: Pod pod-projected-configmaps-091d5795-1069-4724-9e3a-b2f8b28a6658 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:49:04.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2921" for this suite. 
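The projected-configMap tests above all follow the same pattern: create a ConfigMap, mount it through a projected volume (optionally remapping keys to paths and running as a non-root UID), and check the file contents from a short-lived container. The sketch below builds and creates such a pod with client-go; the pod name, ConfigMap name, key/path mapping, image, UID and namespace are illustrative, not the suite's actual fixtures.

    package main

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	nonRoot := int64(1000) // run as a non-root UID, as the non-root variants do
    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "projected-cm-demo"},
    		Spec: corev1.PodSpec{
    			RestartPolicy:   corev1.RestartPolicyNever,
    			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
    			Volumes: []corev1.Volume{{
    				Name: "projected-cm",
    				VolumeSource: corev1.VolumeSource{
    					Projected: &corev1.ProjectedVolumeSource{
    						Sources: []corev1.VolumeProjection{{
    							ConfigMap: &corev1.ConfigMapProjection{
    								LocalObjectReference: corev1.LocalObjectReference{Name: "demo-configmap"},
    								// Map one key to a different file name inside the volume.
    								Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
    							},
    						}},
    					},
    				},
    			}},
    			Containers: []corev1.Container{{
    				Name:         "cm-volume-test",
    				Image:        "docker.io/library/busybox:1.29",
    				Command:      []string{"cat", "/etc/projected-configmap-volume/path/to/data-2"},
    				VolumeMounts: []corev1.VolumeMount{{Name: "projected-cm", MountPath: "/etc/projected-configmap-volume"}},
    			}},
    		},
    	}

    	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
    		panic(err)
    	}
    }

The secret-volume variants in this section differ only in using a SecretProjection (or a plain Secret volume source) instead of the ConfigMapProjection, plus an optional per-item Mode for the "Item Mode set" cases.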
• [SLOW TEST:5.139 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":61,"skipped":1249,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:49:04.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-map-2e12c884-0260-44f9-a2d4-309f80d467e5 STEP: Creating a pod to test consume secrets Aug 4 10:49:04.368: INFO: Waiting up to 5m0s for pod "pod-secrets-7eac6df3-4766-4fe2-9e62-9e8f495afaa6" in namespace "secrets-3617" to be "Succeeded or Failed" Aug 4 10:49:04.438: INFO: Pod "pod-secrets-7eac6df3-4766-4fe2-9e62-9e8f495afaa6": Phase="Pending", Reason="", readiness=false. Elapsed: 70.315863ms Aug 4 10:49:06.443: INFO: Pod "pod-secrets-7eac6df3-4766-4fe2-9e62-9e8f495afaa6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074925226s Aug 4 10:49:08.448: INFO: Pod "pod-secrets-7eac6df3-4766-4fe2-9e62-9e8f495afaa6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.079487906s STEP: Saw pod success Aug 4 10:49:08.448: INFO: Pod "pod-secrets-7eac6df3-4766-4fe2-9e62-9e8f495afaa6" satisfied condition "Succeeded or Failed" Aug 4 10:49:08.451: INFO: Trying to get logs from node kali-worker pod pod-secrets-7eac6df3-4766-4fe2-9e62-9e8f495afaa6 container secret-volume-test: STEP: delete the pod Aug 4 10:49:08.500: INFO: Waiting for pod pod-secrets-7eac6df3-4766-4fe2-9e62-9e8f495afaa6 to disappear Aug 4 10:49:08.505: INFO: Pod pod-secrets-7eac6df3-4766-4fe2-9e62-9e8f495afaa6 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:49:08.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3617" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":62,"skipped":1331,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:49:08.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:49:15.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9080" for this suite. • [SLOW TEST:7.071 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":275,"completed":63,"skipped":1353,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:49:15.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 4 10:49:15.746: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:49:16.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6899" for this suite. 
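"Defaulting for requests and from storage" refers to the apiserver filling in defaults declared in a CRD's structural schema, both when an object is created and when an undefaulted object is read back from etcd. A minimal schema fragment with a defaulted field, using the apiextensions v1 Go types, might look like the sketch below; the field name and default value are illustrative only.

    package main

    import (
    	"fmt"

    	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
    )

    func main() {
    	// Objects created (or read from storage) without spec.cronSpec get the default applied.
    	schema := &apiextensionsv1.JSONSchemaProps{
    		Type: "object",
    		Properties: map[string]apiextensionsv1.JSONSchemaProps{
    			"spec": {
    				Type: "object",
    				Properties: map[string]apiextensionsv1.JSONSchemaProps{
    					"cronSpec": {
    						Type:    "string",
    						Default: &apiextensionsv1.JSON{Raw: []byte(`"5 0 * * *"`)},
    					},
    				},
    			},
    		},
    	}
    	fmt.Println(string(schema.Properties["spec"].Properties["cronSpec"].Default.Raw))
    }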
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":275,"completed":64,"skipped":1360,"failed":0} SS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:49:16.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Aug 4 10:49:22.860: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:49:22.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5784" for this suite. 
• [SLOW TEST:5.962 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":65,"skipped":1362,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:49:22.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1418 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine Aug 4 10:49:23.003: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-3728' Aug 4 10:49:23.102: INFO: stderr: "" Aug 4 10:49:23.102: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1423 Aug 4 10:49:23.120: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3728' Aug 4 10:49:28.139: INFO: stderr: "" Aug 4 10:49:28.139: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:49:28.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3728" for this suite. 
• [SLOW TEST:5.206 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1414 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":275,"completed":66,"skipped":1363,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:49:28.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Aug 4 10:49:28.395: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bd64c53c-3c5b-4901-83c1-cb91bd50938e" in namespace "projected-6755" to be "Succeeded or Failed" Aug 4 10:49:28.467: INFO: Pod "downwardapi-volume-bd64c53c-3c5b-4901-83c1-cb91bd50938e": Phase="Pending", Reason="", readiness=false. Elapsed: 71.951176ms Aug 4 10:49:30.483: INFO: Pod "downwardapi-volume-bd64c53c-3c5b-4901-83c1-cb91bd50938e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088660453s Aug 4 10:49:32.487: INFO: Pod "downwardapi-volume-bd64c53c-3c5b-4901-83c1-cb91bd50938e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092436703s Aug 4 10:49:34.491: INFO: Pod "downwardapi-volume-bd64c53c-3c5b-4901-83c1-cb91bd50938e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.096522655s STEP: Saw pod success Aug 4 10:49:34.491: INFO: Pod "downwardapi-volume-bd64c53c-3c5b-4901-83c1-cb91bd50938e" satisfied condition "Succeeded or Failed" Aug 4 10:49:34.494: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-bd64c53c-3c5b-4901-83c1-cb91bd50938e container client-container: STEP: delete the pod Aug 4 10:49:34.635: INFO: Waiting for pod downwardapi-volume-bd64c53c-3c5b-4901-83c1-cb91bd50938e to disappear Aug 4 10:49:34.655: INFO: Pod downwardapi-volume-bd64c53c-3c5b-4901-83c1-cb91bd50938e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:49:34.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6755" for this suite. 
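The downward API volume used above exposes the container's own resource limits as files. A sketch of the relevant projected volume source follows; the file path, container name and limit value are illustrative, and the test's container simply reads the file and the framework compares it against the limit.

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
    	limit := resource.MustParse("64Mi") // illustrative value, not the suite's fixture
    	vol := corev1.VolumeSource{
    		Projected: &corev1.ProjectedVolumeSource{
    			Sources: []corev1.VolumeProjection{{
    				DownwardAPI: &corev1.DownwardAPIProjection{
    					Items: []corev1.DownwardAPIVolumeFile{{
    						// Written as a file named "memory_limit" inside the mounted volume.
    						Path: "memory_limit",
    						ResourceFieldRef: &corev1.ResourceFieldSelector{
    							ContainerName: "client-container",
    							Resource:      "limits.memory",
    						},
    					}},
    				},
    			}},
    		},
    	}
    	fmt.Println(vol.Projected.Sources[0].DownwardAPI.Items[0].Path, limit.String())
    }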
• [SLOW TEST:6.529 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":67,"skipped":1385,"failed":0} SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:49:34.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-ftjp STEP: Creating a pod to test atomic-volume-subpath Aug 4 10:49:34.807: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-ftjp" in namespace "subpath-3546" to be "Succeeded or Failed" Aug 4 10:49:34.822: INFO: Pod "pod-subpath-test-configmap-ftjp": Phase="Pending", Reason="", readiness=false. Elapsed: 15.298487ms Aug 4 10:49:36.922: INFO: Pod "pod-subpath-test-configmap-ftjp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114844009s Aug 4 10:49:38.926: INFO: Pod "pod-subpath-test-configmap-ftjp": Phase="Running", Reason="", readiness=true. Elapsed: 4.119502492s Aug 4 10:49:40.930: INFO: Pod "pod-subpath-test-configmap-ftjp": Phase="Running", Reason="", readiness=true. Elapsed: 6.122858433s Aug 4 10:49:43.529: INFO: Pod "pod-subpath-test-configmap-ftjp": Phase="Running", Reason="", readiness=true. Elapsed: 8.722389965s Aug 4 10:49:45.534: INFO: Pod "pod-subpath-test-configmap-ftjp": Phase="Running", Reason="", readiness=true. Elapsed: 10.727235422s Aug 4 10:49:47.538: INFO: Pod "pod-subpath-test-configmap-ftjp": Phase="Running", Reason="", readiness=true. Elapsed: 12.730745135s Aug 4 10:49:49.541: INFO: Pod "pod-subpath-test-configmap-ftjp": Phase="Running", Reason="", readiness=true. Elapsed: 14.734285165s Aug 4 10:49:51.545: INFO: Pod "pod-subpath-test-configmap-ftjp": Phase="Running", Reason="", readiness=true. Elapsed: 16.738415619s Aug 4 10:49:53.549: INFO: Pod "pod-subpath-test-configmap-ftjp": Phase="Running", Reason="", readiness=true. Elapsed: 18.742384247s Aug 4 10:49:55.553: INFO: Pod "pod-subpath-test-configmap-ftjp": Phase="Running", Reason="", readiness=true. Elapsed: 20.745761608s Aug 4 10:49:57.556: INFO: Pod "pod-subpath-test-configmap-ftjp": Phase="Running", Reason="", readiness=true. Elapsed: 22.748967445s Aug 4 10:49:59.560: INFO: Pod "pod-subpath-test-configmap-ftjp": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.753024609s STEP: Saw pod success Aug 4 10:49:59.560: INFO: Pod "pod-subpath-test-configmap-ftjp" satisfied condition "Succeeded or Failed" Aug 4 10:49:59.563: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-configmap-ftjp container test-container-subpath-configmap-ftjp: STEP: delete the pod Aug 4 10:49:59.595: INFO: Waiting for pod pod-subpath-test-configmap-ftjp to disappear Aug 4 10:49:59.610: INFO: Pod pod-subpath-test-configmap-ftjp no longer exists STEP: Deleting pod pod-subpath-test-configmap-ftjp Aug 4 10:49:59.610: INFO: Deleting pod "pod-subpath-test-configmap-ftjp" in namespace "subpath-3546" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:49:59.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3546" for this suite. • [SLOW TEST:24.937 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":275,"completed":68,"skipped":1388,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:49:59.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 4 10:50:00.381: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 4 10:50:02.464: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732135000, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732135000, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732135000, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732135000, 
loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 4 10:50:05.511: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:50:06.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1359" for this suite. STEP: Destroying namespace "webhook-1359-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.695 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":275,"completed":69,"skipped":1401,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:50:06.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Aug 4 10:50:06.370: INFO: PodSpec: initContainers in spec.initContainers Aug 4 10:51:03.291: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-3c1d1494-7881-48f6-935b-5196f8c6be40", GenerateName:"", Namespace:"init-container-8230", SelfLink:"/api/v1/namespaces/init-container-8230/pods/pod-init-3c1d1494-7881-48f6-935b-5196f8c6be40", UID:"739f9713-ead5-4f99-961f-0485e19508b8", ResourceVersion:"6667268", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63732135006, loc:(*time.Location)(0x7b220e0)}}, DeletionTimestamp:(*v1.Time)(nil), 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"370149988"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002c3d1c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002c3d1e0)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002c3d200), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002c3d220)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-4kzrp", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0030d5840), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4kzrp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4kzrp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), 
StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4kzrp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0052a6f78), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kali-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002b25ab0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0052a7030)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0052a7050)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0052a7058), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0052a705c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732135006, loc:(*time.Location)(0x7b220e0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732135006, loc:(*time.Location)(0x7b220e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", 
Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732135006, loc:(*time.Location)(0x7b220e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732135006, loc:(*time.Location)(0x7b220e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.13", PodIP:"10.244.2.198", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.198"}}, StartTime:(*v1.Time)(0xc002c3d240), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002b25b90)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002b25c00)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://95e00317171189f14196b709c171f4f73f9c9d4ca421815278e3642bf4248ea7", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002c3d280), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002c3d260), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc0052a70df)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:51:03.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8230" for this suite. 
• [SLOW TEST:57.369 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":275,"completed":70,"skipped":1422,"failed":0} SSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:51:03.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Aug 4 10:51:04.239: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:51:11.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1520" for this suite. • [SLOW TEST:7.649 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":275,"completed":71,"skipped":1427,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:51:11.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Aug 4 10:51:21.869: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 4 10:51:21.883: INFO: Pod pod-with-poststart-http-hook still exists Aug 4 10:51:23.883: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 4 10:51:23.955: INFO: Pod pod-with-poststart-http-hook still exists Aug 4 10:51:25.883: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 4 10:51:25.887: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:51:25.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9897" for this suite. • [SLOW TEST:14.561 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":275,"completed":72,"skipped":1444,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:51:25.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:51:43.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2168" for this suite. • [SLOW TEST:17.168 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":275,"completed":73,"skipped":1449,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:51:43.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 4 10:51:43.190: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Aug 4 10:51:43.212: INFO: Pod name sample-pod: Found 0 pods out of 1 Aug 4 10:51:48.215: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 4 10:51:48.215: INFO: Creating deployment "test-rolling-update-deployment" Aug 4 10:51:48.219: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Aug 4 10:51:48.229: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Aug 4 10:51:50.254: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Aug 4 10:51:50.270: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732135108, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732135108, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732135108, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732135108, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-59d5cb45c7\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 4 10:51:52.294: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Aug 4 10:51:52.320: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-6213 /apis/apps/v1/namespaces/deployment-6213/deployments/test-rolling-update-deployment 63691cc6-1683-456a-be8e-a4feabe72f5d 6667570 1 2020-08-04 10:51:48 +0000 UTC map[name:sample-pod] 
map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-08-04 10:51:48 +0000 UTC FieldsV1 FieldsV1{Raw:*[... raw managedFields JSON bytes omitted ...],}} {kube-controller-manager Update apps/v1 2020-08-04 10:51:51 +0000 UTC FieldsV1 &FieldsV1{Raw:*[... raw managedFields JSON bytes omitted ...],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0048c2d48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-04 10:51:48 +0000 UTC,LastTransitionTime:2020-08-04 10:51:48 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-59d5cb45c7" has successfully progressed.,LastUpdateTime:2020-08-04 10:51:51 +0000 UTC,LastTransitionTime:2020-08-04 10:51:48 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}
Aug 4 10:51:52.323: INFO: New ReplicaSet "test-rolling-update-deployment-59d5cb45c7" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7 deployment-6213 /apis/apps/v1/namespaces/deployment-6213/replicasets/test-rolling-update-deployment-59d5cb45c7 4fb9e29b-8180-4625-a4ee-8a04b31ab70b 6667558 1 2020-08-04 10:51:48 +0000 UTC map[name:sample-pod pod-template-hash:59d5cb45c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 63691cc6-1683-456a-be8e-a4feabe72f5d 0xc00453a407 0xc00453a408}] [] [{kube-controller-manager Update apps/v1 2020-08-04 10:51:51 +0000 UTC FieldsV1 FieldsV1{Raw:*[... raw managedFields JSON bytes omitted ...],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 59d5cb45c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:59d5cb45c7] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00453a498 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Aug 4 10:51:52.323: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Aug 4 10:51:52.323: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-6213 /apis/apps/v1/namespaces/deployment-6213/replicasets/test-rolling-update-controller 88ce18f0-c185-4bb6-9dbf-ff68bf89c1d6 6667569 2 2020-08-04 10:51:43 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 63691cc6-1683-456a-be8e-a4feabe72f5d 0xc00453a2f7 0xc00453a2f8}] [] [{e2e.test Update apps/v1 2020-08-04 10:51:43 +0000 UTC FieldsV1 FieldsV1{Raw:*[... raw managedFields JSON bytes omitted ...],}} {kube-controller-manager Update apps/v1 2020-08-04 10:51:51 +0000 UTC FieldsV1 &FieldsV1{Raw:*[... raw managedFields JSON bytes omitted ...],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00453a398 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 4 10:51:52.326: INFO: Pod "test-rolling-update-deployment-59d5cb45c7-2vdc7" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7-2vdc7 test-rolling-update-deployment-59d5cb45c7- deployment-6213 /api/v1/namespaces/deployment-6213/pods/test-rolling-update-deployment-59d5cb45c7-2vdc7 4639e933-127b-4913-920c-4a5c15d47ac3 6667557 0 2020-08-04 10:51:48 +0000 UTC map[name:sample-pod pod-template-hash:59d5cb45c7] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-59d5cb45c7 4fb9e29b-8180-4625-a4ee-8a04b31ab70b 0xc00453a997 0xc00453a998}] [] [{kube-controller-manager Update v1 2020-08-04 10:51:48 +0000 UTC FieldsV1 FieldsV1{Raw:*[... raw managedFields JSON bytes omitted ...],}} {kubelet Update v1 2020-08-04 10:51:51 +0000 UTC FieldsV1 &FieldsV1{Raw:*[... raw managedFields JSON bytes omitted ...],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ckqb8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ckqb8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ckqb8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,O
verhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 10:51:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 10:51:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 10:51:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 10:51:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.201,StartTime:2020-08-04 10:51:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-04 10:51:50 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://693c0329b2acac7b043202bbf6447906b2799a8fe96fee9fd90a9f4b6be16aa9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.201,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:51:52.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6213" for this suite. 
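The dump above is the Deployment this test drives: one replica of agnhost:2.12, a RollingUpdate strategy with maxSurge and maxUnavailable of 25%, and a revision annotation that moves from the adopted httpd controller to the new ReplicaSet. For readers who want to reproduce the object outside the e2e framework, here is a minimal client-go sketch, not the framework's own code; it assumes client-go v0.18.x, the kubeconfig path shown in the log, and illustrative names ("rolling-update-demo", namespace "default").

package main

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the e2e run uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	replicas := int32(1)
	maxSurge := intstr.FromString("25%")
	maxUnavailable := intstr.FromString("25%")

	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "rolling-update-demo"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "sample-pod"}},
			// The 25%/25% rolling-update parameters seen in the dump above.
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxSurge:       &maxSurge,
					MaxUnavailable: &maxUnavailable,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "sample-pod"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "agnhost",
						Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12",
					}},
				},
			},
		},
	}

	created, err := cs.AppsV1().Deployments("default").Create(context.TODO(), dep, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created deployment", created.Name)
}

Because the selector is the same "name: sample-pod" label carried by the pre-existing test-rolling-update-controller ReplicaSet, the rolling update replaces those pods and scales the old controller to zero, which is exactly what the "All old ReplicaSets" entry above shows.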
• [SLOW TEST:9.270 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":74,"skipped":1464,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:51:52.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium Aug 4 10:51:52.599: INFO: Waiting up to 5m0s for pod "pod-c02cc99a-5bb9-4b9a-8c52-1f233ac42b62" in namespace "emptydir-6463" to be "Succeeded or Failed" Aug 4 10:51:52.632: INFO: Pod "pod-c02cc99a-5bb9-4b9a-8c52-1f233ac42b62": Phase="Pending", Reason="", readiness=false. Elapsed: 32.777084ms Aug 4 10:51:54.721: INFO: Pod "pod-c02cc99a-5bb9-4b9a-8c52-1f233ac42b62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121857838s Aug 4 10:51:56.739: INFO: Pod "pod-c02cc99a-5bb9-4b9a-8c52-1f233ac42b62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.139418042s STEP: Saw pod success Aug 4 10:51:56.739: INFO: Pod "pod-c02cc99a-5bb9-4b9a-8c52-1f233ac42b62" satisfied condition "Succeeded or Failed" Aug 4 10:51:56.742: INFO: Trying to get logs from node kali-worker pod pod-c02cc99a-5bb9-4b9a-8c52-1f233ac42b62 container test-container: STEP: delete the pod Aug 4 10:51:56.943: INFO: Waiting for pod pod-c02cc99a-5bb9-4b9a-8c52-1f233ac42b62 to disappear Aug 4 10:51:56.967: INFO: Pod pod-c02cc99a-5bb9-4b9a-8c52-1f233ac42b62 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:51:56.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6463" for this suite. 
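The emptyDir cases in this run ((non-root,0666,default) here, (root,0644,default) and (root,0666,tmpfs) below) all follow the same pattern: start a short-lived pod that mounts an emptyDir, write a file with the requested mode, and check the result from the container log once the pod reaches Succeeded. A rough equivalent of the non-root, default-medium variant is sketched below; the busybox image, the shell command, UID 1000 and the name "emptydir-demo" are illustrative stand-ins for what the framework actually runs, and a *kubernetes.Clientset built as in the Deployment sketch earlier is assumed.

package e2esketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createEmptyDirPod creates a run-to-completion pod that writes into an emptyDir mount.
func createEmptyDirPod(ctx context.Context, cs *kubernetes.Clientset, namespace string) (*corev1.Pod, error) {
	runAsUser := int64(1000) // non-root, mirroring the (non-root,...) variants
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &runAsUser},
			Volumes: []corev1.Volume{{
				Name: "scratch",
				// StorageMediumDefault is the node's default storage; StorageMediumMemory
				// would give the tmpfs variant exercised later in the log.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault}},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "touch /scratch/f && chmod 0666 /scratch/f && ls -l /scratch/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/scratch"}},
			}},
		},
	}
	return cs.CoreV1().Pods(namespace).Create(ctx, pod, metav1.CreateOptions{})
}

The "Waiting up to 5m0s ... to be 'Succeeded or Failed'" lines in the log are the poll against exactly this kind of pod; the verification afterwards is just reading its container log.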
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":75,"skipped":1488,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:51:56.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium Aug 4 10:51:57.029: INFO: Waiting up to 5m0s for pod "pod-fc4ca6c8-fe2a-4153-abfd-9b28cedae1ce" in namespace "emptydir-3671" to be "Succeeded or Failed" Aug 4 10:51:57.093: INFO: Pod "pod-fc4ca6c8-fe2a-4153-abfd-9b28cedae1ce": Phase="Pending", Reason="", readiness=false. Elapsed: 63.353624ms Aug 4 10:51:59.278: INFO: Pod "pod-fc4ca6c8-fe2a-4153-abfd-9b28cedae1ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.248552616s Aug 4 10:52:01.282: INFO: Pod "pod-fc4ca6c8-fe2a-4153-abfd-9b28cedae1ce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.253212142s Aug 4 10:52:03.287: INFO: Pod "pod-fc4ca6c8-fe2a-4153-abfd-9b28cedae1ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.25761743s STEP: Saw pod success Aug 4 10:52:03.287: INFO: Pod "pod-fc4ca6c8-fe2a-4153-abfd-9b28cedae1ce" satisfied condition "Succeeded or Failed" Aug 4 10:52:03.290: INFO: Trying to get logs from node kali-worker2 pod pod-fc4ca6c8-fe2a-4153-abfd-9b28cedae1ce container test-container: STEP: delete the pod Aug 4 10:52:03.307: INFO: Waiting for pod pod-fc4ca6c8-fe2a-4153-abfd-9b28cedae1ce to disappear Aug 4 10:52:03.330: INFO: Pod pod-fc4ca6c8-fe2a-4153-abfd-9b28cedae1ce no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:52:03.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3671" for this suite. 
• [SLOW TEST:6.363 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":76,"skipped":1491,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:52:03.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-map-4b29b8b6-c556-43df-a5a5-51f464342529 STEP: Creating a pod to test consume secrets Aug 4 10:52:03.441: INFO: Waiting up to 5m0s for pod "pod-secrets-3e0d9a7b-240f-444c-8e6e-d7a6bbe7d8ef" in namespace "secrets-3333" to be "Succeeded or Failed" Aug 4 10:52:03.480: INFO: Pod "pod-secrets-3e0d9a7b-240f-444c-8e6e-d7a6bbe7d8ef": Phase="Pending", Reason="", readiness=false. Elapsed: 39.751387ms Aug 4 10:52:05.484: INFO: Pod "pod-secrets-3e0d9a7b-240f-444c-8e6e-d7a6bbe7d8ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043018873s Aug 4 10:52:07.487: INFO: Pod "pod-secrets-3e0d9a7b-240f-444c-8e6e-d7a6bbe7d8ef": Phase="Running", Reason="", readiness=true. Elapsed: 4.046677667s Aug 4 10:52:09.492: INFO: Pod "pod-secrets-3e0d9a7b-240f-444c-8e6e-d7a6bbe7d8ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.050995003s STEP: Saw pod success Aug 4 10:52:09.492: INFO: Pod "pod-secrets-3e0d9a7b-240f-444c-8e6e-d7a6bbe7d8ef" satisfied condition "Succeeded or Failed" Aug 4 10:52:09.495: INFO: Trying to get logs from node kali-worker pod pod-secrets-3e0d9a7b-240f-444c-8e6e-d7a6bbe7d8ef container secret-volume-test: STEP: delete the pod Aug 4 10:52:09.517: INFO: Waiting for pod pod-secrets-3e0d9a7b-240f-444c-8e6e-d7a6bbe7d8ef to disappear Aug 4 10:52:09.571: INFO: Pod pod-secrets-3e0d9a7b-240f-444c-8e6e-d7a6bbe7d8ef no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:52:09.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3333" for this suite. 
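The secret-volume test above creates a Secret, mounts it with an explicit key-to-path mapping, and lets the pod print the projected file so the framework can compare the content. A hedged sketch of that wiring follows; the secret name, key, paths and image are placeholders rather than the generated names in the log, and a clientset built as in the earlier sketch is assumed.

package e2esketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// mountSecretWithMapping creates a Secret and a pod that consumes it through a mapped volume item.
func mountSecretWithMapping(ctx context.Context, cs *kubernetes.Clientset, ns string) error {
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test-map"},
		Data:       map[string][]byte{"data-1": []byte("value-1")},
	}
	if _, err := cs.CoreV1().Secrets(ns).Create(ctx, secret, metav1.CreateOptions{}); err != nil {
		return err
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{
					SecretName: "secret-test-map",
					// The "with mappings" part: key data-1 shows up as new-path-data-1, not data-1.
					Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
				}},
			}},
			Containers: []corev1.Container{{
				Name:         "secret-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
			}},
		},
	}
	_, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}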
• [SLOW TEST:6.240 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":77,"skipped":1499,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:52:09.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8203.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8203.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8203.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8203.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 4 10:52:17.742: INFO: DNS probes using dns-test-d28e0c09-90b7-47de-abef-c85981aaddda succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8203.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8203.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8203.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8203.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 4 10:52:25.862: INFO: File wheezy_udp@dns-test-service-3.dns-8203.svc.cluster.local from pod dns-8203/dns-test-5d50e4fe-11b2-469b-9ca0-9cffaaa8344d contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 4 10:52:25.865: INFO: File jessie_udp@dns-test-service-3.dns-8203.svc.cluster.local from pod dns-8203/dns-test-5d50e4fe-11b2-469b-9ca0-9cffaaa8344d contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 4 10:52:25.865: INFO: Lookups using dns-8203/dns-test-5d50e4fe-11b2-469b-9ca0-9cffaaa8344d failed for: [wheezy_udp@dns-test-service-3.dns-8203.svc.cluster.local jessie_udp@dns-test-service-3.dns-8203.svc.cluster.local] Aug 4 10:52:30.870: INFO: File wheezy_udp@dns-test-service-3.dns-8203.svc.cluster.local from pod dns-8203/dns-test-5d50e4fe-11b2-469b-9ca0-9cffaaa8344d contains 'foo.example.com. ' instead of 'bar.example.com.' 
Aug 4 10:52:30.874: INFO: File jessie_udp@dns-test-service-3.dns-8203.svc.cluster.local from pod dns-8203/dns-test-5d50e4fe-11b2-469b-9ca0-9cffaaa8344d contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 4 10:52:30.874: INFO: Lookups using dns-8203/dns-test-5d50e4fe-11b2-469b-9ca0-9cffaaa8344d failed for: [wheezy_udp@dns-test-service-3.dns-8203.svc.cluster.local jessie_udp@dns-test-service-3.dns-8203.svc.cluster.local] Aug 4 10:52:35.870: INFO: File wheezy_udp@dns-test-service-3.dns-8203.svc.cluster.local from pod dns-8203/dns-test-5d50e4fe-11b2-469b-9ca0-9cffaaa8344d contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 4 10:52:35.874: INFO: File jessie_udp@dns-test-service-3.dns-8203.svc.cluster.local from pod dns-8203/dns-test-5d50e4fe-11b2-469b-9ca0-9cffaaa8344d contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 4 10:52:35.874: INFO: Lookups using dns-8203/dns-test-5d50e4fe-11b2-469b-9ca0-9cffaaa8344d failed for: [wheezy_udp@dns-test-service-3.dns-8203.svc.cluster.local jessie_udp@dns-test-service-3.dns-8203.svc.cluster.local] Aug 4 10:52:40.871: INFO: File wheezy_udp@dns-test-service-3.dns-8203.svc.cluster.local from pod dns-8203/dns-test-5d50e4fe-11b2-469b-9ca0-9cffaaa8344d contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 4 10:52:40.875: INFO: File jessie_udp@dns-test-service-3.dns-8203.svc.cluster.local from pod dns-8203/dns-test-5d50e4fe-11b2-469b-9ca0-9cffaaa8344d contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 4 10:52:40.875: INFO: Lookups using dns-8203/dns-test-5d50e4fe-11b2-469b-9ca0-9cffaaa8344d failed for: [wheezy_udp@dns-test-service-3.dns-8203.svc.cluster.local jessie_udp@dns-test-service-3.dns-8203.svc.cluster.local] Aug 4 10:52:45.875: INFO: DNS probes using dns-test-5d50e4fe-11b2-469b-9ca0-9cffaaa8344d succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8203.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8203.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8203.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8203.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 4 10:52:54.672: INFO: DNS probes using dns-test-6101788b-6d45-4810-9c55-b7d2c2ffa010 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:52:54.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8203" for this suite. 
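The DNS block above drives one ExternalName Service through three shapes: a CNAME to foo.example.com, a CNAME to bar.example.com, and finally a plain ClusterIP, re-running the dig probes after each change. The repeated "contains 'foo.example.com.' instead of 'bar.example.com.'" lines are the probe loop waiting for the new record to propagate, not failures; the block ends with all probes succeeding. The first two steps look roughly like the sketch below (service name handling is illustrative; a clientset built as in the earlier sketch is assumed).

package e2esketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// externalNameDemo creates an ExternalName Service and then repoints it, as the test's
// first two phases do.
func externalNameDemo(ctx context.Context, cs *kubernetes.Clientset, ns string) error {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-3"},
		Spec: corev1.ServiceSpec{
			Type:         corev1.ServiceTypeExternalName,
			ExternalName: "foo.example.com", // resolves as a CNAME inside the cluster
		},
	}
	created, err := cs.CoreV1().Services(ns).Create(ctx, svc, metav1.CreateOptions{})
	if err != nil {
		return err
	}

	// Second phase of the test: point the same Service at a different external name.
	created.Spec.ExternalName = "bar.example.com"
	_, err = cs.CoreV1().Services(ns).Update(ctx, created, metav1.UpdateOptions{})
	return err
}

Inside the cluster the verification is just the dig loop quoted in the STEP lines, run from both the wheezy and jessie probe pods.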
• [SLOW TEST:45.303 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":275,"completed":78,"skipped":1536,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:52:54.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on tmpfs Aug 4 10:52:55.196: INFO: Waiting up to 5m0s for pod "pod-cb339f95-431d-4651-b6ab-494ddc320ec2" in namespace "emptydir-8909" to be "Succeeded or Failed" Aug 4 10:52:55.315: INFO: Pod "pod-cb339f95-431d-4651-b6ab-494ddc320ec2": Phase="Pending", Reason="", readiness=false. Elapsed: 119.187459ms Aug 4 10:52:57.319: INFO: Pod "pod-cb339f95-431d-4651-b6ab-494ddc320ec2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123422488s Aug 4 10:52:59.323: INFO: Pod "pod-cb339f95-431d-4651-b6ab-494ddc320ec2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.12779418s STEP: Saw pod success Aug 4 10:52:59.323: INFO: Pod "pod-cb339f95-431d-4651-b6ab-494ddc320ec2" satisfied condition "Succeeded or Failed" Aug 4 10:52:59.326: INFO: Trying to get logs from node kali-worker2 pod pod-cb339f95-431d-4651-b6ab-494ddc320ec2 container test-container: STEP: delete the pod Aug 4 10:52:59.342: INFO: Waiting for pod pod-cb339f95-431d-4651-b6ab-494ddc320ec2 to disappear Aug 4 10:52:59.346: INFO: Pod pod-cb339f95-431d-4651-b6ab-494ddc320ec2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:52:59.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8909" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":79,"skipped":1543,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:52:59.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0804 10:53:09.521805 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Aug 4 10:53:09.521: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:53:09.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7974" for this suite. 
• [SLOW TEST:10.200 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":275,"completed":80,"skipped":1554,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:53:09.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 4 10:53:09.611: INFO: Creating ReplicaSet my-hostname-basic-d6dfbcec-f217-451a-a1a8-ac9144f6c5c7 Aug 4 10:53:09.637: INFO: Pod name my-hostname-basic-d6dfbcec-f217-451a-a1a8-ac9144f6c5c7: Found 0 pods out of 1 Aug 4 10:53:14.662: INFO: Pod name my-hostname-basic-d6dfbcec-f217-451a-a1a8-ac9144f6c5c7: Found 1 pods out of 1 Aug 4 10:53:14.662: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-d6dfbcec-f217-451a-a1a8-ac9144f6c5c7" is running Aug 4 10:53:14.664: INFO: Pod "my-hostname-basic-d6dfbcec-f217-451a-a1a8-ac9144f6c5c7-lk82m" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-04 10:53:09 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-04 10:53:12 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-04 10:53:12 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-04 10:53:09 +0000 UTC Reason: Message:}]) Aug 4 10:53:14.664: INFO: Trying to dial the pod Aug 4 10:53:19.706: INFO: Controller my-hostname-basic-d6dfbcec-f217-451a-a1a8-ac9144f6c5c7: Got expected result from replica 1 [my-hostname-basic-d6dfbcec-f217-451a-a1a8-ac9144f6c5c7-lk82m]: "my-hostname-basic-d6dfbcec-f217-451a-a1a8-ac9144f6c5c7-lk82m", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:53:19.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-5413" for this suite. 
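The ReplicaSet case creates a single replica whose container answers HTTP requests with its own pod name, waits for it to run, then dials the replica and checks the response, as the "Got expected result from replica 1" line shows. A sketch of such a ReplicaSet is below; the agnhost "serve-hostname" argument and port 9376 are assumptions about that image rather than details taken from this log, the object name is illustrative, and a clientset as in the earlier sketch is assumed.

package e2esketch

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createHostnameReplicaSet creates a one-replica ReplicaSet whose pod serves its hostname.
func createHostnameReplicaSet(ctx context.Context, cs *kubernetes.Clientset, ns string) (*appsv1.ReplicaSet, error) {
	replicas := int32(1)
	labels := map[string]string{"name": "my-hostname-basic"}
	rs := &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic", Labels: labels},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "my-hostname-basic",
						Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12",
						Args:  []string{"serve-hostname"}, // assumed agnhost subcommand that echoes the pod name
						Ports: []corev1.ContainerPort{{ContainerPort: 9376}},
					}},
				},
			},
		},
	}
	return cs.AppsV1().ReplicaSets(ns).Create(ctx, rs, metav1.CreateOptions{})
}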
• [SLOW TEST:10.183 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":81,"skipped":1573,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:53:19.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-1a5824b2-8076-4e06-9d1d-f6fd612b0e65 STEP: Creating a pod to test consume configMaps Aug 4 10:53:19.843: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5c5cb15d-ff54-4acd-aa68-c82bf3218c58" in namespace "projected-5651" to be "Succeeded or Failed" Aug 4 10:53:19.861: INFO: Pod "pod-projected-configmaps-5c5cb15d-ff54-4acd-aa68-c82bf3218c58": Phase="Pending", Reason="", readiness=false. Elapsed: 18.144888ms Aug 4 10:53:21.865: INFO: Pod "pod-projected-configmaps-5c5cb15d-ff54-4acd-aa68-c82bf3218c58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022414435s Aug 4 10:53:23.962: INFO: Pod "pod-projected-configmaps-5c5cb15d-ff54-4acd-aa68-c82bf3218c58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.119427759s STEP: Saw pod success Aug 4 10:53:23.962: INFO: Pod "pod-projected-configmaps-5c5cb15d-ff54-4acd-aa68-c82bf3218c58" satisfied condition "Succeeded or Failed" Aug 4 10:53:23.966: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-5c5cb15d-ff54-4acd-aa68-c82bf3218c58 container projected-configmap-volume-test: STEP: delete the pod Aug 4 10:53:24.229: INFO: Waiting for pod pod-projected-configmaps-5c5cb15d-ff54-4acd-aa68-c82bf3218c58 to disappear Aug 4 10:53:24.249: INFO: Pod pod-projected-configmaps-5c5cb15d-ff54-4acd-aa68-c82bf3218c58 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:53:24.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5651" for this suite. 
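The projected-configMap test is the ConfigMap counterpart of the secret-volume test above: a ConfigMap key is remapped to a different path through a projected volume and the pod prints the resulting file for the framework to check. Sketched here with placeholder names and a stand-in image, assuming a clientset as in the earlier sketch.

package e2esketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// mountProjectedConfigMap creates a ConfigMap and a pod that consumes it via a projected volume.
func mountProjectedConfigMap(ctx context.Context, cs *kubernetes.Clientset, ns string) error {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-configmap-test"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	if _, err := cs.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{}); err != nil {
		return err
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						ConfigMap: &corev1.ConfigMapProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test"},
							// The mapping: key data-1 appears under a different relative path.
							Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
						},
					}},
				}},
			}},
			Containers: []corev1.Container{{
				Name:         "projected-configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-configmap-volume", MountPath: "/etc/projected-configmap-volume"}},
			}},
		},
	}
	_, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}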
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":82,"skipped":1583,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:53:24.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Aug 4 10:53:24.495: INFO: Waiting up to 5m0s for pod "downwardapi-volume-59413fb3-31d9-44a1-8602-f16d190e977f" in namespace "downward-api-4354" to be "Succeeded or Failed" Aug 4 10:53:24.498: INFO: Pod "downwardapi-volume-59413fb3-31d9-44a1-8602-f16d190e977f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.422145ms Aug 4 10:53:26.502: INFO: Pod "downwardapi-volume-59413fb3-31d9-44a1-8602-f16d190e977f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006827254s Aug 4 10:53:28.506: INFO: Pod "downwardapi-volume-59413fb3-31d9-44a1-8602-f16d190e977f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010759675s STEP: Saw pod success Aug 4 10:53:28.506: INFO: Pod "downwardapi-volume-59413fb3-31d9-44a1-8602-f16d190e977f" satisfied condition "Succeeded or Failed" Aug 4 10:53:28.509: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-59413fb3-31d9-44a1-8602-f16d190e977f container client-container: STEP: delete the pod Aug 4 10:53:28.562: INFO: Waiting for pod downwardapi-volume-59413fb3-31d9-44a1-8602-f16d190e977f to disappear Aug 4 10:53:28.566: INFO: Pod downwardapi-volume-59413fb3-31d9-44a1-8602-f16d190e977f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:53:28.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4354" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":83,"skipped":1600,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:53:28.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Aug 4 10:53:28.634: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Aug 4 10:53:28.656: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Aug 4 10:53:28.656: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Aug 4 10:53:28.688: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Aug 4 10:53:28.688: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Aug 4 10:53:28.740: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Aug 4 10:53:28.740: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Aug 4 10:53:36.181: INFO: 
limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:53:36.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-8037" for this suite. • [SLOW TEST:7.681 seconds] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":275,"completed":84,"skipped":1623,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:53:36.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Aug 4 10:53:36.401: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d6c16bca-25d3-4369-9fe1-75f038ee048f" in namespace "projected-7189" to be "Succeeded or Failed" Aug 4 10:53:36.405: INFO: Pod "downwardapi-volume-d6c16bca-25d3-4369-9fe1-75f038ee048f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062274ms Aug 4 10:53:38.561: INFO: Pod "downwardapi-volume-d6c16bca-25d3-4369-9fe1-75f038ee048f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.159580455s Aug 4 10:53:40.565: INFO: Pod "downwardapi-volume-d6c16bca-25d3-4369-9fe1-75f038ee048f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.163758572s Aug 4 10:53:42.669: INFO: Pod "downwardapi-volume-d6c16bca-25d3-4369-9fe1-75f038ee048f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.267386707s STEP: Saw pod success Aug 4 10:53:42.669: INFO: Pod "downwardapi-volume-d6c16bca-25d3-4369-9fe1-75f038ee048f" satisfied condition "Succeeded or Failed" Aug 4 10:53:42.672: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-d6c16bca-25d3-4369-9fe1-75f038ee048f container client-container: STEP: delete the pod Aug 4 10:53:42.844: INFO: Waiting for pod downwardapi-volume-d6c16bca-25d3-4369-9fe1-75f038ee048f to disappear Aug 4 10:53:42.939: INFO: Pod downwardapi-volume-d6c16bca-25d3-4369-9fe1-75f038ee048f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:53:42.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7189" for this suite. • [SLOW TEST:7.173 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":85,"skipped":1644,"failed":0} SSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:53:43.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 4 10:53:43.746: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:53:50.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-613" for this suite. 
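The websocket test above exercises the pod's exec subresource, driving remote command execution through the API server. As a rough point of reference only (this is not the code the e2e framework runs), the same kind of remote execution can be reached from client-go with the SPDY-based remotecommand helper; pod name, namespace, and command below are illustrative assumptions.

```go
package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	// Load the same kubeconfig the suite points at.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Build a request against the pod's exec subresource (hypothetical pod and namespace).
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("pods-demo").
		Name("pod-exec-demo").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Command: []string{"echo", "remote execution works"},
			Stdout:  true,
			Stderr:  true,
		}, scheme.ParameterCodec)

	// Upgrade the connection and stream the command's output back to the client.
	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var stdout, stderr bytes.Buffer
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		panic(err)
	}
	fmt.Print(stdout.String())
}
```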
• [SLOW TEST:7.293 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":275,"completed":86,"skipped":1649,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:53:50.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 4 10:53:50.817: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:53:57.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4478" for this suite. 
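The CustomResourceDefinition listing test that follows only confirms that CRD objects can be enumerated through the apiextensions API group. A minimal sketch of the same listing with the apiextensions clientset is shown below; it is illustrative only and assumes cluster-admin credentials in the usual kubeconfig location.

```go
package main

import (
	"context"
	"fmt"

	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	// The apiextensions clientset exposes the CustomResourceDefinition API group.
	aec, err := apiextensionsclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	crds, err := aec.ApiextensionsV1().CustomResourceDefinitions().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, crd := range crds.Items {
		fmt.Printf("%s (group %s)\n", crd.Name, crd.Spec.Group)
	}
}
```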
• [SLOW TEST:6.422 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":275,"completed":87,"skipped":1701,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:53:57.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 4 10:53:57.209: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6776' Aug 4 10:53:57.574: INFO: stderr: "" Aug 4 10:53:57.574: INFO: stdout: "replicationcontroller/agnhost-master created\n" Aug 4 10:53:57.574: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6776' Aug 4 10:53:57.858: INFO: stderr: "" Aug 4 10:53:57.858: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Aug 4 10:53:58.863: INFO: Selector matched 1 pods for map[app:agnhost] Aug 4 10:53:58.863: INFO: Found 0 / 1 Aug 4 10:53:59.863: INFO: Selector matched 1 pods for map[app:agnhost] Aug 4 10:53:59.863: INFO: Found 0 / 1 Aug 4 10:54:00.866: INFO: Selector matched 1 pods for map[app:agnhost] Aug 4 10:54:00.866: INFO: Found 1 / 1 Aug 4 10:54:00.866: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Aug 4 10:54:00.869: INFO: Selector matched 1 pods for map[app:agnhost] Aug 4 10:54:00.869: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Aug 4 10:54:00.869: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config describe pod agnhost-master-mgkjv --namespace=kubectl-6776' Aug 4 10:54:00.973: INFO: stderr: "" Aug 4 10:54:00.973: INFO: stdout: "Name: agnhost-master-mgkjv\nNamespace: kubectl-6776\nPriority: 0\nNode: kali-worker/172.18.0.13\nStart Time: Tue, 04 Aug 2020 10:53:57 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.210\nIPs:\n IP: 10.244.2.210\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://bf638cc71a9b38ce3303fdab9f30eaad1b021cc530f81a3c52dd78aae2a75a28\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 04 Aug 2020 10:54:00 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-5zwbk (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-5zwbk:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-5zwbk\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled default-scheduler Successfully assigned kubectl-6776/agnhost-master-mgkjv to kali-worker\n Normal Pulled 2s kubelet, kali-worker Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n Normal Created 1s kubelet, kali-worker Created container agnhost-master\n Normal Started 0s kubelet, kali-worker Started container agnhost-master\n" Aug 4 10:54:00.973: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-6776' Aug 4 10:54:01.085: INFO: stderr: "" Aug 4 10:54:01.085: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-6776\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-master-mgkjv\n" Aug 4 10:54:01.085: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-6776' Aug 4 10:54:01.197: INFO: stderr: "" Aug 4 10:54:01.197: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-6776\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.106.92.212\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.210:6379\nSession Affinity: None\nEvents: \n" Aug 4 10:54:01.200: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config describe node kali-control-plane' 
Aug 4 10:54:01.331: INFO: stderr: "" Aug 4 10:54:01.331: INFO: stdout: "Name: kali-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=kali-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Fri, 10 Jul 2020 10:27:46 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: kali-control-plane\n AcquireTime: \n RenewTime: Tue, 04 Aug 2020 10:53:52 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Tue, 04 Aug 2020 10:49:19 +0000 Fri, 10 Jul 2020 10:27:45 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 04 Aug 2020 10:49:19 +0000 Fri, 10 Jul 2020 10:27:45 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 04 Aug 2020 10:49:19 +0000 Fri, 10 Jul 2020 10:27:45 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 04 Aug 2020 10:49:19 +0000 Fri, 10 Jul 2020 10:28:23 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.16\n Hostname: kali-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nSystem Info:\n Machine ID: d83d42c4b42d4de1b3233683d9cadf95\n System UUID: e06c57c7-ce4f-4ae9-8bb6-40f1dc0e1a64\n Boot ID: 11738d2d-5baa-4089-8e7f-2fb0329fce58\n Kernel Version: 4.15.0-109-generic\n OS Image: Ubuntu 20.04 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.4.0-beta.1-34-g49b0743c\n Kubelet Version: v1.18.4\n Kube-Proxy Version: v1.18.4\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-66bff467f8-qtcqs 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 25d\n kube-system coredns-66bff467f8-tjkg9 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 25d\n kube-system etcd-kali-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 25d\n kube-system kindnet-zxw2f 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 25d\n kube-system kube-apiserver-kali-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 25d\n kube-system kube-controller-manager-kali-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 25d\n kube-system kube-proxy-xmqbs 0 (0%) 0 (0%) 0 (0%) 0 (0%) 25d\n kube-system kube-scheduler-kali-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 25d\n local-path-storage local-path-provisioner-67795f75bd-clsb6 0 (0%) 0 (0%) 0 (0%) 0 (0%) 25d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Aug 4 10:54:01.331: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config describe 
namespace kubectl-6776' Aug 4 10:54:01.472: INFO: stderr: "" Aug 4 10:54:01.472: INFO: stdout: "Name: kubectl-6776\nLabels: e2e-framework=kubectl\n e2e-run=bab4f067-a7ad-46ba-b07c-2e01c836795f\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:54:01.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6776" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":275,"completed":88,"skipped":1712,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:54:01.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Aug 4 10:54:01.546: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6026 /api/v1/namespaces/watch-6026/configmaps/e2e-watch-test-watch-closed 81a3bc0c-1701-4907-b7a4-7d97865443e6 6668517 0 2020-08-04 10:54:01 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-08-04 10:54:01 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Aug 4 10:54:01.546: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6026 /api/v1/namespaces/watch-6026/configmaps/e2e-watch-test-watch-closed 81a3bc0c-1701-4907-b7a4-7d97865443e6 6668518 0 2020-08-04 10:54:01 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-08-04 10:54:01 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to 
observe notifications for all changes to the configmap since the first watch closed Aug 4 10:54:01.593: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6026 /api/v1/namespaces/watch-6026/configmaps/e2e-watch-test-watch-closed 81a3bc0c-1701-4907-b7a4-7d97865443e6 6668519 0 2020-08-04 10:54:01 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-08-04 10:54:01 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 4 10:54:01.594: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6026 /api/v1/namespaces/watch-6026/configmaps/e2e-watch-test-watch-closed 81a3bc0c-1701-4907-b7a4-7d97865443e6 6668520 0 2020-08-04 10:54:01 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-08-04 10:54:01 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:54:01.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6026" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":275,"completed":89,"skipped":1764,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:54:01.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-9c21ecec-764b-4c74-a0f9-19cbcf93359e STEP: Creating a pod to test consume configMaps Aug 4 10:54:01.724: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-12531894-0a0b-4f9d-98db-6f32347451df" in namespace "projected-7484" to be "Succeeded or Failed" Aug 4 10:54:01.759: INFO: Pod "pod-projected-configmaps-12531894-0a0b-4f9d-98db-6f32347451df": Phase="Pending", Reason="", readiness=false. 
Elapsed: 35.427557ms Aug 4 10:54:03.763: INFO: Pod "pod-projected-configmaps-12531894-0a0b-4f9d-98db-6f32347451df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039201223s Aug 4 10:54:05.771: INFO: Pod "pod-projected-configmaps-12531894-0a0b-4f9d-98db-6f32347451df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047776113s STEP: Saw pod success Aug 4 10:54:05.771: INFO: Pod "pod-projected-configmaps-12531894-0a0b-4f9d-98db-6f32347451df" satisfied condition "Succeeded or Failed" Aug 4 10:54:05.813: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-12531894-0a0b-4f9d-98db-6f32347451df container projected-configmap-volume-test: STEP: delete the pod Aug 4 10:54:05.844: INFO: Waiting for pod pod-projected-configmaps-12531894-0a0b-4f9d-98db-6f32347451df to disappear Aug 4 10:54:05.862: INFO: Pod pod-projected-configmaps-12531894-0a0b-4f9d-98db-6f32347451df no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:54:05.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7484" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":90,"skipped":1791,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:54:05.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on tmpfs Aug 4 10:54:05.989: INFO: Waiting up to 5m0s for pod "pod-f1210f74-0f91-4da6-af15-6d627797ed01" in namespace "emptydir-9582" to be "Succeeded or Failed" Aug 4 10:54:06.000: INFO: Pod "pod-f1210f74-0f91-4da6-af15-6d627797ed01": Phase="Pending", Reason="", readiness=false. Elapsed: 10.326959ms Aug 4 10:54:08.016: INFO: Pod "pod-f1210f74-0f91-4da6-af15-6d627797ed01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02710279s Aug 4 10:54:10.020: INFO: Pod "pod-f1210f74-0f91-4da6-af15-6d627797ed01": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.030701245s STEP: Saw pod success Aug 4 10:54:10.020: INFO: Pod "pod-f1210f74-0f91-4da6-af15-6d627797ed01" satisfied condition "Succeeded or Failed" Aug 4 10:54:10.022: INFO: Trying to get logs from node kali-worker2 pod pod-f1210f74-0f91-4da6-af15-6d627797ed01 container test-container: STEP: delete the pod Aug 4 10:54:10.334: INFO: Waiting for pod pod-f1210f74-0f91-4da6-af15-6d627797ed01 to disappear Aug 4 10:54:10.363: INFO: Pod pod-f1210f74-0f91-4da6-af15-6d627797ed01 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:54:10.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9582" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":91,"skipped":1802,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:54:10.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-2601/configmap-test-68140335-2c0b-4627-9fe6-bc94e6612ffe STEP: Creating a pod to test consume configMaps Aug 4 10:54:10.462: INFO: Waiting up to 5m0s for pod "pod-configmaps-8137267b-d9ab-43e6-80c3-ab8d0504d6bb" in namespace "configmap-2601" to be "Succeeded or Failed" Aug 4 10:54:10.488: INFO: Pod "pod-configmaps-8137267b-d9ab-43e6-80c3-ab8d0504d6bb": Phase="Pending", Reason="", readiness=false. Elapsed: 25.707492ms Aug 4 10:54:12.572: INFO: Pod "pod-configmaps-8137267b-d9ab-43e6-80c3-ab8d0504d6bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110219593s Aug 4 10:54:14.577: INFO: Pod "pod-configmaps-8137267b-d9ab-43e6-80c3-ab8d0504d6bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.114522948s STEP: Saw pod success Aug 4 10:54:14.577: INFO: Pod "pod-configmaps-8137267b-d9ab-43e6-80c3-ab8d0504d6bb" satisfied condition "Succeeded or Failed" Aug 4 10:54:14.580: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-8137267b-d9ab-43e6-80c3-ab8d0504d6bb container env-test: STEP: delete the pod Aug 4 10:54:14.605: INFO: Waiting for pod pod-configmaps-8137267b-d9ab-43e6-80c3-ab8d0504d6bb to disappear Aug 4 10:54:14.621: INFO: Pod pod-configmaps-8137267b-d9ab-43e6-80c3-ab8d0504d6bb no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:54:14.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2601" for this suite. 
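The ConfigMap environment-variable test above creates a ConfigMap and a pod whose container resolves one of its keys through valueFrom.configMapKeyRef. A hedged client-go sketch of that wiring follows; the namespace, names, and image are illustrative placeholders, not the test's own fixtures.

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "configmap-demo" // hypothetical namespace
	ctx := context.TODO()

	// The ConfigMap whose key will be surfaced as an environment variable.
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-config"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	if _, err := cs.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// A one-shot pod that prints its environment; CONFIG_DATA_1 should expand to "value-1".
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "env-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "CONFIG_DATA_1",
					ValueFrom: &corev1.EnvVarSource{
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "demo-config"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```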
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":275,"completed":92,"skipped":1814,"failed":0} ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:54:14.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-8313 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 4 10:54:14.796: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Aug 4 10:54:14.839: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 4 10:54:17.103: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 4 10:54:18.842: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 4 10:54:20.843: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 4 10:54:22.843: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 4 10:54:24.843: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 4 10:54:26.843: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 4 10:54:28.843: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 4 10:54:30.843: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 4 10:54:32.848: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 4 10:54:34.862: INFO: The status of Pod netserver-0 is Running (Ready = true) Aug 4 10:54:34.868: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Aug 4 10:54:41.027: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.211:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8313 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 4 10:54:41.027: INFO: >>> kubeConfig: /root/.kube/config I0804 10:54:41.061160 7 log.go:172] (0xc00180bd90) (0xc0012b4dc0) Create stream I0804 10:54:41.061199 7 log.go:172] (0xc00180bd90) (0xc0012b4dc0) Stream added, broadcasting: 1 I0804 10:54:41.063634 7 log.go:172] (0xc00180bd90) Reply frame received for 1 I0804 10:54:41.063684 7 log.go:172] (0xc00180bd90) (0xc0012b5220) Create stream I0804 10:54:41.063697 7 log.go:172] (0xc00180bd90) (0xc0012b5220) Stream added, broadcasting: 3 I0804 10:54:41.064796 7 log.go:172] (0xc00180bd90) Reply frame received for 3 I0804 10:54:41.064839 7 log.go:172] (0xc00180bd90) (0xc0013146e0) Create stream I0804 10:54:41.064851 7 log.go:172] (0xc00180bd90) (0xc0013146e0) Stream added, broadcasting: 5 I0804 10:54:41.065839 7 log.go:172] (0xc00180bd90) Reply frame received for 5 I0804 10:54:41.128713 7 log.go:172] 
(0xc00180bd90) Data frame received for 5 I0804 10:54:41.128861 7 log.go:172] (0xc0013146e0) (5) Data frame handling I0804 10:54:41.128898 7 log.go:172] (0xc00180bd90) Data frame received for 3 I0804 10:54:41.128928 7 log.go:172] (0xc0012b5220) (3) Data frame handling I0804 10:54:41.128968 7 log.go:172] (0xc0012b5220) (3) Data frame sent I0804 10:54:41.128983 7 log.go:172] (0xc00180bd90) Data frame received for 3 I0804 10:54:41.128992 7 log.go:172] (0xc0012b5220) (3) Data frame handling I0804 10:54:41.130933 7 log.go:172] (0xc00180bd90) Data frame received for 1 I0804 10:54:41.130973 7 log.go:172] (0xc0012b4dc0) (1) Data frame handling I0804 10:54:41.130993 7 log.go:172] (0xc0012b4dc0) (1) Data frame sent I0804 10:54:41.131037 7 log.go:172] (0xc00180bd90) (0xc0012b4dc0) Stream removed, broadcasting: 1 I0804 10:54:41.131070 7 log.go:172] (0xc00180bd90) Go away received I0804 10:54:41.131177 7 log.go:172] (0xc00180bd90) (0xc0012b4dc0) Stream removed, broadcasting: 1 I0804 10:54:41.131219 7 log.go:172] (0xc00180bd90) (0xc0012b5220) Stream removed, broadcasting: 3 I0804 10:54:41.131244 7 log.go:172] (0xc00180bd90) (0xc0013146e0) Stream removed, broadcasting: 5 Aug 4 10:54:41.131: INFO: Found all expected endpoints: [netserver-0] Aug 4 10:54:41.135: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.126:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8313 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 4 10:54:41.135: INFO: >>> kubeConfig: /root/.kube/config I0804 10:54:41.172537 7 log.go:172] (0xc0020c6580) (0xc001532960) Create stream I0804 10:54:41.172590 7 log.go:172] (0xc0020c6580) (0xc001532960) Stream added, broadcasting: 1 I0804 10:54:41.175918 7 log.go:172] (0xc0020c6580) Reply frame received for 1 I0804 10:54:41.175965 7 log.go:172] (0xc0020c6580) (0xc001036000) Create stream I0804 10:54:41.175979 7 log.go:172] (0xc0020c6580) (0xc001036000) Stream added, broadcasting: 3 I0804 10:54:41.177355 7 log.go:172] (0xc0020c6580) Reply frame received for 3 I0804 10:54:41.177405 7 log.go:172] (0xc0020c6580) (0xc001314aa0) Create stream I0804 10:54:41.177427 7 log.go:172] (0xc0020c6580) (0xc001314aa0) Stream added, broadcasting: 5 I0804 10:54:41.178700 7 log.go:172] (0xc0020c6580) Reply frame received for 5 I0804 10:54:41.246350 7 log.go:172] (0xc0020c6580) Data frame received for 5 I0804 10:54:41.246390 7 log.go:172] (0xc001314aa0) (5) Data frame handling I0804 10:54:41.246431 7 log.go:172] (0xc0020c6580) Data frame received for 3 I0804 10:54:41.246449 7 log.go:172] (0xc001036000) (3) Data frame handling I0804 10:54:41.246464 7 log.go:172] (0xc001036000) (3) Data frame sent I0804 10:54:41.246472 7 log.go:172] (0xc0020c6580) Data frame received for 3 I0804 10:54:41.246486 7 log.go:172] (0xc001036000) (3) Data frame handling I0804 10:54:41.247924 7 log.go:172] (0xc0020c6580) Data frame received for 1 I0804 10:54:41.247944 7 log.go:172] (0xc001532960) (1) Data frame handling I0804 10:54:41.247955 7 log.go:172] (0xc001532960) (1) Data frame sent I0804 10:54:41.247981 7 log.go:172] (0xc0020c6580) (0xc001532960) Stream removed, broadcasting: 1 I0804 10:54:41.248000 7 log.go:172] (0xc0020c6580) Go away received I0804 10:54:41.248136 7 log.go:172] (0xc0020c6580) (0xc001532960) Stream removed, broadcasting: 1 I0804 10:54:41.248155 7 log.go:172] (0xc0020c6580) (0xc001036000) Stream removed, broadcasting: 3 I0804 10:54:41.248165 7 log.go:172] (0xc0020c6580) 
(0xc001314aa0) Stream removed, broadcasting: 5 Aug 4 10:54:41.248: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:54:41.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8313" for this suite. • [SLOW TEST:26.626 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":93,"skipped":1814,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:54:41.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Aug 4 10:54:41.363: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c1831c5b-ba26-48ec-afc3-74be2fab7458" in namespace "projected-2649" to be "Succeeded or Failed" Aug 4 10:54:41.420: INFO: Pod "downwardapi-volume-c1831c5b-ba26-48ec-afc3-74be2fab7458": Phase="Pending", Reason="", readiness=false. Elapsed: 57.347432ms Aug 4 10:54:43.424: INFO: Pod "downwardapi-volume-c1831c5b-ba26-48ec-afc3-74be2fab7458": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061596055s Aug 4 10:54:45.428: INFO: Pod "downwardapi-volume-c1831c5b-ba26-48ec-afc3-74be2fab7458": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.065578979s STEP: Saw pod success Aug 4 10:54:45.428: INFO: Pod "downwardapi-volume-c1831c5b-ba26-48ec-afc3-74be2fab7458" satisfied condition "Succeeded or Failed" Aug 4 10:54:45.432: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-c1831c5b-ba26-48ec-afc3-74be2fab7458 container client-container: STEP: delete the pod Aug 4 10:54:45.676: INFO: Waiting for pod downwardapi-volume-c1831c5b-ba26-48ec-afc3-74be2fab7458 to disappear Aug 4 10:54:45.683: INFO: Pod downwardapi-volume-c1831c5b-ba26-48ec-afc3-74be2fab7458 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:54:45.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2649" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":94,"skipped":1823,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:54:45.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Aug 4 10:54:45.765: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Aug 4 10:54:56.472: INFO: >>> kubeConfig: /root/.kube/config Aug 4 10:54:59.457: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:55:10.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5602" for this suite. 
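The CustomResourcePublishOpenAPI test above registers CRDs that serve several versions of the same group and checks that each shows up in the OpenAPI document. The sketch below outlines what such a multi-version CRD can look like when built with the apiextensions v1 types; the group, kind, and schema are made-up examples, not the names the test generates.

```go
package main

import (
	"context"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	aec, err := apiextensionsclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Every served version needs a structural schema for it to be published in OpenAPI.
	schema := &apiextv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextv1.JSONSchemaProps{
			Type: "object",
			Properties: map[string]apiextv1.JSONSchemaProps{
				"spec": {Type: "object"},
			},
		},
	}

	crd := &apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "widgets.demo.example.com"},
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "demo.example.com",
			Scope: apiextv1.NamespaceScoped,
			Names: apiextv1.CustomResourceDefinitionNames{
				Plural:   "widgets",
				Singular: "widget",
				Kind:     "Widget",
				ListKind: "WidgetList",
			},
			// Two versions of the same group: both served, exactly one marked as storage.
			Versions: []apiextv1.CustomResourceDefinitionVersion{
				{Name: "v1", Served: true, Storage: true, Schema: schema},
				{Name: "v2", Served: true, Storage: false, Schema: schema},
			},
		},
	}
	if _, err := aec.ApiextensionsV1().CustomResourceDefinitions().Create(context.TODO(), crd, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```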
• [SLOW TEST:24.441 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":275,"completed":95,"skipped":1852,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:55:10.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:55:26.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1689" for this suite. • [SLOW TEST:16.141 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":275,"completed":96,"skipped":1864,"failed":0} S ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:55:26.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod test-webserver-d0a21af9-0644-4a9e-9833-9ba50b33fb00 in namespace container-probe-3370 Aug 4 10:55:30.417: INFO: Started pod test-webserver-d0a21af9-0644-4a9e-9833-9ba50b33fb00 in namespace container-probe-3370 STEP: checking the pod's current state and verifying that restartCount 
is present Aug 4 10:55:30.420: INFO: Initial restart count of pod test-webserver-d0a21af9-0644-4a9e-9833-9ba50b33fb00 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:59:30.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3370" for this suite. • [SLOW TEST:244.327 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":97,"skipped":1865,"failed":0} SSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:59:30.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test env composition Aug 4 10:59:30.933: INFO: Waiting up to 5m0s for pod "var-expansion-dad2848a-dc56-4061-95ba-66f8a8802ffc" in namespace "var-expansion-6320" to be "Succeeded or Failed" Aug 4 10:59:31.037: INFO: Pod "var-expansion-dad2848a-dc56-4061-95ba-66f8a8802ffc": Phase="Pending", Reason="", readiness=false. Elapsed: 103.781494ms Aug 4 10:59:33.040: INFO: Pod "var-expansion-dad2848a-dc56-4061-95ba-66f8a8802ffc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107195788s Aug 4 10:59:35.230: INFO: Pod "var-expansion-dad2848a-dc56-4061-95ba-66f8a8802ffc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.296885231s Aug 4 10:59:37.234: INFO: Pod "var-expansion-dad2848a-dc56-4061-95ba-66f8a8802ffc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.301348262s STEP: Saw pod success Aug 4 10:59:37.234: INFO: Pod "var-expansion-dad2848a-dc56-4061-95ba-66f8a8802ffc" satisfied condition "Succeeded or Failed" Aug 4 10:59:37.238: INFO: Trying to get logs from node kali-worker pod var-expansion-dad2848a-dc56-4061-95ba-66f8a8802ffc container dapi-container: STEP: delete the pod Aug 4 10:59:37.272: INFO: Waiting for pod var-expansion-dad2848a-dc56-4061-95ba-66f8a8802ffc to disappear Aug 4 10:59:37.289: INFO: Pod var-expansion-dad2848a-dc56-4061-95ba-66f8a8802ffc no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:59:37.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6320" for this suite. 
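The Variable Expansion test above relies on the kubelet substituting $(VAR) references inside a container's env values with previously defined variables. A minimal sketch of a pod spec using that composition is shown below; the image, values, and namespace are illustrative assumptions.

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					{Name: "FOO", Value: "foo-value"},
					{Name: "BAR", Value: "bar-value"},
					// $(FOO) and $(BAR) are expanded before the container starts,
					// so FOOBAR should read "foo-value;;bar-value" inside the pod.
					{Name: "FOOBAR", Value: "$(FOO);;$(BAR)"},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("var-expansion-demo").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```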
• [SLOW TEST:6.684 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":275,"completed":98,"skipped":1868,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:59:37.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 4 10:59:38.263: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 4 10:59:40.336: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732135578, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732135578, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732135578, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732135578, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 4 10:59:42.370: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732135578, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732135578, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732135578, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732135578, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, 
CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 4 10:59:45.355: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 10:59:57.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5900" for this suite. STEP: Destroying namespace "webhook-5900-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:20.334 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":275,"completed":99,"skipped":1870,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 10:59:57.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 4 10:59:58.526: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 4 11:00:00.536: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732135598, loc:(*time.Location)(0x7b220e0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732135598, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732135598, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732135598, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 4 11:00:02.541: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732135598, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732135598, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732135598, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732135598, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 4 11:00:05.569: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 11:00:05.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2200" for this suite. STEP: Destroying namespace "webhook-2200-markers" for this suite. 
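Both webhook tests above register admission webhooks against a service backed by the sample-webhook-deployment and then vary timeoutSeconds and failurePolicy. As a hedged illustration of what such a registration can look like through the admissionregistration/v1 API (the webhook name, service, path, and CA bundle below are placeholders, not the fixtures the suite generates):

```go
package main

import (
	"context"

	admissionv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	var caBundle []byte // PEM-encoded CA for the webhook's serving certificate (assumed to exist)
	path := "/mutating-configmaps"
	timeout := int32(1)                 // deliberately shorter than a slow webhook's latency
	failurePolicy := admissionv1.Ignore // with Ignore, a timed-out call does not reject the request
	sideEffects := admissionv1.SideEffectClassNone

	webhookCfg := &admissionv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-slow-webhook"},
		Webhooks: []admissionv1.MutatingWebhook{{
			Name: "slow-configmap-mutator.demo.example.com",
			ClientConfig: admissionv1.WebhookClientConfig{
				Service: &admissionv1.ServiceReference{
					Namespace: "webhook-demo",
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
				CABundle: caBundle,
			},
			Rules: []admissionv1.RuleWithOperations{{
				Operations: []admissionv1.OperationType{admissionv1.Create},
				Rule: admissionv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"configmaps"},
				},
			}},
			FailurePolicy:           &failurePolicy,
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1"},
			TimeoutSeconds:          &timeout,
		}},
	}
	if _, err := cs.AdmissionregistrationV1().MutatingWebhookConfigurations().Create(
		context.TODO(), webhookCfg, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```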
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.189 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":275,"completed":100,"skipped":1882,"failed":0} SSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 11:00:05.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Aug 4 11:00:06.257: INFO: Waiting up to 5m0s for pod "downward-api-6c99d93e-b6a2-4c7c-a980-f0fb16a39c68" in namespace "downward-api-7773" to be "Succeeded or Failed" Aug 4 11:00:06.360: INFO: Pod "downward-api-6c99d93e-b6a2-4c7c-a980-f0fb16a39c68": Phase="Pending", Reason="", readiness=false. Elapsed: 103.547723ms Aug 4 11:00:08.365: INFO: Pod "downward-api-6c99d93e-b6a2-4c7c-a980-f0fb16a39c68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107993757s Aug 4 11:00:10.369: INFO: Pod "downward-api-6c99d93e-b6a2-4c7c-a980-f0fb16a39c68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.111922492s STEP: Saw pod success Aug 4 11:00:10.369: INFO: Pod "downward-api-6c99d93e-b6a2-4c7c-a980-f0fb16a39c68" satisfied condition "Succeeded or Failed" Aug 4 11:00:10.371: INFO: Trying to get logs from node kali-worker2 pod downward-api-6c99d93e-b6a2-4c7c-a980-f0fb16a39c68 container dapi-container: STEP: delete the pod Aug 4 11:00:10.412: INFO: Waiting for pod downward-api-6c99d93e-b6a2-4c7c-a980-f0fb16a39c68 to disappear Aug 4 11:00:10.422: INFO: Pod downward-api-6c99d93e-b6a2-4c7c-a980-f0fb16a39c68 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 11:00:10.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7773" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":275,"completed":101,"skipped":1888,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 11:00:10.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine Aug 4 11:00:10.554: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-5741' Aug 4 11:00:14.816: INFO: stderr: "" Aug 4 11:00:14.816: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Aug 4 11:00:24.867: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-5741 -o json' Aug 4 11:00:24.956: INFO: stderr: "" Aug 4 11:00:24.956: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-08-04T11:00:14Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl\",\n \"operation\": \"Update\",\n \"time\": \"2020-08-04T11:00:14Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n 
\"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.1.132\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-08-04T11:00:21Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-5741\",\n \"resourceVersion\": \"6670172\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-5741/pods/e2e-test-httpd-pod\",\n \"uid\": \"fc314f9a-f36c-42f6-95f1-7aa32d3b41cc\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-5btw2\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"kali-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-5btw2\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-5btw2\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-04T11:00:14Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-04T11:00:21Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-04T11:00:21Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-04T11:00:14Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://4a79e1881bcf9aa04cb02bfda4a3a28a480f3fe7f367eb47f43503dae784944e\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-08-04T11:00:20Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.15\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.132\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.132\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-08-04T11:00:14Z\"\n }\n}\n" STEP: replace the image in the pod Aug 4 11:00:24.957: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-5741' Aug 4 11:00:25.708: INFO: stderr: "" Aug 4 11:00:25.708: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image 
docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Aug 4 11:00:25.759: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-5741' Aug 4 11:00:29.818: INFO: stderr: "" Aug 4 11:00:29.818: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 11:00:29.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5741" for this suite. • [SLOW TEST:19.387 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":275,"completed":102,"skipped":1916,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 11:00:29.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override arguments Aug 4 11:00:29.916: INFO: Waiting up to 5m0s for pod "client-containers-e129e14b-5d84-4e75-9a0d-2dfdb610598f" in namespace "containers-8486" to be "Succeeded or Failed" Aug 4 11:00:29.919: INFO: Pod "client-containers-e129e14b-5d84-4e75-9a0d-2dfdb610598f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.687814ms Aug 4 11:00:31.924: INFO: Pod "client-containers-e129e14b-5d84-4e75-9a0d-2dfdb610598f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007908942s Aug 4 11:00:33.950: INFO: Pod "client-containers-e129e14b-5d84-4e75-9a0d-2dfdb610598f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.034404652s STEP: Saw pod success Aug 4 11:00:33.950: INFO: Pod "client-containers-e129e14b-5d84-4e75-9a0d-2dfdb610598f" satisfied condition "Succeeded or Failed" Aug 4 11:00:33.953: INFO: Trying to get logs from node kali-worker2 pod client-containers-e129e14b-5d84-4e75-9a0d-2dfdb610598f container test-container: STEP: delete the pod Aug 4 11:00:34.253: INFO: Waiting for pod client-containers-e129e14b-5d84-4e75-9a0d-2dfdb610598f to disappear Aug 4 11:00:34.348: INFO: Pod client-containers-e129e14b-5d84-4e75-9a0d-2dfdb610598f no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 11:00:34.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8486" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":275,"completed":103,"skipped":1941,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 11:00:34.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Aug 4 11:00:34.476: INFO: Waiting up to 5m0s for pod "downwardapi-volume-57f20e19-2f27-4dd3-a855-36057c91e063" in namespace "projected-7177" to be "Succeeded or Failed" Aug 4 11:00:34.514: INFO: Pod "downwardapi-volume-57f20e19-2f27-4dd3-a855-36057c91e063": Phase="Pending", Reason="", readiness=false. Elapsed: 38.137703ms Aug 4 11:00:36.518: INFO: Pod "downwardapi-volume-57f20e19-2f27-4dd3-a855-36057c91e063": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041840275s Aug 4 11:00:38.522: INFO: Pod "downwardapi-volume-57f20e19-2f27-4dd3-a855-36057c91e063": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046422724s STEP: Saw pod success Aug 4 11:00:38.522: INFO: Pod "downwardapi-volume-57f20e19-2f27-4dd3-a855-36057c91e063" satisfied condition "Succeeded or Failed" Aug 4 11:00:38.526: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-57f20e19-2f27-4dd3-a855-36057c91e063 container client-container: STEP: delete the pod Aug 4 11:00:38.562: INFO: Waiting for pod downwardapi-volume-57f20e19-2f27-4dd3-a855-36057c91e063 to disappear Aug 4 11:00:38.568: INFO: Pod downwardapi-volume-57f20e19-2f27-4dd3-a855-36057c91e063 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 11:00:38.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7177" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":104,"skipped":1944,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 11:00:38.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Aug 4 11:00:48.736: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 4 11:00:48.741: INFO: Pod pod-with-poststart-exec-hook still exists Aug 4 11:00:50.741: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 4 11:00:50.745: INFO: Pod pod-with-poststart-exec-hook still exists Aug 4 11:00:52.741: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 4 11:00:52.746: INFO: Pod pod-with-poststart-exec-hook still exists Aug 4 11:00:54.741: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 4 11:00:54.745: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 11:00:54.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-251" for this suite. 
• [SLOW TEST:16.181 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":275,"completed":105,"skipped":1980,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 11:00:54.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Aug 4 11:00:54.820: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bbe68826-11c1-4cad-8964-dd690fdea919" in namespace "downward-api-3334" to be "Succeeded or Failed" Aug 4 11:00:54.824: INFO: Pod "downwardapi-volume-bbe68826-11c1-4cad-8964-dd690fdea919": Phase="Pending", Reason="", readiness=false. Elapsed: 3.87594ms Aug 4 11:00:56.828: INFO: Pod "downwardapi-volume-bbe68826-11c1-4cad-8964-dd690fdea919": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007821495s Aug 4 11:00:58.833: INFO: Pod "downwardapi-volume-bbe68826-11c1-4cad-8964-dd690fdea919": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012260464s STEP: Saw pod success Aug 4 11:00:58.833: INFO: Pod "downwardapi-volume-bbe68826-11c1-4cad-8964-dd690fdea919" satisfied condition "Succeeded or Failed" Aug 4 11:00:58.836: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-bbe68826-11c1-4cad-8964-dd690fdea919 container client-container: STEP: delete the pod Aug 4 11:00:58.865: INFO: Waiting for pod downwardapi-volume-bbe68826-11c1-4cad-8964-dd690fdea919 to disappear Aug 4 11:00:58.899: INFO: Pod downwardapi-volume-bbe68826-11c1-4cad-8964-dd690fdea919 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 11:00:58.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3334" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":106,"skipped":2005,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 11:00:58.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service endpoint-test2 in namespace services-2361 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2361 to expose endpoints map[] Aug 4 11:00:59.049: INFO: Get endpoints failed (60.179546ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Aug 4 11:01:00.051: INFO: successfully validated that service endpoint-test2 in namespace services-2361 exposes endpoints map[] (1.062507705s elapsed) STEP: Creating pod pod1 in namespace services-2361 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2361 to expose endpoints map[pod1:[80]] Aug 4 11:01:04.438: INFO: successfully validated that service endpoint-test2 in namespace services-2361 exposes endpoints map[pod1:[80]] (4.379713338s elapsed) STEP: Creating pod pod2 in namespace services-2361 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2361 to expose endpoints map[pod1:[80] pod2:[80]] Aug 4 11:01:08.509: INFO: successfully validated that service endpoint-test2 in namespace services-2361 exposes endpoints map[pod1:[80] pod2:[80]] (4.067487427s elapsed) STEP: Deleting pod pod1 in namespace services-2361 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2361 to expose endpoints map[pod2:[80]] Aug 4 11:01:09.558: INFO: successfully validated that service endpoint-test2 in namespace services-2361 exposes endpoints map[pod2:[80]] (1.043698031s elapsed) STEP: Deleting pod pod2 in namespace services-2361 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2361 to expose endpoints map[] Aug 4 11:01:10.591: INFO: successfully validated that service endpoint-test2 in namespace services-2361 exposes endpoints map[] (1.029069923s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 11:01:10.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2361" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:11.855 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":275,"completed":107,"skipped":2024,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 11:01:10.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 4 11:01:10.948: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config version' Aug 4 11:01:11.102: INFO: stderr: "" Aug 4 11:01:11.102: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.5\", GitCommit:\"e6503f8d8f769ace2f338794c914a96fc335df0f\", GitTreeState:\"clean\", BuildDate:\"2020-07-09T18:53:46Z\", GoVersion:\"go1.13.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.4\", GitCommit:\"c96aede7b5205121079932896c4ad89bb93260af\", GitTreeState:\"clean\", BuildDate:\"2020-06-20T01:49:49Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 11:01:11.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6083" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":275,"completed":108,"skipped":2046,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 11:01:11.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
Aug 4 11:01:11.176: INFO: Created pod &Pod{ObjectMeta:{dns-2975 dns-2975 /api/v1/namespaces/dns-2975/pods/dns-2975 a8907a7d-1dc9-4047-9ef8-d9bb12213f6c 6670508 0 2020-08-04 11:01:11 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-08-04 11:01:11 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 114 103 115 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 67 111 110 102 105 103 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 115 101 114 118 101 114 115 34 58 123 125 44 34 102 58 115 101 97 114 99 104 101 115 34 58 123 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t7fj4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t7fj4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t7fj4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]Local
ObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 4 11:01:11.186: INFO: The status of Pod dns-2975 is Pending, waiting for it to be Running (with Ready = true) Aug 4 11:01:13.271: INFO: The status of Pod dns-2975 is Pending, waiting for it to be Running (with Ready = true) Aug 4 11:01:15.191: INFO: The status of Pod dns-2975 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... Aug 4 11:01:15.191: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-2975 PodName:dns-2975 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 4 11:01:15.191: INFO: >>> kubeConfig: /root/.kube/config I0804 11:01:15.232141 7 log.go:172] (0xc002d48a50) (0xc001d6a460) Create stream I0804 11:01:15.232173 7 log.go:172] (0xc002d48a50) (0xc001d6a460) Stream added, broadcasting: 1 I0804 11:01:15.234121 7 log.go:172] (0xc002d48a50) Reply frame received for 1 I0804 11:01:15.234175 7 log.go:172] (0xc002d48a50) (0xc0012b4140) Create stream I0804 11:01:15.234192 7 log.go:172] (0xc002d48a50) (0xc0012b4140) Stream added, broadcasting: 3 I0804 11:01:15.235159 7 log.go:172] (0xc002d48a50) Reply frame received for 3 I0804 11:01:15.235194 7 log.go:172] (0xc002d48a50) (0xc001190960) Create stream I0804 11:01:15.235207 7 log.go:172] (0xc002d48a50) (0xc001190960) Stream added, broadcasting: 5 I0804 11:01:15.236000 7 log.go:172] (0xc002d48a50) Reply frame received for 5 I0804 11:01:15.325776 7 log.go:172] (0xc002d48a50) Data frame received for 3 I0804 11:01:15.325801 7 log.go:172] (0xc0012b4140) (3) Data frame handling I0804 11:01:15.325828 7 log.go:172] (0xc0012b4140) (3) Data frame sent I0804 11:01:15.326370 7 log.go:172] (0xc002d48a50) Data frame received for 3 I0804 11:01:15.326389 7 log.go:172] (0xc0012b4140) (3) Data frame handling I0804 11:01:15.326567 7 log.go:172] (0xc002d48a50) Data frame received for 5 I0804 11:01:15.326583 7 log.go:172] (0xc001190960) (5) Data frame handling I0804 11:01:15.328791 7 log.go:172] (0xc002d48a50) Data frame received for 1 I0804 11:01:15.328815 7 log.go:172] (0xc001d6a460) (1) Data frame handling I0804 11:01:15.328837 7 log.go:172] (0xc001d6a460) (1) Data frame sent I0804 11:01:15.328938 7 log.go:172] (0xc002d48a50) (0xc001d6a460) Stream removed, broadcasting: 1 I0804 11:01:15.328989 7 log.go:172] (0xc002d48a50) (0xc001d6a460) Stream removed, broadcasting: 1 I0804 11:01:15.329000 7 log.go:172] (0xc002d48a50) (0xc0012b4140) Stream removed, broadcasting: 3 I0804 
11:01:15.329005 7 log.go:172] (0xc002d48a50) (0xc001190960) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... Aug 4 11:01:15.329: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-2975 PodName:dns-2975 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 4 11:01:15.329: INFO: >>> kubeConfig: /root/.kube/config I0804 11:01:15.329112 7 log.go:172] (0xc002d48a50) Go away received I0804 11:01:15.367796 7 log.go:172] (0xc002d06a50) (0xc001191040) Create stream I0804 11:01:15.367840 7 log.go:172] (0xc002d06a50) (0xc001191040) Stream added, broadcasting: 1 I0804 11:01:15.369757 7 log.go:172] (0xc002d06a50) Reply frame received for 1 I0804 11:01:15.369798 7 log.go:172] (0xc002d06a50) (0xc001191180) Create stream I0804 11:01:15.369811 7 log.go:172] (0xc002d06a50) (0xc001191180) Stream added, broadcasting: 3 I0804 11:01:15.370678 7 log.go:172] (0xc002d06a50) Reply frame received for 3 I0804 11:01:15.370714 7 log.go:172] (0xc002d06a50) (0xc001e82500) Create stream I0804 11:01:15.370726 7 log.go:172] (0xc002d06a50) (0xc001e82500) Stream added, broadcasting: 5 I0804 11:01:15.371654 7 log.go:172] (0xc002d06a50) Reply frame received for 5 I0804 11:01:15.442742 7 log.go:172] (0xc002d06a50) Data frame received for 3 I0804 11:01:15.442774 7 log.go:172] (0xc001191180) (3) Data frame handling I0804 11:01:15.442809 7 log.go:172] (0xc001191180) (3) Data frame sent I0804 11:01:15.443812 7 log.go:172] (0xc002d06a50) Data frame received for 5 I0804 11:01:15.443848 7 log.go:172] (0xc001e82500) (5) Data frame handling I0804 11:01:15.443974 7 log.go:172] (0xc002d06a50) Data frame received for 3 I0804 11:01:15.443989 7 log.go:172] (0xc001191180) (3) Data frame handling I0804 11:01:15.445734 7 log.go:172] (0xc002d06a50) Data frame received for 1 I0804 11:01:15.445750 7 log.go:172] (0xc001191040) (1) Data frame handling I0804 11:01:15.445774 7 log.go:172] (0xc001191040) (1) Data frame sent I0804 11:01:15.445854 7 log.go:172] (0xc002d06a50) (0xc001191040) Stream removed, broadcasting: 1 I0804 11:01:15.445908 7 log.go:172] (0xc002d06a50) Go away received I0804 11:01:15.445994 7 log.go:172] (0xc002d06a50) (0xc001191040) Stream removed, broadcasting: 1 I0804 11:01:15.446022 7 log.go:172] (0xc002d06a50) (0xc001191180) Stream removed, broadcasting: 3 I0804 11:01:15.446046 7 log.go:172] (0xc002d06a50) (0xc001e82500) Stream removed, broadcasting: 5 Aug 4 11:01:15.446: INFO: Deleting pod dns-2975... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 11:01:15.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2975" for this suite. 
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":275,"completed":109,"skipped":2069,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 11:01:15.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0804 11:01:17.629799 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Aug 4 11:01:17.629: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 11:01:17.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6209" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":275,"completed":110,"skipped":2121,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 11:01:17.641: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-b8744c4f-2b62-4a50-a0c9-c6c765cc425f STEP: Creating a pod to test consume configMaps Aug 4 11:01:17.745: INFO: Waiting up to 5m0s for pod "pod-configmaps-e1653f69-b851-486c-a0ef-2bdbd196b01c" in namespace "configmap-7319" to be "Succeeded or Failed" Aug 4 11:01:17.749: INFO: Pod "pod-configmaps-e1653f69-b851-486c-a0ef-2bdbd196b01c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008467ms Aug 4 11:01:19.938: INFO: Pod "pod-configmaps-e1653f69-b851-486c-a0ef-2bdbd196b01c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193280295s Aug 4 11:01:21.954: INFO: Pod "pod-configmaps-e1653f69-b851-486c-a0ef-2bdbd196b01c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.208460767s Aug 4 11:01:24.032: INFO: Pod "pod-configmaps-e1653f69-b851-486c-a0ef-2bdbd196b01c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.286409125s STEP: Saw pod success Aug 4 11:01:24.032: INFO: Pod "pod-configmaps-e1653f69-b851-486c-a0ef-2bdbd196b01c" satisfied condition "Succeeded or Failed" Aug 4 11:01:24.035: INFO: Trying to get logs from node kali-worker pod pod-configmaps-e1653f69-b851-486c-a0ef-2bdbd196b01c container configmap-volume-test: STEP: delete the pod Aug 4 11:01:24.698: INFO: Waiting for pod pod-configmaps-e1653f69-b851-486c-a0ef-2bdbd196b01c to disappear Aug 4 11:01:24.718: INFO: Pod pod-configmaps-e1653f69-b851-486c-a0ef-2bdbd196b01c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 4 11:01:24.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7319" for this suite. 
• [SLOW TEST:7.087 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":111,"skipped":2128,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 4 11:01:24.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 4 11:01:24.915: INFO: (0) /api/v1/nodes/kali-worker/proxy/logs/:
alternatives.log
containers/

[... the same directory listing (alternatives.log, containers/) is returned for each of the 20 proxied requests to the node's /proxy/logs/ endpoint ...]
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume  [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-58c9bdd4-dfc9-4eec-a8e6-cbd3862f6e24
STEP: Creating configMap with name cm-test-opt-upd-bc6c32f6-cbd6-42b0-97ad-3bb4f154bc78
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-58c9bdd4-dfc9-4eec-a8e6-cbd3862f6e24
STEP: Updating configmap cm-test-opt-upd-bc6c32f6-cbd6-42b0-97ad-3bb4f154bc78
STEP: Creating configMap with name cm-test-opt-create-5558d426-9337-4a8f-9a61-403451b868df
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:01:34.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3252" for this suite.

• [SLOW TEST:9.251 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":113,"skipped":2161,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:01:34.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting the proxy server
Aug  4 11:01:34.311: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:01:34.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6488" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":275,"completed":114,"skipped":2186,"failed":0}
SS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:01:34.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Aug  4 11:01:34.516: INFO: Pod name pod-release: Found 0 pods out of 1
Aug  4 11:01:39.530: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:01:39.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-442" for this suite.

• [SLOW TEST:5.897 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":275,"completed":115,"skipped":2188,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:01:40.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug  4 11:01:40.719: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Aug  4 11:01:45.723: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug  4 11:01:47.840: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Aug  4 11:01:48.279: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-4648 /apis/apps/v1/namespaces/deployment-4648/deployments/test-cleanup-deployment d9a55ada-304f-4fb9-ac74-082438d6cd96 6670851 1 2020-08-04 11:01:47 +0000 UTC   map[name:cleanup-pod] map[] [] []  [{e2e.test Update apps/v1 2020-08-04 11:01:47 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0041f2758  ClusterFirst map[]     false false false  
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},}

Aug  4 11:01:48.962: INFO: New ReplicaSet "test-cleanup-deployment-b4867b47f" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-b4867b47f  deployment-4648 /apis/apps/v1/namespaces/deployment-4648/replicasets/test-cleanup-deployment-b4867b47f 8bbb9a7d-6d1b-4dfc-9a37-62f4873f264a 6670859 1 2020-08-04 11:01:48 +0000 UTC   map[name:cleanup-pod pod-template-hash:b4867b47f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment d9a55ada-304f-4fb9-ac74-082438d6cd96 0xc0041f2c60 0xc0041f2c61}] []  [{kube-controller-manager Update apps/v1 2020-08-04 11:01:48 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 57 97 53 53 97 100 97 45 51 48 52 102 45 52 102 98 57 45 97 99 55 52 45 48 56 50 52 51 56 100 54 99 100 57 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 
121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: b4867b47f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod-template-hash:b4867b47f] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0041f2cd8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug  4 11:01:48.962: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Aug  4 11:01:48.962: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller  deployment-4648 /apis/apps/v1/namespaces/deployment-4648/replicasets/test-cleanup-controller fedb5426-fd3e-4e2e-ac1c-da68089f6fb7 6670852 1 2020-08-04 11:01:40 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment d9a55ada-304f-4fb9-ac74-082438d6cd96 0xc0041f2b3f 0xc0041f2b60}] []  [{e2e.test Update apps/v1 2020-08-04 11:01:40 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-08-04 11:01:48 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 57 97 53 53 97 100 97 45 51 48 52 102 45 52 102 98 57 45 97 99 55 52 45 48 56 50 52 51 56 100 54 99 100 57 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 
82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0041f2bf8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Aug  4 11:01:49.305: INFO: Pod "test-cleanup-controller-744bp" is available:
&Pod{ObjectMeta:{test-cleanup-controller-744bp test-cleanup-controller- deployment-4648 /api/v1/namespaces/deployment-4648/pods/test-cleanup-controller-744bp f0141f37-eb28-4563-ab10-1fd53e4dd062 6670835 0 2020-08-04 11:01:40 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller fedb5426-fd3e-4e2e-ac1c-da68089f6fb7 0xc0041f3227 0xc0041f3228}] []  [{kube-controller-manager Update v1 2020-08-04 11:01:40 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 102 101 100 98 53 52 50 54 45 102 100 51 101 45 52 101 50 101 45 97 99 49 99 45 100 97 54 56 48 56 57 102 54 102 98 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-04 11:01:46 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 
116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 49 52 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6lk8v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6lk8v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6lk8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondit
ion{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:01:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:01:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:01:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:01:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.143,StartTime:2020-08-04 11:01:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-04 11:01:45 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://139c68e934c7c976b18759289dd34134144f2cd683f65a15fc46f0bf7c4e2b16,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.143,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug  4 11:01:49.305: INFO: Pod "test-cleanup-deployment-b4867b47f-vc72c" is not available:
&Pod{ObjectMeta:{test-cleanup-deployment-b4867b47f-vc72c test-cleanup-deployment-b4867b47f- deployment-4648 /api/v1/namespaces/deployment-4648/pods/test-cleanup-deployment-b4867b47f-vc72c 502b8d45-19a7-472a-b9e5-d699110e0d8b 6670858 0 2020-08-04 11:01:48 +0000 UTC   map[name:cleanup-pod pod-template-hash:b4867b47f] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-b4867b47f 8bbb9a7d-6d1b-4dfc-9a37-62f4873f264a 0xc0041f3400 0xc0041f3401}] []  [{kube-controller-manager Update v1 2020-08-04 11:01:48 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 56 98 98 98 57 97 55 100 45 54 100 49 98 45 52 100 102 99 45 57 97 51 55 45 54 50 102 52 56 55 51 102 50 54 52 97 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6lk8v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6lk8v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6lk8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:01:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:01:49.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4648" for this suite.

• [SLOW TEST:9.562 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":275,"completed":116,"skipped":2205,"failed":0}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:01:49.868: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1288
STEP: creating a pod
Aug  4 11:01:50.890: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-2771 -- logs-generator --log-lines-total 100 --run-duration 20s'
Aug  4 11:01:51.030: INFO: stderr: ""
Aug  4 11:01:51.031: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Waiting for log generator to start.
Aug  4 11:01:51.031: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Aug  4 11:01:51.031: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-2771" to be "running and ready, or succeeded"
Aug  4 11:01:51.324: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 293.099633ms
Aug  4 11:01:53.355: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.324880327s
Aug  4 11:01:55.361: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.329951373s
Aug  4 11:01:57.365: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 6.334293447s
Aug  4 11:01:57.365: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Aug  4 11:01:57.365: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Aug  4 11:01:57.365: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2771'
Aug  4 11:01:57.471: INFO: stderr: ""
Aug  4 11:01:57.471: INFO: stdout: "I0804 11:01:54.776009       1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/hxp9 279\nI0804 11:01:54.976139       1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/h47 497\nI0804 11:01:55.176249       1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/tg4 403\nI0804 11:01:55.376142       1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/5k2 248\nI0804 11:01:55.576279       1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/svqn 207\nI0804 11:01:55.776165       1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/crzq 558\nI0804 11:01:55.976172       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/5x5 559\nI0804 11:01:56.176207       1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/2w2p 322\nI0804 11:01:56.376205       1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/h2r 589\nI0804 11:01:56.576185       1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/j9m 529\nI0804 11:01:56.776204       1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/c7sx 469\nI0804 11:01:56.976212       1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/f7b 545\nI0804 11:01:57.176146       1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/vq5x 259\nI0804 11:01:57.376198       1 logs_generator.go:76] 13 PUT /api/v1/namespaces/kube-system/pods/khjl 383\n"
STEP: limiting log lines
Aug  4 11:01:57.471: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2771 --tail=1'
Aug  4 11:01:57.580: INFO: stderr: ""
Aug  4 11:01:57.580: INFO: stdout: "I0804 11:01:57.376198       1 logs_generator.go:76] 13 PUT /api/v1/namespaces/kube-system/pods/khjl 383\n"
Aug  4 11:01:57.580: INFO: got output "I0804 11:01:57.376198       1 logs_generator.go:76] 13 PUT /api/v1/namespaces/kube-system/pods/khjl 383\n"
STEP: limiting log bytes
Aug  4 11:01:57.580: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2771 --limit-bytes=1'
Aug  4 11:01:57.693: INFO: stderr: ""
Aug  4 11:01:57.693: INFO: stdout: "I"
Aug  4 11:01:57.693: INFO: got output "I"
STEP: exposing timestamps
Aug  4 11:01:57.693: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2771 --tail=1 --timestamps'
Aug  4 11:01:57.794: INFO: stderr: ""
Aug  4 11:01:57.794: INFO: stdout: "2020-08-04T11:01:57.776219965Z I0804 11:01:57.776080       1 logs_generator.go:76] 15 POST /api/v1/namespaces/ns/pods/766b 394\n"
Aug  4 11:01:57.794: INFO: got output "2020-08-04T11:01:57.776219965Z I0804 11:01:57.776080       1 logs_generator.go:76] 15 POST /api/v1/namespaces/ns/pods/766b 394\n"
STEP: restricting to a time range
Aug  4 11:02:00.294: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2771 --since=1s'
Aug  4 11:02:00.407: INFO: stderr: ""
Aug  4 11:02:00.407: INFO: stdout: "I0804 11:01:59.576164       1 logs_generator.go:76] 24 POST /api/v1/namespaces/ns/pods/s5gj 358\nI0804 11:01:59.776181       1 logs_generator.go:76] 25 PUT /api/v1/namespaces/default/pods/fkv 514\nI0804 11:01:59.976173       1 logs_generator.go:76] 26 POST /api/v1/namespaces/ns/pods/nn4 412\nI0804 11:02:00.176173       1 logs_generator.go:76] 27 POST /api/v1/namespaces/kube-system/pods/qdg 220\nI0804 11:02:00.376215       1 logs_generator.go:76] 28 POST /api/v1/namespaces/kube-system/pods/8bs 496\n"
Aug  4 11:02:00.407: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2771 --since=24h'
Aug  4 11:02:00.514: INFO: stderr: ""
Aug  4 11:02:00.514: INFO: stdout: "I0804 11:01:54.776009       1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/hxp9 279\nI0804 11:01:54.976139       1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/h47 497\nI0804 11:01:55.176249       1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/tg4 403\nI0804 11:01:55.376142       1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/5k2 248\nI0804 11:01:55.576279       1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/svqn 207\nI0804 11:01:55.776165       1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/crzq 558\nI0804 11:01:55.976172       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/5x5 559\nI0804 11:01:56.176207       1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/2w2p 322\nI0804 11:01:56.376205       1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/h2r 589\nI0804 11:01:56.576185       1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/j9m 529\nI0804 11:01:56.776204       1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/c7sx 469\nI0804 11:01:56.976212       1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/f7b 545\nI0804 11:01:57.176146       1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/vq5x 259\nI0804 11:01:57.376198       1 logs_generator.go:76] 13 PUT /api/v1/namespaces/kube-system/pods/khjl 383\nI0804 11:01:57.576158       1 logs_generator.go:76] 14 POST /api/v1/namespaces/default/pods/95pc 471\nI0804 11:01:57.776080       1 logs_generator.go:76] 15 POST /api/v1/namespaces/ns/pods/766b 394\nI0804 11:01:57.976151       1 logs_generator.go:76] 16 POST /api/v1/namespaces/kube-system/pods/l4k 259\nI0804 11:01:58.176161       1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/8xwb 544\nI0804 11:01:58.376179       1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/2wx 312\nI0804 11:01:58.576169       1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/nhj 441\nI0804 11:01:58.776141       1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/qsj 233\nI0804 11:01:58.976164       1 logs_generator.go:76] 21 PUT /api/v1/namespaces/kube-system/pods/xz9b 291\nI0804 11:01:59.176194       1 logs_generator.go:76] 22 GET /api/v1/namespaces/ns/pods/f5k 268\nI0804 11:01:59.376177       1 logs_generator.go:76] 23 GET /api/v1/namespaces/kube-system/pods/8qs 280\nI0804 11:01:59.576164       1 logs_generator.go:76] 24 POST /api/v1/namespaces/ns/pods/s5gj 358\nI0804 11:01:59.776181       1 logs_generator.go:76] 25 PUT /api/v1/namespaces/default/pods/fkv 514\nI0804 11:01:59.976173       1 logs_generator.go:76] 26 POST /api/v1/namespaces/ns/pods/nn4 412\nI0804 11:02:00.176173       1 logs_generator.go:76] 27 POST /api/v1/namespaces/kube-system/pods/qdg 220\nI0804 11:02:00.376215       1 logs_generator.go:76] 28 POST /api/v1/namespaces/kube-system/pods/8bs 496\n"
[AfterEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1294
Aug  4 11:02:00.514: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-2771'
Aug  4 11:02:03.455: INFO: stderr: ""
Aug  4 11:02:03.455: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:02:03.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2771" for this suite.

• [SLOW TEST:13.593 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1284
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":275,"completed":117,"skipped":2214,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:02:03.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-projected-7k7f
STEP: Creating a pod to test atomic-volume-subpath
Aug  4 11:02:03.531: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-7k7f" in namespace "subpath-5630" to be "Succeeded or Failed"
Aug  4 11:02:03.577: INFO: Pod "pod-subpath-test-projected-7k7f": Phase="Pending", Reason="", readiness=false. Elapsed: 46.195767ms
Aug  4 11:02:05.582: INFO: Pod "pod-subpath-test-projected-7k7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050332078s
Aug  4 11:02:07.586: INFO: Pod "pod-subpath-test-projected-7k7f": Phase="Running", Reason="", readiness=true. Elapsed: 4.054511541s
Aug  4 11:02:09.590: INFO: Pod "pod-subpath-test-projected-7k7f": Phase="Running", Reason="", readiness=true. Elapsed: 6.058794002s
Aug  4 11:02:11.593: INFO: Pod "pod-subpath-test-projected-7k7f": Phase="Running", Reason="", readiness=true. Elapsed: 8.062033443s
Aug  4 11:02:13.598: INFO: Pod "pod-subpath-test-projected-7k7f": Phase="Running", Reason="", readiness=true. Elapsed: 10.066562313s
Aug  4 11:02:15.603: INFO: Pod "pod-subpath-test-projected-7k7f": Phase="Running", Reason="", readiness=true. Elapsed: 12.071351684s
Aug  4 11:02:17.775: INFO: Pod "pod-subpath-test-projected-7k7f": Phase="Running", Reason="", readiness=true. Elapsed: 14.243629075s
Aug  4 11:02:19.779: INFO: Pod "pod-subpath-test-projected-7k7f": Phase="Running", Reason="", readiness=true. Elapsed: 16.248109293s
Aug  4 11:02:21.783: INFO: Pod "pod-subpath-test-projected-7k7f": Phase="Running", Reason="", readiness=true. Elapsed: 18.251831192s
Aug  4 11:02:23.788: INFO: Pod "pod-subpath-test-projected-7k7f": Phase="Running", Reason="", readiness=true. Elapsed: 20.256542142s
Aug  4 11:02:25.792: INFO: Pod "pod-subpath-test-projected-7k7f": Phase="Running", Reason="", readiness=true. Elapsed: 22.260903314s
Aug  4 11:02:27.797: INFO: Pod "pod-subpath-test-projected-7k7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.26552837s
STEP: Saw pod success
Aug  4 11:02:27.797: INFO: Pod "pod-subpath-test-projected-7k7f" satisfied condition "Succeeded or Failed"
Aug  4 11:02:27.801: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-projected-7k7f container test-container-subpath-projected-7k7f: 
STEP: delete the pod
Aug  4 11:02:28.058: INFO: Waiting for pod pod-subpath-test-projected-7k7f to disappear
Aug  4 11:02:28.080: INFO: Pod pod-subpath-test-projected-7k7f no longer exists
STEP: Deleting pod pod-subpath-test-projected-7k7f
Aug  4 11:02:28.080: INFO: Deleting pod "pod-subpath-test-projected-7k7f" in namespace "subpath-5630"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:02:28.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5630" for this suite.

• [SLOW TEST:24.650 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":275,"completed":118,"skipped":2221,"failed":0}
SS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:02:28.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-fb046fa6-9849-42df-9316-39c497f0319f
STEP: Creating a pod to test consume secrets
Aug  4 11:02:28.244: INFO: Waiting up to 5m0s for pod "pod-secrets-ae258fbf-91cc-4952-82bb-ad754612aa4e" in namespace "secrets-8048" to be "Succeeded or Failed"
Aug  4 11:02:28.248: INFO: Pod "pod-secrets-ae258fbf-91cc-4952-82bb-ad754612aa4e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.409833ms
Aug  4 11:02:30.251: INFO: Pod "pod-secrets-ae258fbf-91cc-4952-82bb-ad754612aa4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00660417s
Aug  4 11:02:32.255: INFO: Pod "pod-secrets-ae258fbf-91cc-4952-82bb-ad754612aa4e": Phase="Running", Reason="", readiness=true. Elapsed: 4.010145464s
Aug  4 11:02:34.259: INFO: Pod "pod-secrets-ae258fbf-91cc-4952-82bb-ad754612aa4e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014433173s
STEP: Saw pod success
Aug  4 11:02:34.259: INFO: Pod "pod-secrets-ae258fbf-91cc-4952-82bb-ad754612aa4e" satisfied condition "Succeeded or Failed"
Aug  4 11:02:34.262: INFO: Trying to get logs from node kali-worker pod pod-secrets-ae258fbf-91cc-4952-82bb-ad754612aa4e container secret-volume-test: 
STEP: delete the pod
Aug  4 11:02:34.318: INFO: Waiting for pod pod-secrets-ae258fbf-91cc-4952-82bb-ad754612aa4e to disappear
Aug  4 11:02:34.355: INFO: Pod pod-secrets-ae258fbf-91cc-4952-82bb-ad754612aa4e no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:02:34.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8048" for this suite.

• [SLOW TEST:6.252 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":119,"skipped":2223,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:02:34.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Aug  4 11:02:34.503: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:02:53.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8262" for this suite.

• [SLOW TEST:19.124 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":275,"completed":120,"skipped":2232,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:02:53.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's args
Aug  4 11:02:53.543: INFO: Waiting up to 5m0s for pod "var-expansion-2c57c505-95ae-41d9-8fa7-587d6672a544" in namespace "var-expansion-849" to be "Succeeded or Failed"
Aug  4 11:02:53.548: INFO: Pod "var-expansion-2c57c505-95ae-41d9-8fa7-587d6672a544": Phase="Pending", Reason="", readiness=false. Elapsed: 4.610325ms
Aug  4 11:02:55.550: INFO: Pod "var-expansion-2c57c505-95ae-41d9-8fa7-587d6672a544": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007344064s
Aug  4 11:02:57.577: INFO: Pod "var-expansion-2c57c505-95ae-41d9-8fa7-587d6672a544": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034327102s
STEP: Saw pod success
Aug  4 11:02:57.577: INFO: Pod "var-expansion-2c57c505-95ae-41d9-8fa7-587d6672a544" satisfied condition "Succeeded or Failed"
Aug  4 11:02:57.581: INFO: Trying to get logs from node kali-worker2 pod var-expansion-2c57c505-95ae-41d9-8fa7-587d6672a544 container dapi-container: 
STEP: delete the pod
Aug  4 11:02:57.648: INFO: Waiting for pod var-expansion-2c57c505-95ae-41d9-8fa7-587d6672a544 to disappear
Aug  4 11:02:57.661: INFO: Pod var-expansion-2c57c505-95ae-41d9-8fa7-587d6672a544 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:02:57.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-849" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":275,"completed":121,"skipped":2243,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:02:57.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug  4 11:02:58.070: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Aug  4 11:03:00.999: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6157 create -f -'
Aug  4 11:03:04.341: INFO: stderr: ""
Aug  4 11:03:04.341: INFO: stdout: "e2e-test-crd-publish-openapi-4048-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Aug  4 11:03:04.341: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6157 delete e2e-test-crd-publish-openapi-4048-crds test-cr'
Aug  4 11:03:04.460: INFO: stderr: ""
Aug  4 11:03:04.460: INFO: stdout: "e2e-test-crd-publish-openapi-4048-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Aug  4 11:03:04.460: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6157 apply -f -'
Aug  4 11:03:04.724: INFO: stderr: ""
Aug  4 11:03:04.724: INFO: stdout: "e2e-test-crd-publish-openapi-4048-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Aug  4 11:03:04.724: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6157 delete e2e-test-crd-publish-openapi-4048-crds test-cr'
Aug  4 11:03:04.836: INFO: stderr: ""
Aug  4 11:03:04.837: INFO: stdout: "e2e-test-crd-publish-openapi-4048-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Aug  4 11:03:04.837: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4048-crds'
Aug  4 11:03:05.120: INFO: stderr: ""
Aug  4 11:03:05.120: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4048-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:03:07.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6157" for this suite.

• [SLOW TEST:9.381 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":275,"completed":122,"skipped":2249,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:03:07.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug  4 11:03:07.148: INFO: Waiting up to 5m0s for pod "pod-ce461f69-a081-4e47-a30a-75daac95e3a9" in namespace "emptydir-2626" to be "Succeeded or Failed"
Aug  4 11:03:07.150: INFO: Pod "pod-ce461f69-a081-4e47-a30a-75daac95e3a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.735479ms
Aug  4 11:03:09.155: INFO: Pod "pod-ce461f69-a081-4e47-a30a-75daac95e3a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007009056s
Aug  4 11:03:11.158: INFO: Pod "pod-ce461f69-a081-4e47-a30a-75daac95e3a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010369549s
STEP: Saw pod success
Aug  4 11:03:11.158: INFO: Pod "pod-ce461f69-a081-4e47-a30a-75daac95e3a9" satisfied condition "Succeeded or Failed"
Aug  4 11:03:11.160: INFO: Trying to get logs from node kali-worker2 pod pod-ce461f69-a081-4e47-a30a-75daac95e3a9 container test-container: 
STEP: delete the pod
Aug  4 11:03:11.209: INFO: Waiting for pod pod-ce461f69-a081-4e47-a30a-75daac95e3a9 to disappear
Aug  4 11:03:11.218: INFO: Pod pod-ce461f69-a081-4e47-a30a-75daac95e3a9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:03:11.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2626" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":123,"skipped":2270,"failed":0}
S
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:03:11.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-7281, will wait for the garbage collector to delete the pods
Aug  4 11:03:17.356: INFO: Deleting Job.batch foo took: 6.317224ms
Aug  4 11:03:17.657: INFO: Terminating Job.batch foo pods took: 300.368729ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:03:53.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7281" for this suite.

• [SLOW TEST:42.252 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":275,"completed":124,"skipped":2271,"failed":0}
SS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:03:53.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug  4 11:03:53.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
Aug  4 11:03:54.093: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-04T11:03:54Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-04T11:03:54Z]] name:name1 resourceVersion:6671473 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:7fe87d7f-929d-416b-b9dd-5a360e2bcab8] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Aug  4 11:04:04.100: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-04T11:04:04Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-04T11:04:04Z]] name:name2 resourceVersion:6671520 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:4599ce8b-2721-4f30-a88f-573cad5a6394] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Aug  4 11:04:14.108: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-04T11:03:54Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-04T11:04:14Z]] name:name1 resourceVersion:6671551 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:7fe87d7f-929d-416b-b9dd-5a360e2bcab8] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Aug  4 11:04:24.118: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-04T11:04:04Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-04T11:04:24Z]] name:name2 resourceVersion:6671582 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:4599ce8b-2721-4f30-a88f-573cad5a6394] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Aug  4 11:04:34.126: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-04T11:03:54Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-04T11:04:14Z]] name:name1 resourceVersion:6671612 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:7fe87d7f-929d-416b-b9dd-5a360e2bcab8] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Aug  4 11:04:44.135: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-04T11:04:04Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-04T11:04:24Z]] name:name2 resourceVersion:6671638 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:4599ce8b-2721-4f30-a88f-573cad5a6394] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:04:54.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-1792" for this suite.

• [SLOW TEST:61.181 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
    watch on custom resource definition objects [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":275,"completed":125,"skipped":2273,"failed":0}
SSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:04:54.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Aug  4 11:04:54.753: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug  4 11:04:54.761: INFO: Waiting for terminating namespaces to be deleted...
Aug  4 11:04:54.763: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
Aug  4 11:04:54.780: INFO: rally-824618b1-6cukkjuh-lb7rq from c-rally-824618b1-4lzsfcdd started at 2020-08-01 10:57:26 +0000 UTC (1 container statuses recorded)
Aug  4 11:04:54.780: INFO: 	Container rally-824618b1-6cukkjuh ready: true, restart count 3
Aug  4 11:04:54.780: INFO: rally-19e4df10-30wkw9yu-glqpf from c-rally-19e4df10-7fs771wk started at 2020-08-01 11:12:55 +0000 UTC (1 container statuses recorded)
Aug  4 11:04:54.780: INFO: 	Container rally-19e4df10-30wkw9yu ready: true, restart count 0
Aug  4 11:04:54.780: INFO: kindnet-njbgt from kube-system started at 2020-07-10 10:28:30 +0000 UTC (1 container statuses recorded)
Aug  4 11:04:54.780: INFO: 	Container kindnet-cni ready: true, restart count 1
Aug  4 11:04:54.780: INFO: kube-proxy-qwsfx from kube-system started at 2020-07-10 10:28:31 +0000 UTC (1 container statuses recorded)
Aug  4 11:04:54.780: INFO: 	Container kube-proxy ready: true, restart count 0
Aug  4 11:04:54.780: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
Aug  4 11:04:54.800: INFO: rally-824618b1-6cukkjuh-m84l4 from c-rally-824618b1-4lzsfcdd started at 2020-08-01 10:57:24 +0000 UTC (1 container statuses recorded)
Aug  4 11:04:54.800: INFO: 	Container rally-824618b1-6cukkjuh ready: true, restart count 3
Aug  4 11:04:54.800: INFO: kube-proxy-vk6jr from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded)
Aug  4 11:04:54.800: INFO: 	Container kube-proxy ready: true, restart count 0
Aug  4 11:04:54.800: INFO: kindnet-pk4xb from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded)
Aug  4 11:04:54.800: INFO: 	Container kindnet-cni ready: true, restart count 1
Aug  4 11:04:54.800: INFO: rally-19e4df10-30wkw9yu-qbmr7 from c-rally-19e4df10-7fs771wk started at 2020-08-01 11:12:55 +0000 UTC (1 container statuses recorded)
Aug  4 11:04:54.800: INFO: 	Container rally-19e4df10-30wkw9yu ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.16280bd910ec9c5e], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.16280bd912a7e816], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:04:55.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9910" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":275,"completed":126,"skipped":2277,"failed":0}
S
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:04:55.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:04:56.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8492" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":275,"completed":127,"skipped":2278,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:04:56.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug  4 11:04:56.318: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b2538e1d-7eab-44f0-ae78-3e3bd6438fd0" in namespace "downward-api-6848" to be "Succeeded or Failed"
Aug  4 11:04:56.338: INFO: Pod "downwardapi-volume-b2538e1d-7eab-44f0-ae78-3e3bd6438fd0": Phase="Pending", Reason="", readiness=false. Elapsed: 20.128423ms
Aug  4 11:04:58.343: INFO: Pod "downwardapi-volume-b2538e1d-7eab-44f0-ae78-3e3bd6438fd0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024510986s
Aug  4 11:05:00.346: INFO: Pod "downwardapi-volume-b2538e1d-7eab-44f0-ae78-3e3bd6438fd0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028289159s
STEP: Saw pod success
Aug  4 11:05:00.346: INFO: Pod "downwardapi-volume-b2538e1d-7eab-44f0-ae78-3e3bd6438fd0" satisfied condition "Succeeded or Failed"
Aug  4 11:05:00.349: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-b2538e1d-7eab-44f0-ae78-3e3bd6438fd0 container client-container: 
STEP: delete the pod
Aug  4 11:05:00.382: INFO: Waiting for pod downwardapi-volume-b2538e1d-7eab-44f0-ae78-3e3bd6438fd0 to disappear
Aug  4 11:05:00.394: INFO: Pod downwardapi-volume-b2538e1d-7eab-44f0-ae78-3e3bd6438fd0 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:05:00.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6848" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":128,"skipped":2290,"failed":0}
SSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:05:00.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug  4 11:05:00.507: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Aug  4 11:05:02.589: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:05:03.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-38" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":275,"completed":129,"skipped":2295,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:05:03.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug  4 11:05:04.519: INFO: Create a RollingUpdate DaemonSet
Aug  4 11:05:04.523: INFO: Check that daemon pods launch on every node of the cluster
Aug  4 11:05:04.628: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:05:04.656: INFO: Number of nodes with available pods: 0
Aug  4 11:05:04.656: INFO: Node kali-worker is running more than one daemon pod
Aug  4 11:05:05.663: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:05:05.667: INFO: Number of nodes with available pods: 0
Aug  4 11:05:05.667: INFO: Node kali-worker is running more than one daemon pod
Aug  4 11:05:06.678: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:05:06.701: INFO: Number of nodes with available pods: 0
Aug  4 11:05:06.701: INFO: Node kali-worker is running more than one daemon pod
Aug  4 11:05:07.666: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:05:07.669: INFO: Number of nodes with available pods: 0
Aug  4 11:05:07.669: INFO: Node kali-worker is running more than one daemon pod
Aug  4 11:05:08.772: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:05:08.776: INFO: Number of nodes with available pods: 0
Aug  4 11:05:08.776: INFO: Node kali-worker is running more than one daemon pod
Aug  4 11:05:09.916: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:05:09.989: INFO: Number of nodes with available pods: 1
Aug  4 11:05:09.989: INFO: Node kali-worker is running more than one daemon pod
Aug  4 11:05:10.719: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:05:10.725: INFO: Number of nodes with available pods: 1
Aug  4 11:05:10.725: INFO: Node kali-worker is running more than one daemon pod
Aug  4 11:05:11.676: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:05:11.682: INFO: Number of nodes with available pods: 2
Aug  4 11:05:11.682: INFO: Number of running nodes: 2, number of available pods: 2
Aug  4 11:05:11.682: INFO: Update the DaemonSet to trigger a rollout
Aug  4 11:05:12.095: INFO: Updating DaemonSet daemon-set
Aug  4 11:05:23.459: INFO: Roll back the DaemonSet before rollout is complete
Aug  4 11:05:23.492: INFO: Updating DaemonSet daemon-set
Aug  4 11:05:23.492: INFO: Make sure DaemonSet rollback is complete
Aug  4 11:05:23.507: INFO: Wrong image for pod: daemon-set-2ctvc. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug  4 11:05:23.507: INFO: Pod daemon-set-2ctvc is not available
Aug  4 11:05:23.530: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:05:24.534: INFO: Wrong image for pod: daemon-set-2ctvc. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug  4 11:05:24.534: INFO: Pod daemon-set-2ctvc is not available
Aug  4 11:05:24.537: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:05:25.536: INFO: Wrong image for pod: daemon-set-2ctvc. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug  4 11:05:25.536: INFO: Pod daemon-set-2ctvc is not available
Aug  4 11:05:25.540: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:05:26.534: INFO: Wrong image for pod: daemon-set-2ctvc. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug  4 11:05:26.534: INFO: Pod daemon-set-2ctvc is not available
Aug  4 11:05:26.537: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:05:27.535: INFO: Wrong image for pod: daemon-set-2ctvc. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug  4 11:05:27.535: INFO: Pod daemon-set-2ctvc is not available
Aug  4 11:05:27.538: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:05:29.155: INFO: Pod daemon-set-s5cmm is not available
Aug  4 11:05:29.199: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4977, will wait for the garbage collector to delete the pods
Aug  4 11:05:30.333: INFO: Deleting DaemonSet.extensions daemon-set took: 30.155652ms
Aug  4 11:05:33.033: INFO: Terminating DaemonSet.extensions daemon-set pods took: 2.700253143s
Aug  4 11:05:43.447: INFO: Number of nodes with available pods: 0
Aug  4 11:05:43.447: INFO: Number of running nodes: 0, number of available pods: 0
Aug  4 11:05:43.450: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4977/daemonsets","resourceVersion":"6671998"},"items":null}

Aug  4 11:05:43.452: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4977/pods","resourceVersion":"6671998"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:05:43.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4977" for this suite.

• [SLOW TEST:39.822 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":275,"completed":130,"skipped":2321,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:05:43.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:05:47.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2555" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":131,"skipped":2330,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:05:47.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug  4 11:05:47.783: INFO: Waiting up to 5m0s for pod "downwardapi-volume-291108da-21b3-497a-b94d-d4bdae23d3d8" in namespace "downward-api-8062" to be "Succeeded or Failed"
Aug  4 11:05:47.802: INFO: Pod "downwardapi-volume-291108da-21b3-497a-b94d-d4bdae23d3d8": Phase="Pending", Reason="", readiness=false. Elapsed: 18.574253ms
Aug  4 11:05:49.844: INFO: Pod "downwardapi-volume-291108da-21b3-497a-b94d-d4bdae23d3d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060255274s
Aug  4 11:05:52.022: INFO: Pod "downwardapi-volume-291108da-21b3-497a-b94d-d4bdae23d3d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.238878206s
Aug  4 11:05:54.145: INFO: Pod "downwardapi-volume-291108da-21b3-497a-b94d-d4bdae23d3d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.361970156s
STEP: Saw pod success
Aug  4 11:05:54.145: INFO: Pod "downwardapi-volume-291108da-21b3-497a-b94d-d4bdae23d3d8" satisfied condition "Succeeded or Failed"
Aug  4 11:05:54.262: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-291108da-21b3-497a-b94d-d4bdae23d3d8 container client-container: 
STEP: delete the pod
Aug  4 11:05:54.319: INFO: Waiting for pod downwardapi-volume-291108da-21b3-497a-b94d-d4bdae23d3d8 to disappear
Aug  4 11:05:54.337: INFO: Pod downwardapi-volume-291108da-21b3-497a-b94d-d4bdae23d3d8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:05:54.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8062" for this suite.

• [SLOW TEST:6.636 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":132,"skipped":2347,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:05:54.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-3bed993a-e716-4383-bb83-7b8d41368f3d
STEP: Creating secret with name s-test-opt-upd-86b7e86d-e0b1-458b-b40e-b8e97711438b
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-3bed993a-e716-4383-bb83-7b8d41368f3d
STEP: Updating secret s-test-opt-upd-86b7e86d-e0b1-458b-b40e-b8e97711438b
STEP: Creating secret with name s-test-opt-create-6e16708a-c19e-4a40-835d-818377f0a0cd
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:06:02.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3476" for this suite.

• [SLOW TEST:8.495 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":133,"skipped":2399,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should patch a secret [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:06:02.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a secret [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a secret
STEP: listing secrets in all namespaces to ensure that there are more than zero
STEP: patching the secret
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:06:03.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4691" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":275,"completed":134,"skipped":2417,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:06:03.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating replication controller my-hostname-basic-949c10c3-61ef-4dc3-bcd2-1a6d94697770
Aug  4 11:06:03.227: INFO: Pod name my-hostname-basic-949c10c3-61ef-4dc3-bcd2-1a6d94697770: Found 0 pods out of 1
Aug  4 11:06:08.236: INFO: Pod name my-hostname-basic-949c10c3-61ef-4dc3-bcd2-1a6d94697770: Found 1 pods out of 1
Aug  4 11:06:08.236: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-949c10c3-61ef-4dc3-bcd2-1a6d94697770" are running
Aug  4 11:06:08.240: INFO: Pod "my-hostname-basic-949c10c3-61ef-4dc3-bcd2-1a6d94697770-48sjw" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-04 11:06:03 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-04 11:06:06 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-04 11:06:06 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-04 11:06:03 +0000 UTC Reason: Message:}])
Aug  4 11:06:08.240: INFO: Trying to dial the pod
Aug  4 11:06:13.251: INFO: Controller my-hostname-basic-949c10c3-61ef-4dc3-bcd2-1a6d94697770: Got expected result from replica 1 [my-hostname-basic-949c10c3-61ef-4dc3-bcd2-1a6d94697770-48sjw]: "my-hostname-basic-949c10c3-61ef-4dc3-bcd2-1a6d94697770-48sjw", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:06:13.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7118" for this suite.

• [SLOW TEST:10.225 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":275,"completed":135,"skipped":2435,"failed":0}
S
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:06:13.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap that has name configmap-test-emptyKey-08e01b3c-beaf-42ce-9ee5-e00f17f61cb4
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:06:13.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6740" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":275,"completed":136,"skipped":2436,"failed":0}
SSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:06:13.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod busybox-1a59e3d7-b6a2-44a7-ae1b-71d814965aa7 in namespace container-probe-6567
Aug  4 11:06:19.522: INFO: Started pod busybox-1a59e3d7-b6a2-44a7-ae1b-71d814965aa7 in namespace container-probe-6567
STEP: checking the pod's current state and verifying that restartCount is present
Aug  4 11:06:19.525: INFO: Initial restart count of pod busybox-1a59e3d7-b6a2-44a7-ae1b-71d814965aa7 is 0
Aug  4 11:07:13.717: INFO: Restart count of pod container-probe-6567/busybox-1a59e3d7-b6a2-44a7-ae1b-71d814965aa7 is now 1 (54.192232626s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:07:13.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6567" for this suite.

• [SLOW TEST:60.324 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":137,"skipped":2442,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:07:13.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-3ff497f8-53e8-4dd5-8abf-d76ad14dcec5
STEP: Creating a pod to test consume configMaps
Aug  4 11:07:13.855: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3e4e9b26-bd99-47de-bc10-bd30befae593" in namespace "projected-993" to be "Succeeded or Failed"
Aug  4 11:07:13.878: INFO: Pod "pod-projected-configmaps-3e4e9b26-bd99-47de-bc10-bd30befae593": Phase="Pending", Reason="", readiness=false. Elapsed: 22.411598ms
Aug  4 11:07:15.882: INFO: Pod "pod-projected-configmaps-3e4e9b26-bd99-47de-bc10-bd30befae593": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026525221s
Aug  4 11:07:17.886: INFO: Pod "pod-projected-configmaps-3e4e9b26-bd99-47de-bc10-bd30befae593": Phase="Running", Reason="", readiness=true. Elapsed: 4.030945294s
Aug  4 11:07:19.891: INFO: Pod "pod-projected-configmaps-3e4e9b26-bd99-47de-bc10-bd30befae593": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.035584477s
STEP: Saw pod success
Aug  4 11:07:19.891: INFO: Pod "pod-projected-configmaps-3e4e9b26-bd99-47de-bc10-bd30befae593" satisfied condition "Succeeded or Failed"
Aug  4 11:07:19.894: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-3e4e9b26-bd99-47de-bc10-bd30befae593 container projected-configmap-volume-test: 
STEP: delete the pod
Aug  4 11:07:19.952: INFO: Waiting for pod pod-projected-configmaps-3e4e9b26-bd99-47de-bc10-bd30befae593 to disappear
Aug  4 11:07:19.961: INFO: Pod pod-projected-configmaps-3e4e9b26-bd99-47de-bc10-bd30befae593 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:07:19.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-993" for this suite.

• [SLOW TEST:6.205 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":138,"skipped":2472,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:07:19.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:157
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:07:20.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1734" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":275,"completed":139,"skipped":2504,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:07:20.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Aug  4 11:07:21.813: INFO: Pod name wrapped-volume-race-cc8f270a-b15b-455f-b73c-7652d12ac024: Found 0 pods out of 5
Aug  4 11:07:26.828: INFO: Pod name wrapped-volume-race-cc8f270a-b15b-455f-b73c-7652d12ac024: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-cc8f270a-b15b-455f-b73c-7652d12ac024 in namespace emptydir-wrapper-3962, will wait for the garbage collector to delete the pods
Aug  4 11:07:40.909: INFO: Deleting ReplicationController wrapped-volume-race-cc8f270a-b15b-455f-b73c-7652d12ac024 took: 8.816047ms
Aug  4 11:07:41.209: INFO: Terminating ReplicationController wrapped-volume-race-cc8f270a-b15b-455f-b73c-7652d12ac024 pods took: 300.342403ms
STEP: Creating RC which spawns configmap-volume pods
Aug  4 11:07:53.738: INFO: Pod name wrapped-volume-race-33523586-14ec-4eae-947b-9b5f4b11802c: Found 0 pods out of 5
Aug  4 11:07:58.747: INFO: Pod name wrapped-volume-race-33523586-14ec-4eae-947b-9b5f4b11802c: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-33523586-14ec-4eae-947b-9b5f4b11802c in namespace emptydir-wrapper-3962, will wait for the garbage collector to delete the pods
Aug  4 11:08:14.828: INFO: Deleting ReplicationController wrapped-volume-race-33523586-14ec-4eae-947b-9b5f4b11802c took: 7.202263ms
Aug  4 11:08:15.129: INFO: Terminating ReplicationController wrapped-volume-race-33523586-14ec-4eae-947b-9b5f4b11802c pods took: 300.263831ms
STEP: Creating RC which spawns configmap-volume pods
Aug  4 11:08:33.460: INFO: Pod name wrapped-volume-race-3415d885-863c-4d01-b421-2afe6f1671dd: Found 0 pods out of 5
Aug  4 11:08:38.469: INFO: Pod name wrapped-volume-race-3415d885-863c-4d01-b421-2afe6f1671dd: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-3415d885-863c-4d01-b421-2afe6f1671dd in namespace emptydir-wrapper-3962, will wait for the garbage collector to delete the pods
Aug  4 11:08:56.557: INFO: Deleting ReplicationController wrapped-volume-race-3415d885-863c-4d01-b421-2afe6f1671dd took: 11.956909ms
Aug  4 11:08:56.957: INFO: Terminating ReplicationController wrapped-volume-race-3415d885-863c-4d01-b421-2afe6f1671dd pods took: 400.320093ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:09:15.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-3962" for this suite.

• [SLOW TEST:115.019 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":275,"completed":140,"skipped":2511,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:09:15.144: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override all
Aug  4 11:09:15.230: INFO: Waiting up to 5m0s for pod "client-containers-a584fbb9-e2c3-40c1-9bdd-8cae671f24fa" in namespace "containers-2169" to be "Succeeded or Failed"
Aug  4 11:09:15.233: INFO: Pod "client-containers-a584fbb9-e2c3-40c1-9bdd-8cae671f24fa": Phase="Pending", Reason="", readiness=false. Elapsed: 3.521285ms
Aug  4 11:09:17.238: INFO: Pod "client-containers-a584fbb9-e2c3-40c1-9bdd-8cae671f24fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007848472s
Aug  4 11:09:19.242: INFO: Pod "client-containers-a584fbb9-e2c3-40c1-9bdd-8cae671f24fa": Phase="Running", Reason="", readiness=true. Elapsed: 4.01262598s
Aug  4 11:09:21.251: INFO: Pod "client-containers-a584fbb9-e2c3-40c1-9bdd-8cae671f24fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021625142s
STEP: Saw pod success
Aug  4 11:09:21.251: INFO: Pod "client-containers-a584fbb9-e2c3-40c1-9bdd-8cae671f24fa" satisfied condition "Succeeded or Failed"
Aug  4 11:09:21.261: INFO: Trying to get logs from node kali-worker2 pod client-containers-a584fbb9-e2c3-40c1-9bdd-8cae671f24fa container test-container: 
STEP: delete the pod
Aug  4 11:09:21.327: INFO: Waiting for pod client-containers-a584fbb9-e2c3-40c1-9bdd-8cae671f24fa to disappear
Aug  4 11:09:21.333: INFO: Pod client-containers-a584fbb9-e2c3-40c1-9bdd-8cae671f24fa no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:09:21.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2169" for this suite.

• [SLOW TEST:6.202 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":275,"completed":141,"skipped":2524,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:09:21.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug  4 11:09:21.976: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug  4 11:09:24.772: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136161, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136161, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136162, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136161, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug  4 11:09:26.777: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136161, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136161, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136162, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136161, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug  4 11:09:29.832: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug  4 11:09:29.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9534-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:09:30.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3515" for this suite.
STEP: Destroying namespace "webhook-3515-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.762 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":275,"completed":142,"skipped":2530,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:09:31.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0804 11:10:12.373757       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug  4 11:10:12.373: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:10:12.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8182" for this suite.

• [SLOW TEST:41.272 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":275,"completed":143,"skipped":2541,"failed":0}
SSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:10:12.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating server pod server in namespace prestop-8444
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-8444
STEP: Deleting pre-stop pod
Aug  4 11:10:27.568: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:10:27.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-8444" for this suite.

• [SLOW TEST:15.248 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":275,"completed":144,"skipped":2547,"failed":0}
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:10:27.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-3140
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-3140
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3140
Aug  4 11:10:28.096: INFO: Found 0 stateful pods, waiting for 1
Aug  4 11:10:38.100: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Aug  4 11:10:38.102: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug  4 11:10:38.352: INFO: stderr: "I0804 11:10:38.233617    1394 log.go:172] (0xc0008fe000) (0xc000954000) Create stream\nI0804 11:10:38.233691    1394 log.go:172] (0xc0008fe000) (0xc000954000) Stream added, broadcasting: 1\nI0804 11:10:38.237194    1394 log.go:172] (0xc0008fe000) Reply frame received for 1\nI0804 11:10:38.237260    1394 log.go:172] (0xc0008fe000) (0xc0007d3540) Create stream\nI0804 11:10:38.237296    1394 log.go:172] (0xc0008fe000) (0xc0007d3540) Stream added, broadcasting: 3\nI0804 11:10:38.238403    1394 log.go:172] (0xc0008fe000) Reply frame received for 3\nI0804 11:10:38.238478    1394 log.go:172] (0xc0008fe000) (0xc0009540a0) Create stream\nI0804 11:10:38.238511    1394 log.go:172] (0xc0008fe000) (0xc0009540a0) Stream added, broadcasting: 5\nI0804 11:10:38.239476    1394 log.go:172] (0xc0008fe000) Reply frame received for 5\nI0804 11:10:38.314859    1394 log.go:172] (0xc0008fe000) Data frame received for 5\nI0804 11:10:38.314887    1394 log.go:172] (0xc0009540a0) (5) Data frame handling\nI0804 11:10:38.314904    1394 log.go:172] (0xc0009540a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0804 11:10:38.343126    1394 log.go:172] (0xc0008fe000) Data frame received for 3\nI0804 11:10:38.343166    1394 log.go:172] (0xc0007d3540) (3) Data frame handling\nI0804 11:10:38.343199    1394 log.go:172] (0xc0007d3540) (3) Data frame sent\nI0804 11:10:38.343436    1394 log.go:172] (0xc0008fe000) Data frame received for 5\nI0804 11:10:38.343449    1394 log.go:172] (0xc0009540a0) (5) Data frame handling\nI0804 11:10:38.343522    1394 log.go:172] (0xc0008fe000) Data frame received for 3\nI0804 11:10:38.343561    1394 log.go:172] (0xc0007d3540) (3) Data frame handling\nI0804 11:10:38.345749    1394 log.go:172] (0xc0008fe000) Data frame received for 1\nI0804 11:10:38.345776    1394 log.go:172] (0xc000954000) (1) Data frame handling\nI0804 11:10:38.345816    1394 log.go:172] (0xc000954000) (1) Data frame sent\nI0804 11:10:38.345844    1394 log.go:172] (0xc0008fe000) (0xc000954000) Stream removed, broadcasting: 1\nI0804 11:10:38.345909    1394 log.go:172] (0xc0008fe000) Go away received\nI0804 11:10:38.346393    1394 log.go:172] (0xc0008fe000) (0xc000954000) Stream removed, broadcasting: 1\nI0804 11:10:38.346428    1394 log.go:172] (0xc0008fe000) (0xc0007d3540) Stream removed, broadcasting: 3\nI0804 11:10:38.346448    1394 log.go:172] (0xc0008fe000) (0xc0009540a0) Stream removed, broadcasting: 5\n"
Aug  4 11:10:38.352: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug  4 11:10:38.352: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug  4 11:10:38.363: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Aug  4 11:10:48.368: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug  4 11:10:48.368: INFO: Waiting for statefulset status.replicas updated to 0
Aug  4 11:10:48.381: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999373s
Aug  4 11:10:49.426: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.996952479s
Aug  4 11:10:50.429: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.952254873s
Aug  4 11:10:51.461: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.948602245s
Aug  4 11:10:52.466: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.916605949s
Aug  4 11:10:53.469: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.912475873s
Aug  4 11:10:54.474: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.908748057s
Aug  4 11:10:55.478: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.903813772s
Aug  4 11:10:56.484: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.899590609s
Aug  4 11:10:57.489: INFO: Verifying statefulset ss doesn't scale past 1 for another 893.603024ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3140
Aug  4 11:10:58.494: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug  4 11:10:58.735: INFO: stderr: "I0804 11:10:58.621203    1417 log.go:172] (0xc00003b080) (0xc0003e75e0) Create stream\nI0804 11:10:58.621263    1417 log.go:172] (0xc00003b080) (0xc0003e75e0) Stream added, broadcasting: 1\nI0804 11:10:58.627315    1417 log.go:172] (0xc00003b080) Reply frame received for 1\nI0804 11:10:58.627363    1417 log.go:172] (0xc00003b080) (0xc000878000) Create stream\nI0804 11:10:58.627379    1417 log.go:172] (0xc00003b080) (0xc000878000) Stream added, broadcasting: 3\nI0804 11:10:58.629250    1417 log.go:172] (0xc00003b080) Reply frame received for 3\nI0804 11:10:58.629335    1417 log.go:172] (0xc00003b080) (0xc00039e000) Create stream\nI0804 11:10:58.629356    1417 log.go:172] (0xc00003b080) (0xc00039e000) Stream added, broadcasting: 5\nI0804 11:10:58.630283    1417 log.go:172] (0xc00003b080) Reply frame received for 5\nI0804 11:10:58.727585    1417 log.go:172] (0xc00003b080) Data frame received for 5\nI0804 11:10:58.727646    1417 log.go:172] (0xc00039e000) (5) Data frame handling\nI0804 11:10:58.727662    1417 log.go:172] (0xc00039e000) (5) Data frame sent\nI0804 11:10:58.727671    1417 log.go:172] (0xc00003b080) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0804 11:10:58.727675    1417 log.go:172] (0xc00039e000) (5) Data frame handling\nI0804 11:10:58.727734    1417 log.go:172] (0xc00003b080) Data frame received for 3\nI0804 11:10:58.727749    1417 log.go:172] (0xc000878000) (3) Data frame handling\nI0804 11:10:58.727759    1417 log.go:172] (0xc000878000) (3) Data frame sent\nI0804 11:10:58.727767    1417 log.go:172] (0xc00003b080) Data frame received for 3\nI0804 11:10:58.727773    1417 log.go:172] (0xc000878000) (3) Data frame handling\nI0804 11:10:58.729323    1417 log.go:172] (0xc00003b080) Data frame received for 1\nI0804 11:10:58.729373    1417 log.go:172] (0xc0003e75e0) (1) Data frame handling\nI0804 11:10:58.729409    1417 log.go:172] (0xc0003e75e0) (1) Data frame sent\nI0804 11:10:58.729436    1417 log.go:172] (0xc00003b080) (0xc0003e75e0) Stream removed, broadcasting: 1\nI0804 11:10:58.729613    1417 log.go:172] (0xc00003b080) Go away received\nI0804 11:10:58.729995    1417 log.go:172] (0xc00003b080) (0xc0003e75e0) Stream removed, broadcasting: 1\nI0804 11:10:58.730029    1417 log.go:172] (0xc00003b080) (0xc000878000) Stream removed, broadcasting: 3\nI0804 11:10:58.730049    1417 log.go:172] (0xc00003b080) (0xc00039e000) Stream removed, broadcasting: 5\n"
Aug  4 11:10:58.735: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug  4 11:10:58.735: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug  4 11:10:58.739: INFO: Found 1 stateful pods, waiting for 3
Aug  4 11:11:08.745: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug  4 11:11:08.745: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug  4 11:11:08.745: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Aug  4 11:11:08.751: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug  4 11:11:08.984: INFO: stderr: "I0804 11:11:08.881851    1438 log.go:172] (0xc000a7ca50) (0xc0006e1540) Create stream\nI0804 11:11:08.881899    1438 log.go:172] (0xc000a7ca50) (0xc0006e1540) Stream added, broadcasting: 1\nI0804 11:11:08.884663    1438 log.go:172] (0xc000a7ca50) Reply frame received for 1\nI0804 11:11:08.884685    1438 log.go:172] (0xc000a7ca50) (0xc000a32000) Create stream\nI0804 11:11:08.884693    1438 log.go:172] (0xc000a7ca50) (0xc000a32000) Stream added, broadcasting: 3\nI0804 11:11:08.885863    1438 log.go:172] (0xc000a7ca50) Reply frame received for 3\nI0804 11:11:08.885887    1438 log.go:172] (0xc000a7ca50) (0xc000a320a0) Create stream\nI0804 11:11:08.885894    1438 log.go:172] (0xc000a7ca50) (0xc000a320a0) Stream added, broadcasting: 5\nI0804 11:11:08.886962    1438 log.go:172] (0xc000a7ca50) Reply frame received for 5\nI0804 11:11:08.976818    1438 log.go:172] (0xc000a7ca50) Data frame received for 5\nI0804 11:11:08.976853    1438 log.go:172] (0xc000a320a0) (5) Data frame handling\nI0804 11:11:08.976865    1438 log.go:172] (0xc000a320a0) (5) Data frame sent\nI0804 11:11:08.976873    1438 log.go:172] (0xc000a7ca50) Data frame received for 5\nI0804 11:11:08.976897    1438 log.go:172] (0xc000a320a0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0804 11:11:08.976918    1438 log.go:172] (0xc000a7ca50) Data frame received for 3\nI0804 11:11:08.976925    1438 log.go:172] (0xc000a32000) (3) Data frame handling\nI0804 11:11:08.976937    1438 log.go:172] (0xc000a32000) (3) Data frame sent\nI0804 11:11:08.977170    1438 log.go:172] (0xc000a7ca50) Data frame received for 3\nI0804 11:11:08.977197    1438 log.go:172] (0xc000a32000) (3) Data frame handling\nI0804 11:11:08.978912    1438 log.go:172] (0xc000a7ca50) Data frame received for 1\nI0804 11:11:08.978934    1438 log.go:172] (0xc0006e1540) (1) Data frame handling\nI0804 11:11:08.978945    1438 log.go:172] (0xc0006e1540) (1) Data frame sent\nI0804 11:11:08.978958    1438 log.go:172] (0xc000a7ca50) (0xc0006e1540) Stream removed, broadcasting: 1\nI0804 11:11:08.978974    1438 log.go:172] (0xc000a7ca50) Go away received\nI0804 11:11:08.979397    1438 log.go:172] (0xc000a7ca50) (0xc0006e1540) Stream removed, broadcasting: 1\nI0804 11:11:08.979416    1438 log.go:172] (0xc000a7ca50) (0xc000a32000) Stream removed, broadcasting: 3\nI0804 11:11:08.979424    1438 log.go:172] (0xc000a7ca50) (0xc000a320a0) Stream removed, broadcasting: 5\n"
Aug  4 11:11:08.984: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug  4 11:11:08.984: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug  4 11:11:08.984: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug  4 11:11:09.246: INFO: stderr: "I0804 11:11:09.115602    1461 log.go:172] (0xc0009b0790) (0xc0003fac80) Create stream\nI0804 11:11:09.115671    1461 log.go:172] (0xc0009b0790) (0xc0003fac80) Stream added, broadcasting: 1\nI0804 11:11:09.119059    1461 log.go:172] (0xc0009b0790) Reply frame received for 1\nI0804 11:11:09.119104    1461 log.go:172] (0xc0009b0790) (0xc0006cd400) Create stream\nI0804 11:11:09.119115    1461 log.go:172] (0xc0009b0790) (0xc0006cd400) Stream added, broadcasting: 3\nI0804 11:11:09.120221    1461 log.go:172] (0xc0009b0790) Reply frame received for 3\nI0804 11:11:09.120291    1461 log.go:172] (0xc0009b0790) (0xc0006cd5e0) Create stream\nI0804 11:11:09.120328    1461 log.go:172] (0xc0009b0790) (0xc0006cd5e0) Stream added, broadcasting: 5\nI0804 11:11:09.121479    1461 log.go:172] (0xc0009b0790) Reply frame received for 5\nI0804 11:11:09.197458    1461 log.go:172] (0xc0009b0790) Data frame received for 5\nI0804 11:11:09.197506    1461 log.go:172] (0xc0006cd5e0) (5) Data frame handling\nI0804 11:11:09.197545    1461 log.go:172] (0xc0006cd5e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0804 11:11:09.238501    1461 log.go:172] (0xc0009b0790) Data frame received for 3\nI0804 11:11:09.238529    1461 log.go:172] (0xc0006cd400) (3) Data frame handling\nI0804 11:11:09.238544    1461 log.go:172] (0xc0006cd400) (3) Data frame sent\nI0804 11:11:09.238562    1461 log.go:172] (0xc0009b0790) Data frame received for 3\nI0804 11:11:09.238573    1461 log.go:172] (0xc0006cd400) (3) Data frame handling\nI0804 11:11:09.238674    1461 log.go:172] (0xc0009b0790) Data frame received for 5\nI0804 11:11:09.238693    1461 log.go:172] (0xc0006cd5e0) (5) Data frame handling\nI0804 11:11:09.240507    1461 log.go:172] (0xc0009b0790) Data frame received for 1\nI0804 11:11:09.240518    1461 log.go:172] (0xc0003fac80) (1) Data frame handling\nI0804 11:11:09.240527    1461 log.go:172] (0xc0003fac80) (1) Data frame sent\nI0804 11:11:09.240535    1461 log.go:172] (0xc0009b0790) (0xc0003fac80) Stream removed, broadcasting: 1\nI0804 11:11:09.240544    1461 log.go:172] (0xc0009b0790) Go away received\nI0804 11:11:09.240980    1461 log.go:172] (0xc0009b0790) (0xc0003fac80) Stream removed, broadcasting: 1\nI0804 11:11:09.241011    1461 log.go:172] (0xc0009b0790) (0xc0006cd400) Stream removed, broadcasting: 3\nI0804 11:11:09.241021    1461 log.go:172] (0xc0009b0790) (0xc0006cd5e0) Stream removed, broadcasting: 5\n"
Aug  4 11:11:09.246: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug  4 11:11:09.246: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug  4 11:11:09.246: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug  4 11:11:09.540: INFO: stderr: "I0804 11:11:09.414925    1482 log.go:172] (0xc000961760) (0xc000930780) Create stream\nI0804 11:11:09.415006    1482 log.go:172] (0xc000961760) (0xc000930780) Stream added, broadcasting: 1\nI0804 11:11:09.421077    1482 log.go:172] (0xc000961760) Reply frame received for 1\nI0804 11:11:09.421150    1482 log.go:172] (0xc000961760) (0xc00063b5e0) Create stream\nI0804 11:11:09.421175    1482 log.go:172] (0xc000961760) (0xc00063b5e0) Stream added, broadcasting: 3\nI0804 11:11:09.422317    1482 log.go:172] (0xc000961760) Reply frame received for 3\nI0804 11:11:09.422369    1482 log.go:172] (0xc000961760) (0xc000528a00) Create stream\nI0804 11:11:09.422385    1482 log.go:172] (0xc000961760) (0xc000528a00) Stream added, broadcasting: 5\nI0804 11:11:09.423347    1482 log.go:172] (0xc000961760) Reply frame received for 5\nI0804 11:11:09.477296    1482 log.go:172] (0xc000961760) Data frame received for 5\nI0804 11:11:09.477325    1482 log.go:172] (0xc000528a00) (5) Data frame handling\nI0804 11:11:09.477344    1482 log.go:172] (0xc000528a00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0804 11:11:09.530998    1482 log.go:172] (0xc000961760) Data frame received for 3\nI0804 11:11:09.531031    1482 log.go:172] (0xc00063b5e0) (3) Data frame handling\nI0804 11:11:09.531059    1482 log.go:172] (0xc00063b5e0) (3) Data frame sent\nI0804 11:11:09.531184    1482 log.go:172] (0xc000961760) Data frame received for 5\nI0804 11:11:09.531198    1482 log.go:172] (0xc000528a00) (5) Data frame handling\nI0804 11:11:09.531427    1482 log.go:172] (0xc000961760) Data frame received for 3\nI0804 11:11:09.531447    1482 log.go:172] (0xc00063b5e0) (3) Data frame handling\nI0804 11:11:09.533273    1482 log.go:172] (0xc000961760) Data frame received for 1\nI0804 11:11:09.533296    1482 log.go:172] (0xc000930780) (1) Data frame handling\nI0804 11:11:09.533312    1482 log.go:172] (0xc000930780) (1) Data frame sent\nI0804 11:11:09.533478    1482 log.go:172] (0xc000961760) (0xc000930780) Stream removed, broadcasting: 1\nI0804 11:11:09.533857    1482 log.go:172] (0xc000961760) Go away received\nI0804 11:11:09.533953    1482 log.go:172] (0xc000961760) (0xc000930780) Stream removed, broadcasting: 1\nI0804 11:11:09.533977    1482 log.go:172] (0xc000961760) (0xc00063b5e0) Stream removed, broadcasting: 3\nI0804 11:11:09.533989    1482 log.go:172] (0xc000961760) (0xc000528a00) Stream removed, broadcasting: 5\n"
Aug  4 11:11:09.540: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug  4 11:11:09.540: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug  4 11:11:09.540: INFO: Waiting for statefulset status.replicas updated to 0
Aug  4 11:11:09.551: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Aug  4 11:11:19.559: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug  4 11:11:19.559: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Aug  4 11:11:19.559: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Aug  4 11:11:19.591: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999484s
Aug  4 11:11:20.595: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.975306084s
Aug  4 11:11:21.599: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.971302508s
Aug  4 11:11:22.604: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.967441036s
Aug  4 11:11:23.609: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.962873033s
Aug  4 11:11:24.624: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.957815613s
Aug  4 11:11:25.820: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.942351202s
Aug  4 11:11:26.846: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.747180869s
Aug  4 11:11:27.850: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.721162348s
Aug  4 11:11:28.854: INFO: Verifying statefulset ss doesn't scale past 3 for another 716.639913ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace statefulset-3140
Aug  4 11:11:29.859: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug  4 11:11:30.068: INFO: stderr: "I0804 11:11:29.976052    1504 log.go:172] (0xc0000e8d10) (0xc0006c94a0) Create stream\nI0804 11:11:29.976117    1504 log.go:172] (0xc0000e8d10) (0xc0006c94a0) Stream added, broadcasting: 1\nI0804 11:11:29.979217    1504 log.go:172] (0xc0000e8d10) Reply frame received for 1\nI0804 11:11:29.979281    1504 log.go:172] (0xc0000e8d10) (0xc0006c9540) Create stream\nI0804 11:11:29.979311    1504 log.go:172] (0xc0000e8d10) (0xc0006c9540) Stream added, broadcasting: 3\nI0804 11:11:29.980385    1504 log.go:172] (0xc0000e8d10) Reply frame received for 3\nI0804 11:11:29.980429    1504 log.go:172] (0xc0000e8d10) (0xc00039e960) Create stream\nI0804 11:11:29.980441    1504 log.go:172] (0xc0000e8d10) (0xc00039e960) Stream added, broadcasting: 5\nI0804 11:11:29.981688    1504 log.go:172] (0xc0000e8d10) Reply frame received for 5\nI0804 11:11:30.060640    1504 log.go:172] (0xc0000e8d10) Data frame received for 3\nI0804 11:11:30.060671    1504 log.go:172] (0xc0006c9540) (3) Data frame handling\nI0804 11:11:30.060682    1504 log.go:172] (0xc0006c9540) (3) Data frame sent\nI0804 11:11:30.060689    1504 log.go:172] (0xc0000e8d10) Data frame received for 3\nI0804 11:11:30.060695    1504 log.go:172] (0xc0006c9540) (3) Data frame handling\nI0804 11:11:30.060854    1504 log.go:172] (0xc0000e8d10) Data frame received for 5\nI0804 11:11:30.060880    1504 log.go:172] (0xc00039e960) (5) Data frame handling\nI0804 11:11:30.060903    1504 log.go:172] (0xc00039e960) (5) Data frame sent\nI0804 11:11:30.060919    1504 log.go:172] (0xc0000e8d10) Data frame received for 5\nI0804 11:11:30.060930    1504 log.go:172] (0xc00039e960) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0804 11:11:30.062109    1504 log.go:172] (0xc0000e8d10) Data frame received for 1\nI0804 11:11:30.062131    1504 log.go:172] (0xc0006c94a0) (1) Data frame handling\nI0804 11:11:30.062143    1504 log.go:172] (0xc0006c94a0) (1) Data frame sent\nI0804 11:11:30.062156    1504 log.go:172] (0xc0000e8d10) (0xc0006c94a0) Stream removed, broadcasting: 1\nI0804 11:11:30.062494    1504 log.go:172] (0xc0000e8d10) (0xc0006c94a0) Stream removed, broadcasting: 1\nI0804 11:11:30.062514    1504 log.go:172] (0xc0000e8d10) (0xc0006c9540) Stream removed, broadcasting: 3\nI0804 11:11:30.062671    1504 log.go:172] (0xc0000e8d10) (0xc00039e960) Stream removed, broadcasting: 5\n"
Aug  4 11:11:30.068: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug  4 11:11:30.068: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug  4 11:11:30.068: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug  4 11:11:30.301: INFO: stderr: "I0804 11:11:30.223559    1525 log.go:172] (0xc0003c9e40) (0xc0006994a0) Create stream\nI0804 11:11:30.223610    1525 log.go:172] (0xc0003c9e40) (0xc0006994a0) Stream added, broadcasting: 1\nI0804 11:11:30.225934    1525 log.go:172] (0xc0003c9e40) Reply frame received for 1\nI0804 11:11:30.225986    1525 log.go:172] (0xc0003c9e40) (0xc000699540) Create stream\nI0804 11:11:30.226000    1525 log.go:172] (0xc0003c9e40) (0xc000699540) Stream added, broadcasting: 3\nI0804 11:11:30.226953    1525 log.go:172] (0xc0003c9e40) Reply frame received for 3\nI0804 11:11:30.227010    1525 log.go:172] (0xc0003c9e40) (0xc000946000) Create stream\nI0804 11:11:30.227026    1525 log.go:172] (0xc0003c9e40) (0xc000946000) Stream added, broadcasting: 5\nI0804 11:11:30.228238    1525 log.go:172] (0xc0003c9e40) Reply frame received for 5\nI0804 11:11:30.293048    1525 log.go:172] (0xc0003c9e40) Data frame received for 3\nI0804 11:11:30.293111    1525 log.go:172] (0xc000699540) (3) Data frame handling\nI0804 11:11:30.293139    1525 log.go:172] (0xc000699540) (3) Data frame sent\nI0804 11:11:30.293192    1525 log.go:172] (0xc0003c9e40) Data frame received for 5\nI0804 11:11:30.293219    1525 log.go:172] (0xc000946000) (5) Data frame handling\nI0804 11:11:30.293235    1525 log.go:172] (0xc000946000) (5) Data frame sent\nI0804 11:11:30.293255    1525 log.go:172] (0xc0003c9e40) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0804 11:11:30.293273    1525 log.go:172] (0xc000946000) (5) Data frame handling\nI0804 11:11:30.293365    1525 log.go:172] (0xc0003c9e40) Data frame received for 3\nI0804 11:11:30.293416    1525 log.go:172] (0xc000699540) (3) Data frame handling\nI0804 11:11:30.295074    1525 log.go:172] (0xc0003c9e40) Data frame received for 1\nI0804 11:11:30.295102    1525 log.go:172] (0xc0006994a0) (1) Data frame handling\nI0804 11:11:30.295116    1525 log.go:172] (0xc0006994a0) (1) Data frame sent\nI0804 11:11:30.295143    1525 log.go:172] (0xc0003c9e40) (0xc0006994a0) Stream removed, broadcasting: 1\nI0804 11:11:30.295178    1525 log.go:172] (0xc0003c9e40) Go away received\nI0804 11:11:30.295565    1525 log.go:172] (0xc0003c9e40) (0xc0006994a0) Stream removed, broadcasting: 1\nI0804 11:11:30.295591    1525 log.go:172] (0xc0003c9e40) (0xc000699540) Stream removed, broadcasting: 3\nI0804 11:11:30.295605    1525 log.go:172] (0xc0003c9e40) (0xc000946000) Stream removed, broadcasting: 5\n"
Aug  4 11:11:30.301: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug  4 11:11:30.301: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug  4 11:11:30.301: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug  4 11:11:30.593: INFO: stderr: "I0804 11:11:30.518127    1548 log.go:172] (0xc000bb6840) (0xc000a34500) Create stream\nI0804 11:11:30.518211    1548 log.go:172] (0xc000bb6840) (0xc000a34500) Stream added, broadcasting: 1\nI0804 11:11:30.522302    1548 log.go:172] (0xc000bb6840) Reply frame received for 1\nI0804 11:11:30.522335    1548 log.go:172] (0xc000bb6840) (0xc0005cfae0) Create stream\nI0804 11:11:30.522343    1548 log.go:172] (0xc000bb6840) (0xc0005cfae0) Stream added, broadcasting: 3\nI0804 11:11:30.523194    1548 log.go:172] (0xc000bb6840) Reply frame received for 3\nI0804 11:11:30.523229    1548 log.go:172] (0xc000bb6840) (0xc000717720) Create stream\nI0804 11:11:30.523243    1548 log.go:172] (0xc000bb6840) (0xc000717720) Stream added, broadcasting: 5\nI0804 11:11:30.524055    1548 log.go:172] (0xc000bb6840) Reply frame received for 5\nI0804 11:11:30.585828    1548 log.go:172] (0xc000bb6840) Data frame received for 3\nI0804 11:11:30.585864    1548 log.go:172] (0xc0005cfae0) (3) Data frame handling\nI0804 11:11:30.585872    1548 log.go:172] (0xc0005cfae0) (3) Data frame sent\nI0804 11:11:30.585889    1548 log.go:172] (0xc000bb6840) Data frame received for 5\nI0804 11:11:30.585894    1548 log.go:172] (0xc000717720) (5) Data frame handling\nI0804 11:11:30.585900    1548 log.go:172] (0xc000717720) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0804 11:11:30.586003    1548 log.go:172] (0xc000bb6840) Data frame received for 5\nI0804 11:11:30.586037    1548 log.go:172] (0xc000717720) (5) Data frame handling\nI0804 11:11:30.586432    1548 log.go:172] (0xc000bb6840) Data frame received for 3\nI0804 11:11:30.586466    1548 log.go:172] (0xc0005cfae0) (3) Data frame handling\nI0804 11:11:30.587585    1548 log.go:172] (0xc000bb6840) Data frame received for 1\nI0804 11:11:30.587624    1548 log.go:172] (0xc000a34500) (1) Data frame handling\nI0804 11:11:30.587654    1548 log.go:172] (0xc000a34500) (1) Data frame sent\nI0804 11:11:30.587780    1548 log.go:172] (0xc000bb6840) (0xc000a34500) Stream removed, broadcasting: 1\nI0804 11:11:30.588066    1548 log.go:172] (0xc000bb6840) Go away received\nI0804 11:11:30.588402    1548 log.go:172] (0xc000bb6840) (0xc000a34500) Stream removed, broadcasting: 1\nI0804 11:11:30.588425    1548 log.go:172] (0xc000bb6840) (0xc0005cfae0) Stream removed, broadcasting: 3\nI0804 11:11:30.588436    1548 log.go:172] (0xc000bb6840) (0xc000717720) Stream removed, broadcasting: 5\n"
Aug  4 11:11:30.594: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug  4 11:11:30.594: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug  4 11:11:30.594: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Aug  4 11:12:00.645: INFO: Deleting all statefulset in ns statefulset-3140
Aug  4 11:12:00.648: INFO: Scaling statefulset ss to 0
Aug  4 11:12:00.658: INFO: Waiting for statefulset status.replicas updated to 0
Aug  4 11:12:00.660: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:12:00.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3140" for this suite.

• [SLOW TEST:93.054 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":275,"completed":145,"skipped":2552,"failed":0}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:12:00.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Starting the proxy
Aug  4 11:12:00.724: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix857575513/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:12:00.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-762" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":275,"completed":146,"skipped":2561,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:12:00.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug  4 11:12:00.908: INFO: Waiting up to 5m0s for pod "downwardapi-volume-04939866-4ab7-4066-bc3b-579bc6c7e73e" in namespace "projected-1682" to be "Succeeded or Failed"
Aug  4 11:12:00.926: INFO: Pod "downwardapi-volume-04939866-4ab7-4066-bc3b-579bc6c7e73e": Phase="Pending", Reason="", readiness=false. Elapsed: 18.466346ms
Aug  4 11:12:03.043: INFO: Pod "downwardapi-volume-04939866-4ab7-4066-bc3b-579bc6c7e73e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135034468s
Aug  4 11:12:05.047: INFO: Pod "downwardapi-volume-04939866-4ab7-4066-bc3b-579bc6c7e73e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.139061828s
STEP: Saw pod success
Aug  4 11:12:05.047: INFO: Pod "downwardapi-volume-04939866-4ab7-4066-bc3b-579bc6c7e73e" satisfied condition "Succeeded or Failed"
Aug  4 11:12:05.050: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-04939866-4ab7-4066-bc3b-579bc6c7e73e container client-container: 
STEP: delete the pod
Aug  4 11:12:05.317: INFO: Waiting for pod downwardapi-volume-04939866-4ab7-4066-bc3b-579bc6c7e73e to disappear
Aug  4 11:12:05.328: INFO: Pod downwardapi-volume-04939866-4ab7-4066-bc3b-579bc6c7e73e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:12:05.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1682" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":147,"skipped":2573,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:12:05.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on node default medium
Aug  4 11:12:05.467: INFO: Waiting up to 5m0s for pod "pod-87c1b3e7-10ef-4108-bb53-85f81e508e8c" in namespace "emptydir-4082" to be "Succeeded or Failed"
Aug  4 11:12:05.479: INFO: Pod "pod-87c1b3e7-10ef-4108-bb53-85f81e508e8c": Phase="Pending", Reason="", readiness=false. Elapsed: 11.258566ms
Aug  4 11:12:07.522: INFO: Pod "pod-87c1b3e7-10ef-4108-bb53-85f81e508e8c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054389377s
Aug  4 11:12:09.526: INFO: Pod "pod-87c1b3e7-10ef-4108-bb53-85f81e508e8c": Phase="Running", Reason="", readiness=true. Elapsed: 4.05905247s
Aug  4 11:12:11.531: INFO: Pod "pod-87c1b3e7-10ef-4108-bb53-85f81e508e8c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.063420765s
STEP: Saw pod success
Aug  4 11:12:11.531: INFO: Pod "pod-87c1b3e7-10ef-4108-bb53-85f81e508e8c" satisfied condition "Succeeded or Failed"
Aug  4 11:12:11.535: INFO: Trying to get logs from node kali-worker2 pod pod-87c1b3e7-10ef-4108-bb53-85f81e508e8c container test-container: 
STEP: delete the pod
Aug  4 11:12:11.602: INFO: Waiting for pod pod-87c1b3e7-10ef-4108-bb53-85f81e508e8c to disappear
Aug  4 11:12:11.610: INFO: Pod pod-87c1b3e7-10ef-4108-bb53-85f81e508e8c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:12:11.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4082" for this suite.

• [SLOW TEST:6.288 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":148,"skipped":2581,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:12:11.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Aug  4 11:12:12.262: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Aug  4 11:12:14.396: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136332, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136332, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136332, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136332, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug  4 11:12:16.414: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136332, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136332, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136332, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136332, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug  4 11:12:19.440: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug  4 11:12:19.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:12:20.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-4691" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137

• [SLOW TEST:9.085 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":275,"completed":149,"skipped":2598,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:12:20.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Aug  4 11:12:26.831: INFO: &Pod{ObjectMeta:{send-events-15964317-0fb0-44bb-8007-e8603f8df015  events-9825 /api/v1/namespaces/events-9825/pods/send-events-15964317-0fb0-44bb-8007-e8603f8df015 31a444a6-3812-4d8d-9a63-6e95a982fda1 6674901 0 2020-08-04 11:12:20 +0000 UTC   map[name:foo time:768527255] map[] [] []  [{e2e.test Update v1 2020-08-04 11:12:20 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 116 105 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 112 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 114 103 115 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 114 116 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 99 111 110 116 97 105 110 101 114 80 111 114 116 92 34 58 56 48 44 92 34 112 114 111 116 111 99 111 108 92 34 58 92 34 84 67 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 80 111 114 116 34 58 123 125 44 34 102 58 112 114 111 116 111 99 111 108 34 58 123 125 125 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-04 11:12:24 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 
123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 50 53 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5w522,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5w522,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5w522,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:12:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:12:24 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:12:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:12:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.250,StartTime:2020-08-04 11:12:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-04 11:12:24 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://d135fed50562ad6bf5081831a427382d2e8fe37696316f24db046228dd0846a5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.250,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Aug  4 11:12:28.836: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Aug  4 11:12:30.841: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:12:30.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-9825" for this suite.

• [SLOW TEST:10.197 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":275,"completed":150,"skipped":2612,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:12:30.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Aug  4 11:12:35.522: INFO: Successfully updated pod "annotationupdate5965d63d-6dda-468e-a637-208773e253dc"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:12:39.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3838" for this suite.

• [SLOW TEST:8.692 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":151,"skipped":2623,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
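The annotation-update spec works by mounting a downward-API volume whose item points at metadata.annotations, so the kubelet rewrites the projected file whenever the pod's annotations change. A minimal sketch of such a pod follows; the pod name, image and command are chosen here for illustration only.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate-example", // hypothetical name
			Annotations: map[string]string{"builder": "bar"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "docker.io/library/busybox:1.29", // illustrative image
				// Keep printing the projected file so annotation changes show up in the log.
				Command: []string{"sh", "-c",
					"while true; do cat /etc/podinfo/annotations; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "annotations",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
						}},
					},
				},
			}},
		},
	}
	fmt.Printf("%s projects metadata.annotations into /etc/podinfo/annotations\n", pod.Name)
}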
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:12:39.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug  4 11:12:44.232: INFO: Successfully updated pod "pod-update-activedeadlineseconds-3a0e3139-3be4-4b14-bd07-48a92dbdc79a"
Aug  4 11:12:44.233: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-3a0e3139-3be4-4b14-bd07-48a92dbdc79a" in namespace "pods-8363" to be "terminated due to deadline exceeded"
Aug  4 11:12:44.325: INFO: Pod "pod-update-activedeadlineseconds-3a0e3139-3be4-4b14-bd07-48a92dbdc79a": Phase="Running", Reason="", readiness=true. Elapsed: 92.388603ms
Aug  4 11:12:46.333: INFO: Pod "pod-update-activedeadlineseconds-3a0e3139-3be4-4b14-bd07-48a92dbdc79a": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.100307763s
Aug  4 11:12:46.333: INFO: Pod "pod-update-activedeadlineseconds-3a0e3139-3be4-4b14-bd07-48a92dbdc79a" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:12:46.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8363" for this suite.

• [SLOW TEST:6.741 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":275,"completed":152,"skipped":2666,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
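The pod above ends up Failed with reason DeadlineExceeded because spec.activeDeadlineSeconds is one of the few pod fields that may be mutated after creation; shortening it on a running pod makes the kubelet terminate the pod once it has been active longer than the deadline. A hedged client-go sketch of that update, where the kubeconfig path and pod name are assumptions:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ctx := context.TODO()
	pods := cs.CoreV1().Pods("pods-8363")

	pod, err := pods.Get(ctx, "my-long-running-pod", metav1.GetOptions{}) // hypothetical name
	if err != nil {
		panic(err)
	}

	// Shorten the deadline; the kubelet fails the pod with reason DeadlineExceeded
	// once it has been active longer than this many seconds.
	deadline := int64(5)
	pod.Spec.ActiveDeadlineSeconds = &deadline

	if _, err := pods.Update(ctx, pod, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}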
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:12:46.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6481.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6481.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug  4 11:12:52.608: INFO: DNS probes using dns-6481/dns-test-f786fa88-31a2-402a-af76-64f9d9a8e359 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:12:52.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6481" for this suite.

• [SLOW TEST:6.431 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":275,"completed":153,"skipped":2723,"failed":0}
SSSSSSSSSSSSSSSSSSSS
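The probe pods above loop over dig for both UDP and TCP lookups of the cluster service name and the pod's own A record. An equivalent in-cluster check can be written with the Go standard resolver alone; the sketch below is only meaningful when run inside a pod, and assumes the default cluster.local cluster domain.

package main

import (
	"fmt"
	"net"
)

func main() {
	// Names resolved by the wheezy/jessie probe pods above.
	names := []string{
		"kubernetes.default.svc.cluster.local",
		"kubernetes.default",
	}
	for _, n := range names {
		addrs, err := net.LookupHost(n)
		if err != nil {
			fmt.Printf("FAIL %s: %v\n", n, err)
			continue
		}
		fmt.Printf("OK   %s -> %v\n", n, addrs)
	}
}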
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:12:52.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug  4 11:12:53.273: INFO: Waiting up to 5m0s for pod "downwardapi-volume-af68bd8f-448a-444a-bf1f-e615a6c1e7f4" in namespace "projected-4813" to be "Succeeded or Failed"
Aug  4 11:12:53.331: INFO: Pod "downwardapi-volume-af68bd8f-448a-444a-bf1f-e615a6c1e7f4": Phase="Pending", Reason="", readiness=false. Elapsed: 58.314041ms
Aug  4 11:12:55.349: INFO: Pod "downwardapi-volume-af68bd8f-448a-444a-bf1f-e615a6c1e7f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076307945s
Aug  4 11:12:57.367: INFO: Pod "downwardapi-volume-af68bd8f-448a-444a-bf1f-e615a6c1e7f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.094581433s
STEP: Saw pod success
Aug  4 11:12:57.367: INFO: Pod "downwardapi-volume-af68bd8f-448a-444a-bf1f-e615a6c1e7f4" satisfied condition "Succeeded or Failed"
Aug  4 11:12:57.370: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-af68bd8f-448a-444a-bf1f-e615a6c1e7f4 container client-container: 
STEP: delete the pod
Aug  4 11:12:57.476: INFO: Waiting for pod downwardapi-volume-af68bd8f-448a-444a-bf1f-e615a6c1e7f4 to disappear
Aug  4 11:12:57.488: INFO: Pod downwardapi-volume-af68bd8f-448a-444a-bf1f-e615a6c1e7f4 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:12:57.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4813" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":154,"skipped":2743,"failed":0}
SSSSSSS
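Here the container's CPU request is exposed through a projected downward-API volume item with a resourceFieldRef, and the spec reads the resulting file back from the container log. A sketch of just that volume definition; the file path and divisor are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.cpu",
								Divisor:       resource.MustParse("1m"), // report the request in millicores
							},
						}},
					},
				}},
			},
		},
	}
	fmt.Printf("projected item writes the CPU request to %s\n",
		vol.VolumeSource.Projected.Sources[0].DownwardAPI.Items[0].Path)
}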
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:12:57.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: set up a multi version CRD
Aug  4 11:12:57.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:13:13.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4780" for this suite.

• [SLOW TEST:15.609 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":275,"completed":155,"skipped":2750,"failed":0}
SSSSSSSSSSSSSS
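A CRD version contributes definitions to the published OpenAPI spec only while its served flag is true, so flipping one version to served: false removes its definitions while the other version's stay unchanged. A minimal sketch of the versions slice involved; the group, names and the per-version structural schema that apiextensions.k8s.io/v1 requires on a real CRD are omitted for brevity.

package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

func main() {
	// Two versions of the same CRD; marking v2 as not served drops its
	// definitions from the published OpenAPI spec, v1 stays as-is.
	versions := []apiextensionsv1.CustomResourceDefinitionVersion{
		{Name: "v1", Served: true, Storage: true},
		{Name: "v2", Served: false, Storage: false},
	}
	for _, v := range versions {
		fmt.Printf("%s: served=%v storage=%v\n", v.Name, v.Served, v.Storage)
	}
}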
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:13:13.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug  4 11:13:13.200: INFO: Pod name rollover-pod: Found 0 pods out of 1
Aug  4 11:13:18.205: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug  4 11:13:18.205: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Aug  4 11:13:20.217: INFO: Creating deployment "test-rollover-deployment"
Aug  4 11:13:20.241: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Aug  4 11:13:22.289: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Aug  4 11:13:22.295: INFO: Ensure that both replica sets have 1 created replica
Aug  4 11:13:22.299: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Aug  4 11:13:22.306: INFO: Updating deployment test-rollover-deployment
Aug  4 11:13:22.306: INFO: Wait for deployment "test-rollover-deployment" to be observed by the deployment controller
Aug  4 11:13:24.427: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Aug  4 11:13:24.432: INFO: Make sure deployment "test-rollover-deployment" is complete
Aug  4 11:13:24.437: INFO: all replica sets need to contain the pod-template-hash label
Aug  4 11:13:24.437: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136400, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136400, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136402, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136400, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug  4 11:13:26.444: INFO: all replica sets need to contain the pod-template-hash label
Aug  4 11:13:26.444: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136400, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136400, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136406, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136400, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug  4 11:13:28.444: INFO: all replica sets need to contain the pod-template-hash label
Aug  4 11:13:28.444: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136400, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136400, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136406, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136400, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug  4 11:13:30.587: INFO: all replica sets need to contain the pod-template-hash label
Aug  4 11:13:30.587: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136400, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136400, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136406, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136400, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug  4 11:13:32.444: INFO: all replica sets need to contain the pod-template-hash label
Aug  4 11:13:32.444: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136400, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136400, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136406, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136400, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug  4 11:13:34.514: INFO: all replica sets need to contain the pod-template-hash label
Aug  4 11:13:34.514: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136400, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136400, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136406, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136400, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug  4 11:13:36.496: INFO: 
Aug  4 11:13:36.496: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Aug  4 11:13:36.562: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-1280 /apis/apps/v1/namespaces/deployment-1280/deployments/test-rollover-deployment 5c796f74-3558-4d10-85d1-17468f35572c 6675350 2 2020-08-04 11:13:20 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-08-04 11:13:22 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-08-04 11:13:36 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 
105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0030312b8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-04 11:13:20 +0000 UTC,LastTransitionTime:2020-08-04 11:13:20 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-84f7f6f64b" has successfully progressed.,LastUpdateTime:2020-08-04 11:13:36 +0000 UTC,LastTransitionTime:2020-08-04 11:13:20 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Aug  4 11:13:36.566: INFO: New ReplicaSet "test-rollover-deployment-84f7f6f64b" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-84f7f6f64b  deployment-1280 /apis/apps/v1/namespaces/deployment-1280/replicasets/test-rollover-deployment-84f7f6f64b f32b807a-719f-4064-8c8c-2fac47664aa0 6675339 2 2020-08-04 11:13:22 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 5c796f74-3558-4d10-85d1-17468f35572c 0xc002387ad7 0xc002387ad8}] []  [{kube-controller-manager Update apps/v1 2020-08-04 11:13:36 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 99 55 57 54 102 55 52 45 51 53 53 56 45 52 100 49 48 45 56 53 100 49 45 49 55 52 54 56 102 51 53 53 55 50 99 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 
116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 84f7f6f64b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002387be8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Aug  4 11:13:36.566: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Aug  4 11:13:36.566: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-1280 /apis/apps/v1/namespaces/deployment-1280/replicasets/test-rollover-controller 271bfcd2-3161-48c4-b097-d328466ee2ad 6675349 2 2020-08-04 11:13:13 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 5c796f74-3558-4d10-85d1-17468f35572c 0xc002387847 0xc002387848}] []  [{e2e.test Update apps/v1 2020-08-04 11:13:13 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-08-04 11:13:36 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 99 55 57 54 102 55 52 45 51 53 53 56 45 52 100 49 48 45 56 53 100 49 45 49 55 52 54 56 102 51 53 53 55 50 99 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 
34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0023878f8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug  4 11:13:36.566: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5  deployment-1280 /apis/apps/v1/namespaces/deployment-1280/replicasets/test-rollover-deployment-5686c4cfd5 92358dee-daa8-4b55-a8af-4c316e7c2a5e 6675292 2 2020-08-04 11:13:20 +0000 UTC   map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 5c796f74-3558-4d10-85d1-17468f35572c 0xc002387967 0xc002387968}] []  [{kube-controller-manager Update apps/v1 2020-08-04 11:13:22 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 99 55 57 54 102 55 52 45 51 53 53 56 45 52 100 49 48 45 56 53 100 49 45 49 55 52 54 56 102 51 53 53 55 50 99 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 114 101 100 105 115 45 115 108 97 118 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 
80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002387a48  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug  4 11:13:36.570: INFO: Pod "test-rollover-deployment-84f7f6f64b-6cxbt" is available:
&Pod{ObjectMeta:{test-rollover-deployment-84f7f6f64b-6cxbt test-rollover-deployment-84f7f6f64b- deployment-1280 /api/v1/namespaces/deployment-1280/pods/test-rollover-deployment-84f7f6f64b-6cxbt e14c2c5d-f909-4219-8bd9-0fbb7737d1d6 6675307 0 2020-08-04 11:13:22 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[] [{apps/v1 ReplicaSet test-rollover-deployment-84f7f6f64b f32b807a-719f-4064-8c8c-2fac47664aa0 0xc000dc81b7 0xc000dc81b8}] []  [{kube-controller-manager Update v1 2020-08-04 11:13:22 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 102 51 50 98 56 48 55 97 45 55 49 57 102 45 52 48 54 52 45 56 99 56 99 45 50 102 97 99 52 55 54 54 52 97 97 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-04 11:13:25 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 
46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 49 56 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlmjw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlmjw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlmjw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,V
alue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:13:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:13:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:13:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:13:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.181,StartTime:2020-08-04 11:13:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-04 11:13:25 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://702096ea09abb946c5caba3bf26a87be3569a9fcf6af0e8191d52a08e5a47907,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.181,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:13:36.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1280" for this suite.

• [SLOW TEST:23.471 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":275,"completed":156,"skipped":2764,"failed":0}
SSSSS
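The Deployment dumped above rolls over with a RollingUpdate strategy of maxSurge=1, maxUnavailable=0 and minReadySeconds=10, so the new ReplicaSet's pod must stay ready for ten seconds before it counts as available and the old ReplicaSets are scaled to zero. A sketch of that strategy block using the same values (selector and template omitted):

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	maxUnavailable := intstr.FromInt(0)
	maxSurge := intstr.FromInt(1)

	spec := appsv1.DeploymentSpec{
		MinReadySeconds: 10, // new pods must stay ready this long before counting as available
		Strategy: appsv1.DeploymentStrategy{
			Type: appsv1.RollingUpdateDeploymentStrategyType,
			RollingUpdate: &appsv1.RollingUpdateDeployment{
				MaxUnavailable: &maxUnavailable, // never drop below the desired replica count
				MaxSurge:       &maxSurge,       // allow one extra pod during the rollover
			},
		},
	}
	fmt.Printf("strategy=%s surge=%s unavailable=%s\n",
		spec.Strategy.Type,
		spec.Strategy.RollingUpdate.MaxSurge.String(),
		spec.Strategy.RollingUpdate.MaxUnavailable.String())
}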
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:13:36.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override command
Aug  4 11:13:36.778: INFO: Waiting up to 5m0s for pod "client-containers-01f6d096-e359-411f-b977-1f81e0c543e6" in namespace "containers-2055" to be "Succeeded or Failed"
Aug  4 11:13:37.002: INFO: Pod "client-containers-01f6d096-e359-411f-b977-1f81e0c543e6": Phase="Pending", Reason="", readiness=false. Elapsed: 223.881992ms
Aug  4 11:13:39.103: INFO: Pod "client-containers-01f6d096-e359-411f-b977-1f81e0c543e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.325095705s
Aug  4 11:13:41.187: INFO: Pod "client-containers-01f6d096-e359-411f-b977-1f81e0c543e6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.408983837s
Aug  4 11:13:43.271: INFO: Pod "client-containers-01f6d096-e359-411f-b977-1f81e0c543e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.492818434s
STEP: Saw pod success
Aug  4 11:13:43.271: INFO: Pod "client-containers-01f6d096-e359-411f-b977-1f81e0c543e6" satisfied condition "Succeeded or Failed"
Aug  4 11:13:43.279: INFO: Trying to get logs from node kali-worker pod client-containers-01f6d096-e359-411f-b977-1f81e0c543e6 container test-container: 
STEP: delete the pod
Aug  4 11:13:43.333: INFO: Waiting for pod client-containers-01f6d096-e359-411f-b977-1f81e0c543e6 to disappear
Aug  4 11:13:43.339: INFO: Pod client-containers-01f6d096-e359-411f-b977-1f81e0c543e6 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:13:43.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2055" for this suite.

• [SLOW TEST:6.772 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":275,"completed":157,"skipped":2769,"failed":0}
SSSSSSSSS
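Overriding the image's default command works by setting the container's Command field, which replaces the image ENTRYPOINT (Args, if set, would replace CMD); the framework then checks the pod's output. A minimal sketch where the image and echoed text are illustrative, not the ones used by this spec:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "test-container",
		Image: "docker.io/library/busybox:1.29", // illustrative image
		// Command replaces the image's ENTRYPOINT; Args (unset here) would replace CMD.
		Command: []string{"/bin/sh", "-c", "echo overridden entrypoint"},
	}
	fmt.Printf("%s runs %v\n", c.Name, c.Command)
}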
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:13:43.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:13:59.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7599" for this suite.

• [SLOW TEST:16.631 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":275,"completed":158,"skipped":2778,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:13:59.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-7ba38b0f-1f5b-4a73-9a5d-89c5d6c6dc60
STEP: Creating a pod to test consume secrets
Aug  4 11:14:00.109: INFO: Waiting up to 5m0s for pod "pod-secrets-d6c270a1-5233-4a48-87fc-9aca03fd9e48" in namespace "secrets-2544" to be "Succeeded or Failed"
Aug  4 11:14:00.125: INFO: Pod "pod-secrets-d6c270a1-5233-4a48-87fc-9aca03fd9e48": Phase="Pending", Reason="", readiness=false. Elapsed: 15.775764ms
Aug  4 11:14:02.146: INFO: Pod "pod-secrets-d6c270a1-5233-4a48-87fc-9aca03fd9e48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037435427s
Aug  4 11:14:04.151: INFO: Pod "pod-secrets-d6c270a1-5233-4a48-87fc-9aca03fd9e48": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042304685s
STEP: Saw pod success
Aug  4 11:14:04.151: INFO: Pod "pod-secrets-d6c270a1-5233-4a48-87fc-9aca03fd9e48" satisfied condition "Succeeded or Failed"
Aug  4 11:14:04.154: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-d6c270a1-5233-4a48-87fc-9aca03fd9e48 container secret-volume-test: 
STEP: delete the pod
Aug  4 11:14:04.286: INFO: Waiting for pod pod-secrets-d6c270a1-5233-4a48-87fc-9aca03fd9e48 to disappear
Aug  4 11:14:04.304: INFO: Pod pod-secrets-d6c270a1-5233-4a48-87fc-9aca03fd9e48 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:14:04.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2544" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":159,"skipped":2802,"failed":0}
SSSSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:14:04.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-959 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-959;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-959 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-959;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-959.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-959.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-959.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-959.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-959.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-959.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-959.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-959.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-959.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-959.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-959.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-959.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-959.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 193.223.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.223.193_udp@PTR;check="$$(dig +tcp +noall +answer +search 193.223.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.223.193_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-959 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-959;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-959 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-959;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-959.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-959.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-959.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-959.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-959.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-959.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-959.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-959.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-959.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-959.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-959.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-959.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-959.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 193.223.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.223.193_udp@PTR;check="$$(dig +tcp +noall +answer +search 193.223.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.223.193_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug  4 11:14:10.589: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:10.592: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:10.595: INFO: Unable to read wheezy_udp@dns-test-service.dns-959 from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:10.598: INFO: Unable to read wheezy_tcp@dns-test-service.dns-959 from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:10.601: INFO: Unable to read wheezy_udp@dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:10.603: INFO: Unable to read wheezy_tcp@dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:10.606: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:10.609: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:10.629: INFO: Unable to read jessie_udp@dns-test-service from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:10.633: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:10.635: INFO: Unable to read jessie_udp@dns-test-service.dns-959 from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:10.638: INFO: Unable to read jessie_tcp@dns-test-service.dns-959 from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:10.641: INFO: Unable to read jessie_udp@dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:10.644: INFO: Unable to read jessie_tcp@dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:10.647: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:10.650: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:10.668: INFO: Lookups using dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-959 wheezy_tcp@dns-test-service.dns-959 wheezy_udp@dns-test-service.dns-959.svc wheezy_tcp@dns-test-service.dns-959.svc wheezy_udp@_http._tcp.dns-test-service.dns-959.svc wheezy_tcp@_http._tcp.dns-test-service.dns-959.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-959 jessie_tcp@dns-test-service.dns-959 jessie_udp@dns-test-service.dns-959.svc jessie_tcp@dns-test-service.dns-959.svc jessie_udp@_http._tcp.dns-test-service.dns-959.svc jessie_tcp@_http._tcp.dns-test-service.dns-959.svc]

Aug  4 11:14:15.673: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:15.676: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:15.680: INFO: Unable to read wheezy_udp@dns-test-service.dns-959 from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:15.683: INFO: Unable to read wheezy_tcp@dns-test-service.dns-959 from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:15.685: INFO: Unable to read wheezy_udp@dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:15.687: INFO: Unable to read wheezy_tcp@dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:15.690: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:15.692: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:15.710: INFO: Unable to read jessie_udp@dns-test-service from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:15.712: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:15.715: INFO: Unable to read jessie_udp@dns-test-service.dns-959 from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:15.718: INFO: Unable to read jessie_tcp@dns-test-service.dns-959 from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:15.721: INFO: Unable to read jessie_udp@dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:15.724: INFO: Unable to read jessie_tcp@dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:15.727: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:15.730: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:15.749: INFO: Lookups using dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-959 wheezy_tcp@dns-test-service.dns-959 wheezy_udp@dns-test-service.dns-959.svc wheezy_tcp@dns-test-service.dns-959.svc wheezy_udp@_http._tcp.dns-test-service.dns-959.svc wheezy_tcp@_http._tcp.dns-test-service.dns-959.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-959 jessie_tcp@dns-test-service.dns-959 jessie_udp@dns-test-service.dns-959.svc jessie_tcp@dns-test-service.dns-959.svc jessie_udp@_http._tcp.dns-test-service.dns-959.svc jessie_tcp@_http._tcp.dns-test-service.dns-959.svc]

Aug  4 11:14:20.674: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:20.678: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:20.681: INFO: Unable to read wheezy_udp@dns-test-service.dns-959 from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:20.685: INFO: Unable to read wheezy_tcp@dns-test-service.dns-959 from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:20.687: INFO: Unable to read wheezy_udp@dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:20.690: INFO: Unable to read wheezy_tcp@dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:20.693: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:20.696: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:20.717: INFO: Unable to read jessie_udp@dns-test-service from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:20.720: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:20.723: INFO: Unable to read jessie_udp@dns-test-service.dns-959 from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:20.726: INFO: Unable to read jessie_tcp@dns-test-service.dns-959 from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:20.729: INFO: Unable to read jessie_udp@dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:20.736: INFO: Unable to read jessie_tcp@dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:20.738: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:20.781: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:20.823: INFO: Lookups using dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-959 wheezy_tcp@dns-test-service.dns-959 wheezy_udp@dns-test-service.dns-959.svc wheezy_tcp@dns-test-service.dns-959.svc wheezy_udp@_http._tcp.dns-test-service.dns-959.svc wheezy_tcp@_http._tcp.dns-test-service.dns-959.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-959 jessie_tcp@dns-test-service.dns-959 jessie_udp@dns-test-service.dns-959.svc jessie_tcp@dns-test-service.dns-959.svc jessie_udp@_http._tcp.dns-test-service.dns-959.svc jessie_tcp@_http._tcp.dns-test-service.dns-959.svc]

Aug  4 11:14:25.674: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:25.677: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:25.681: INFO: Unable to read wheezy_udp@dns-test-service.dns-959 from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:25.684: INFO: Unable to read wheezy_tcp@dns-test-service.dns-959 from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:25.687: INFO: Unable to read wheezy_udp@dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:25.690: INFO: Unable to read wheezy_tcp@dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:25.694: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:25.697: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:25.718: INFO: Unable to read jessie_udp@dns-test-service from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:25.721: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:25.723: INFO: Unable to read jessie_udp@dns-test-service.dns-959 from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:25.726: INFO: Unable to read jessie_tcp@dns-test-service.dns-959 from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:25.728: INFO: Unable to read jessie_udp@dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:25.730: INFO: Unable to read jessie_tcp@dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:25.732: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:25.734: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:25.746: INFO: Lookups using dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-959 wheezy_tcp@dns-test-service.dns-959 wheezy_udp@dns-test-service.dns-959.svc wheezy_tcp@dns-test-service.dns-959.svc wheezy_udp@_http._tcp.dns-test-service.dns-959.svc wheezy_tcp@_http._tcp.dns-test-service.dns-959.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-959 jessie_tcp@dns-test-service.dns-959 jessie_udp@dns-test-service.dns-959.svc jessie_tcp@dns-test-service.dns-959.svc jessie_udp@_http._tcp.dns-test-service.dns-959.svc jessie_tcp@_http._tcp.dns-test-service.dns-959.svc]

Aug  4 11:14:30.674: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:30.677: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:30.680: INFO: Unable to read wheezy_udp@dns-test-service.dns-959 from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:30.684: INFO: Unable to read wheezy_tcp@dns-test-service.dns-959 from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:30.687: INFO: Unable to read wheezy_udp@dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:30.690: INFO: Unable to read wheezy_tcp@dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:30.693: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:30.696: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:30.716: INFO: Unable to read jessie_udp@dns-test-service from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:30.719: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:30.721: INFO: Unable to read jessie_udp@dns-test-service.dns-959 from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:30.724: INFO: Unable to read jessie_tcp@dns-test-service.dns-959 from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:30.727: INFO: Unable to read jessie_udp@dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:30.730: INFO: Unable to read jessie_tcp@dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:30.733: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:30.736: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:30.755: INFO: Lookups using dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-959 wheezy_tcp@dns-test-service.dns-959 wheezy_udp@dns-test-service.dns-959.svc wheezy_tcp@dns-test-service.dns-959.svc wheezy_udp@_http._tcp.dns-test-service.dns-959.svc wheezy_tcp@_http._tcp.dns-test-service.dns-959.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-959 jessie_tcp@dns-test-service.dns-959 jessie_udp@dns-test-service.dns-959.svc jessie_tcp@dns-test-service.dns-959.svc jessie_udp@_http._tcp.dns-test-service.dns-959.svc jessie_tcp@_http._tcp.dns-test-service.dns-959.svc]

Aug  4 11:14:35.674: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:35.678: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:35.681: INFO: Unable to read wheezy_udp@dns-test-service.dns-959 from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:35.684: INFO: Unable to read wheezy_tcp@dns-test-service.dns-959 from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:35.687: INFO: Unable to read wheezy_udp@dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:35.690: INFO: Unable to read wheezy_tcp@dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:35.693: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:35.696: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:35.717: INFO: Unable to read jessie_udp@dns-test-service from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:35.720: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:35.723: INFO: Unable to read jessie_udp@dns-test-service.dns-959 from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:35.726: INFO: Unable to read jessie_tcp@dns-test-service.dns-959 from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:35.730: INFO: Unable to read jessie_udp@dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:35.733: INFO: Unable to read jessie_tcp@dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:35.736: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:35.740: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-959.svc from pod dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2: the server could not find the requested resource (get pods dns-test-5972930f-9601-45d9-b605-f528f56f3bb2)
Aug  4 11:14:35.763: INFO: Lookups using dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-959 wheezy_tcp@dns-test-service.dns-959 wheezy_udp@dns-test-service.dns-959.svc wheezy_tcp@dns-test-service.dns-959.svc wheezy_udp@_http._tcp.dns-test-service.dns-959.svc wheezy_tcp@_http._tcp.dns-test-service.dns-959.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-959 jessie_tcp@dns-test-service.dns-959 jessie_udp@dns-test-service.dns-959.svc jessie_tcp@dns-test-service.dns-959.svc jessie_udp@_http._tcp.dns-test-service.dns-959.svc jessie_tcp@_http._tcp.dns-test-service.dns-959.svc]

Aug  4 11:14:40.759: INFO: DNS probes using dns-959/dns-test-5972930f-9601-45d9-b605-f528f56f3bb2 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:14:41.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-959" for this suite.

• [SLOW TEST:37.129 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":275,"completed":160,"skipped":2810,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:14:41.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug  4 11:14:42.458: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug  4 11:14:44.467: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136482, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136482, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136482, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136482, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug  4 11:14:46.470: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136482, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136482, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136482, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732136482, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug  4 11:14:49.499: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:14:49.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5173" for this suite.
STEP: Destroying namespace "webhook-5173-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.339 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":275,"completed":161,"skipped":2811,"failed":0}
S
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:14:49.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Aug  4 11:14:49.871: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-1110 /api/v1/namespaces/watch-1110/configmaps/e2e-watch-test-label-changed 4f0e79c2-766f-4543-8125-16a8200b8355 6675824 0 2020-08-04 11:14:49 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-08-04 11:14:49 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Aug  4 11:14:49.871: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-1110 /api/v1/namespaces/watch-1110/configmaps/e2e-watch-test-label-changed 4f0e79c2-766f-4543-8125-16a8200b8355 6675825 0 2020-08-04 11:14:49 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-08-04 11:14:49 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug  4 11:14:49.872: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-1110 /api/v1/namespaces/watch-1110/configmaps/e2e-watch-test-label-changed 4f0e79c2-766f-4543-8125-16a8200b8355 6675826 0 2020-08-04 11:14:49 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-08-04 11:14:49 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Aug  4 11:14:59.907: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-1110 /api/v1/namespaces/watch-1110/configmaps/e2e-watch-test-label-changed 4f0e79c2-766f-4543-8125-16a8200b8355 6675875 0 2020-08-04 11:14:49 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-08-04 11:14:59 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug  4 11:14:59.908: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-1110 /api/v1/namespaces/watch-1110/configmaps/e2e-watch-test-label-changed 4f0e79c2-766f-4543-8125-16a8200b8355 6675876 0 2020-08-04 11:14:49 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-08-04 11:14:59 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug  4 11:14:59.908: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-1110 /api/v1/namespaces/watch-1110/configmaps/e2e-watch-test-label-changed 4f0e79c2-766f-4543-8125-16a8200b8355 6675877 0 2020-08-04 11:14:49 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-08-04 11:14:59 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:14:59.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1110" for this suite.

• [SLOW TEST:10.132 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":275,"completed":162,"skipped":2812,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:14:59.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug  4 11:14:59.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Aug  4 11:15:01.903: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4362 create -f -'
Aug  4 11:15:05.740: INFO: stderr: ""
Aug  4 11:15:05.740: INFO: stdout: "e2e-test-crd-publish-openapi-994-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Aug  4 11:15:05.741: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4362 delete e2e-test-crd-publish-openapi-994-crds test-cr'
Aug  4 11:15:05.863: INFO: stderr: ""
Aug  4 11:15:05.864: INFO: stdout: "e2e-test-crd-publish-openapi-994-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Aug  4 11:15:05.864: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4362 apply -f -'
Aug  4 11:15:06.108: INFO: stderr: ""
Aug  4 11:15:06.108: INFO: stdout: "e2e-test-crd-publish-openapi-994-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Aug  4 11:15:06.108: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4362 delete e2e-test-crd-publish-openapi-994-crds test-cr'
Aug  4 11:15:06.214: INFO: stderr: ""
Aug  4 11:15:06.214: INFO: stdout: "e2e-test-crd-publish-openapi-994-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Aug  4 11:15:06.214: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-994-crds'
Aug  4 11:15:06.498: INFO: stderr: ""
Aug  4 11:15:06.498: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-994-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:15:08.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4362" for this suite.

• [SLOW TEST:8.541 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":275,"completed":163,"skipped":2822,"failed":0}
SSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:15:08.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service multi-endpoint-test in namespace services-8919
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8919 to expose endpoints map[]
Aug  4 11:15:08.722: INFO: Get endpoints failed (6.956584ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Aug  4 11:15:09.725: INFO: successfully validated that service multi-endpoint-test in namespace services-8919 exposes endpoints map[] (1.010376658s elapsed)
STEP: Creating pod pod1 in namespace services-8919
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8919 to expose endpoints map[pod1:[100]]
Aug  4 11:15:12.796: INFO: successfully validated that service multi-endpoint-test in namespace services-8919 exposes endpoints map[pod1:[100]] (3.06223503s elapsed)
STEP: Creating pod pod2 in namespace services-8919
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8919 to expose endpoints map[pod1:[100] pod2:[101]]
Aug  4 11:15:16.938: INFO: successfully validated that service multi-endpoint-test in namespace services-8919 exposes endpoints map[pod1:[100] pod2:[101]] (4.137817903s elapsed)
STEP: Deleting pod pod1 in namespace services-8919
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8919 to expose endpoints map[pod2:[101]]
Aug  4 11:15:18.004: INFO: successfully validated that service multi-endpoint-test in namespace services-8919 exposes endpoints map[pod2:[101]] (1.061428858s elapsed)
STEP: Deleting pod pod2 in namespace services-8919
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8919 to expose endpoints map[]
Aug  4 11:15:19.037: INFO: successfully validated that service multi-endpoint-test in namespace services-8919 exposes endpoints map[] (1.028451215s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:15:19.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8919" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:10.614 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":275,"completed":164,"skipped":2825,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:15:19.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap configmap-299/configmap-test-e70ecd2a-48e1-406d-9e1a-d713f6c5ed46
STEP: Creating a pod to test consume configMaps
Aug  4 11:15:19.169: INFO: Waiting up to 5m0s for pod "pod-configmaps-536d8e1b-a393-4c6e-af8c-0743dd1f0717" in namespace "configmap-299" to be "Succeeded or Failed"
Aug  4 11:15:19.193: INFO: Pod "pod-configmaps-536d8e1b-a393-4c6e-af8c-0743dd1f0717": Phase="Pending", Reason="", readiness=false. Elapsed: 24.777624ms
Aug  4 11:15:21.200: INFO: Pod "pod-configmaps-536d8e1b-a393-4c6e-af8c-0743dd1f0717": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031001378s
Aug  4 11:15:23.204: INFO: Pod "pod-configmaps-536d8e1b-a393-4c6e-af8c-0743dd1f0717": Phase="Running", Reason="", readiness=true. Elapsed: 4.03541709s
Aug  4 11:15:25.209: INFO: Pod "pod-configmaps-536d8e1b-a393-4c6e-af8c-0743dd1f0717": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.040146216s
STEP: Saw pod success
Aug  4 11:15:25.209: INFO: Pod "pod-configmaps-536d8e1b-a393-4c6e-af8c-0743dd1f0717" satisfied condition "Succeeded or Failed"
Aug  4 11:15:25.212: INFO: Trying to get logs from node kali-worker pod pod-configmaps-536d8e1b-a393-4c6e-af8c-0743dd1f0717 container env-test: 
STEP: delete the pod
Aug  4 11:15:25.266: INFO: Waiting for pod pod-configmaps-536d8e1b-a393-4c6e-af8c-0743dd1f0717 to disappear
Aug  4 11:15:25.272: INFO: Pod pod-configmaps-536d8e1b-a393-4c6e-af8c-0743dd1f0717 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:15:25.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-299" for this suite.

• [SLOW TEST:6.209 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":165,"skipped":2837,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:15:25.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug  4 11:15:25.345: INFO: Creating deployment "webserver-deployment"
Aug  4 11:15:25.350: INFO: Waiting for observed generation 1
Aug  4 11:15:27.530: INFO: Waiting for all required pods to come up
Aug  4 11:15:27.534: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Aug  4 11:15:37.551: INFO: Waiting for deployment "webserver-deployment" to complete
Aug  4 11:15:37.557: INFO: Updating deployment "webserver-deployment" with a non-existent image
Aug  4 11:15:37.563: INFO: Updating deployment webserver-deployment
Aug  4 11:15:37.563: INFO: Waiting for observed generation 2
Aug  4 11:15:39.607: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Aug  4 11:15:39.610: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Aug  4 11:15:39.613: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Aug  4 11:15:39.620: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Aug  4 11:15:39.620: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Aug  4 11:15:39.622: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Aug  4 11:15:39.626: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Aug  4 11:15:39.626: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Aug  4 11:15:39.632: INFO: Updating deployment webserver-deployment
Aug  4 11:15:39.632: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Aug  4 11:15:39.811: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Aug  4 11:15:39.837: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
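(The verified numbers follow from proportional scaling: with maxSurge=3 the scaled-up Deployment may run at most 30+3=33 pods. Before the scale-up the old and new ReplicaSets held 8 and 5 replicas, 13 in total, so the 33-13=20 additional replicas are distributed roughly in proportion to those sizes, 8:5, i.e. 12 more for the old ReplicaSet and 8 more for the new one, yielding .spec.replicas of 20 and 13.)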
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Aug  4 11:15:40.190: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-6524 /apis/apps/v1/namespaces/deployment-6524/deployments/webserver-deployment d4287286-44a9-47ca-bcf3-02259d7c2c59 6676292 3 2020-08-04 11:15:25 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-08-04 11:15:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-08-04 11:15:39 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 
105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 110 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004212cd8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2020-08-04 11:15:37 +0000 UTC,LastTransitionTime:2020-08-04 11:15:25 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-04 11:15:39 +0000 UTC,LastTransitionTime:2020-08-04 11:15:39 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

Aug  4 11:15:40.248: INFO: New ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4  deployment-6524 /apis/apps/v1/namespaces/deployment-6524/replicasets/webserver-deployment-6676bcd6d4 32369ef4-170e-47f8-b202-6924de29a08a 6676351 3 2020-08-04 11:15:37 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment d4287286-44a9-47ca-bcf3-02259d7c2c59 0xc004213197 0xc004213198}] []  [{kube-controller-manager Update apps/v1 2020-08-04 11:15:40 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 52 50 56 55 50 56 54 45 52 52 97 57 45 52 55 99 97 45 98 99 102 51 45 48 50 50 53 57 100 55 99 50 99 53 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 
125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004213218  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug  4 11:15:40.248: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Aug  4 11:15:40.248: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797  deployment-6524 /apis/apps/v1/namespaces/deployment-6524/replicasets/webserver-deployment-84855cf797 a8327a16-0b3d-4265-82e5-9f551419e26c 6676327 3 2020-08-04 11:15:25 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment d4287286-44a9-47ca-bcf3-02259d7c2c59 0xc004213277 0xc004213278}] []  [{kube-controller-manager Update apps/v1 2020-08-04 11:15:40 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 52 50 56 55 50 56 54 45 52 52 97 57 45 52 55 99 97 45 98 99 102 51 45 48 50 50 53 57 100 55 99 50 99 53 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 
105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0042132e8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Aug  4 11:15:40.350: INFO: Pod "webserver-deployment-6676bcd6d4-6dc6s" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-6dc6s webserver-deployment-6676bcd6d4- deployment-6524 /api/v1/namespaces/deployment-6524/pods/webserver-deployment-6676bcd6d4-6dc6s 56ca06ad-448d-4845-928c-a2dd90b2d390 6676269 0 2020-08-04 11:15:37 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 32369ef4-170e-47f8-b202-6924de29a08a 0xc004213817 0xc004213818}] []  [{kube-controller-manager Update v1 2020-08-04 11:15:37 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 50 51 54 57 101 102 52 45 49 55 48 101 45 52 55 102 56 45 98 50 48 50 45 54 57 50 52 100 101 50 57 97 48 56 97 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-04 11:15:38 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwk5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwk5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Run
timeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-04 11:15:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug  4 11:15:40.350: INFO: Pod "webserver-deployment-6676bcd6d4-7vztq" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-7vztq webserver-deployment-6676bcd6d4- deployment-6524 /api/v1/namespaces/deployment-6524/pods/webserver-deployment-6676bcd6d4-7vztq 2fccd738-b030-4cad-8a2b-6da523df6b17 6676340 0 2020-08-04 11:15:40 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 32369ef4-170e-47f8-b202-6924de29a08a 0xc0042139c7 0xc0042139c8}] []  [{kube-controller-manager Update v1 2020-08-04 11:15:40 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 50 51 54 57 101 102 52 45 49 55 48 101 45 52 55 102 56 45 98 50 48 50 45 54 57 50 52 100 101 50 57 97 48 56 97 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwk5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwk5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug  4 11:15:40.351: INFO: Pod "webserver-deployment-6676bcd6d4-8ckmr" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-8ckmr webserver-deployment-6676bcd6d4- deployment-6524 /api/v1/namespaces/deployment-6524/pods/webserver-deployment-6676bcd6d4-8ckmr 34dc4af0-d4aa-4a5a-9ffb-466c019ec3c1 6676257 0 2020-08-04 11:15:37 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 32369ef4-170e-47f8-b202-6924de29a08a 0xc004213b07 0xc004213b08}] []  [{kube-controller-manager Update v1 2020-08-04 11:15:37 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 50 51 54 57 101 102 52 45 49 55 48 101 45 52 55 102 56 45 98 50 48 50 45 54 57 50 52 100 101 50 57 97 48 56 97 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-04 11:15:37 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwk5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwk5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Run
timeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-04 11:15:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug  4 11:15:40.351: INFO: Pod "webserver-deployment-6676bcd6d4-9zr9p" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-9zr9p webserver-deployment-6676bcd6d4- deployment-6524 /api/v1/namespaces/deployment-6524/pods/webserver-deployment-6676bcd6d4-9zr9p 30b21df4-4158-4210-8ff9-84b0c1030e6c 6676309 0 2020-08-04 11:15:39 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 32369ef4-170e-47f8-b202-6924de29a08a 0xc004213cb7 0xc004213cb8}] []  [{kube-controller-manager Update v1 2020-08-04 11:15:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 50 51 54 57 101 102 52 45 49 55 48 101 45 52 55 102 56 45 98 50 48 50 45 54 57 50 52 100 101 50 57 97 48 56 97 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwk5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwk5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug  4 11:15:40.351: INFO: Pod "webserver-deployment-6676bcd6d4-fk9bj" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-fk9bj webserver-deployment-6676bcd6d4- deployment-6524 /api/v1/namespaces/deployment-6524/pods/webserver-deployment-6676bcd6d4-fk9bj d0abfa18-dadc-4fea-9059-64646a63f4b3 6676322 0 2020-08-04 11:15:39 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 32369ef4-170e-47f8-b202-6924de29a08a 0xc004213df7 0xc004213df8}] []  [{kube-controller-manager Update v1 2020-08-04 11:15:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 50 51 54 57 101 102 52 45 49 55 48 101 45 52 55 102 56 45 98 50 48 50 45 54 57 50 52 100 101 50 57 97 48 56 97 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwk5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwk5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug  4 11:15:40.351: INFO: Pod "webserver-deployment-6676bcd6d4-kkc6n" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-kkc6n webserver-deployment-6676bcd6d4- deployment-6524 /api/v1/namespaces/deployment-6524/pods/webserver-deployment-6676bcd6d4-kkc6n 5ce69721-fcba-4757-9ce2-23d905fa3a88 6676330 0 2020-08-04 11:15:39 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 32369ef4-170e-47f8-b202-6924de29a08a 0xc004213f37 0xc004213f38}] []  [{kube-controller-manager Update v1 2020-08-04 11:15:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 50 51 54 57 101 102 52 45 49 55 48 101 45 52 55 102 56 45 98 50 48 50 45 54 57 50 52 100 101 50 57 97 48 56 97 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwk5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwk5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug  4 11:15:40.352: INFO: Pod "webserver-deployment-6676bcd6d4-nbk7h" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-nbk7h webserver-deployment-6676bcd6d4- deployment-6524 /api/v1/namespaces/deployment-6524/pods/webserver-deployment-6676bcd6d4-nbk7h 45a07f65-5748-450e-ae61-60fff615c1ea 6676242 0 2020-08-04 11:15:37 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 32369ef4-170e-47f8-b202-6924de29a08a 0xc00263c4a7 0xc00263c4a8}] []  [{kube-controller-manager Update v1 2020-08-04 11:15:37 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 50 51 54 57 101 102 52 45 49 55 48 101 45 52 55 102 56 45 98 50 48 50 45 54 57 50 52 100 101 50 57 97 48 56 97 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-04 11:15:37 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwk5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwk5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Runt
imeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-04 11:15:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
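Each of these pods is reported as not available because its httpd container is still Waiting (ContainerCreating) and the pod's Ready condition is False (ContainersNotReady), so the phase stays Pending. A deployment only counts a pod as available once its Ready condition is True and has stayed True for minReadySeconds. A rough, illustrative re-implementation of that check against the core/v1 types (not the framework's own helper):

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// isPodAvailable: a pod counts as "available" when its Ready condition is True
// and has been True for at least minReadySeconds.
func isPodAvailable(pod *corev1.Pod, minReadySeconds int32, now metav1.Time) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type != corev1.PodReady || c.Status != corev1.ConditionTrue {
			continue
		}
		if minReadySeconds == 0 {
			return true
		}
		return now.Time.Sub(c.LastTransitionTime.Time) >= time.Duration(minReadySeconds)*time.Second
	}
	return false
}

func main() {
	// Mirrors the dumps above: Ready is False, so the pod is not available.
	pod := &corev1.Pod{Status: corev1.PodStatus{
		Conditions: []corev1.PodCondition{{Type: corev1.PodReady, Status: corev1.ConditionFalse}},
	}}
	fmt.Println(isPodAvailable(pod, 0, metav1.Now())) // false
}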
Aug  4 11:15:40.352: INFO: Pod "webserver-deployment-6676bcd6d4-ppznw" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-ppznw webserver-deployment-6676bcd6d4- deployment-6524 /api/v1/namespaces/deployment-6524/pods/webserver-deployment-6676bcd6d4-ppznw 46cfdd04-2ad0-46ee-8fc7-9a233964d641 6676306 0 2020-08-04 11:15:39 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 32369ef4-170e-47f8-b202-6924de29a08a 0xc00263c6c7 0xc00263c6c8}] []  [{kube-controller-manager Update v1 2020-08-04 11:15:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 50 51 54 57 101 102 52 45 49 55 48 101 45 52 55 102 56 45 98 50 48 50 45 54 57 50 52 100 101 50 57 97 48 56 97 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwk5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwk5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug  4 11:15:40.352: INFO: Pod "webserver-deployment-6676bcd6d4-pz5kb" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-pz5kb webserver-deployment-6676bcd6d4- deployment-6524 /api/v1/namespaces/deployment-6524/pods/webserver-deployment-6676bcd6d4-pz5kb 9b444174-7fac-4e9e-a211-996a1b21cf4a 6676329 0 2020-08-04 11:15:39 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 32369ef4-170e-47f8-b202-6924de29a08a 0xc00263c807 0xc00263c808}] []  [{kube-controller-manager Update v1 2020-08-04 11:15:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 50 51 54 57 101 102 52 45 49 55 48 101 45 52 55 102 56 45 98 50 48 50 45 54 57 50 52 100 101 50 57 97 48 56 97 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwk5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwk5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug  4 11:15:40.352: INFO: Pod "webserver-deployment-6676bcd6d4-qz467" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-qz467 webserver-deployment-6676bcd6d4- deployment-6524 /api/v1/namespaces/deployment-6524/pods/webserver-deployment-6676bcd6d4-qz467 7710a8cd-3c55-478d-a0fe-b2d4ded07bd1 6676291 0 2020-08-04 11:15:39 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 32369ef4-170e-47f8-b202-6924de29a08a 0xc00263c957 0xc00263c958}] []  [{kube-controller-manager Update v1 2020-08-04 11:15:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 50 51 54 57 101 102 52 45 49 55 48 101 45 52 55 102 56 45 98 50 48 50 45 54 57 50 52 100 101 50 57 97 48 56 97 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwk5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwk5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug  4 11:15:40.353: INFO: Pod "webserver-deployment-6676bcd6d4-rdh9c" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-rdh9c webserver-deployment-6676bcd6d4- deployment-6524 /api/v1/namespaces/deployment-6524/pods/webserver-deployment-6676bcd6d4-rdh9c ab7bafde-e3ee-4e26-aca6-b170a8922f95 6676267 0 2020-08-04 11:15:37 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 32369ef4-170e-47f8-b202-6924de29a08a 0xc00263ca97 0xc00263ca98}] []  [{kube-controller-manager Update v1 2020-08-04 11:15:37 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 50 51 54 57 101 102 52 45 49 55 48 101 45 52 55 102 56 45 98 50 48 50 45 54 57 50 52 100 101 50 57 97 48 56 97 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-04 11:15:37 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwk5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwk5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Runt
imeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-04 11:15:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug  4 11:15:40.353: INFO: Pod "webserver-deployment-6676bcd6d4-s2zc5" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-s2zc5 webserver-deployment-6676bcd6d4- deployment-6524 /api/v1/namespaces/deployment-6524/pods/webserver-deployment-6676bcd6d4-s2zc5 40ab741e-2fa9-44d3-abb7-700141b66cde 6676336 0 2020-08-04 11:15:39 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 32369ef4-170e-47f8-b202-6924de29a08a 0xc00263cc47 0xc00263cc48}] []  [{kube-controller-manager Update v1 2020-08-04 11:15:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 50 51 54 57 101 102 52 45 49 55 48 101 45 52 55 102 56 45 98 50 48 50 45 54 57 50 52 100 101 50 57 97 48 56 97 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwk5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwk5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug  4 11:15:40.353: INFO: Pod "webserver-deployment-6676bcd6d4-zx4hf" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-zx4hf webserver-deployment-6676bcd6d4- deployment-6524 /api/v1/namespaces/deployment-6524/pods/webserver-deployment-6676bcd6d4-zx4hf 8f80be42-f505-40d0-89eb-a68a9e78d641 6676266 0 2020-08-04 11:15:37 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 32369ef4-170e-47f8-b202-6924de29a08a 0xc00263cda7 0xc00263cda8}] []  [{kube-controller-manager Update v1 2020-08-04 11:15:37 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 50 51 54 57 101 102 52 45 49 55 48 101 45 52 55 102 56 45 98 50 48 50 45 54 57 50 52 100 101 50 57 97 48 56 97 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-04 11:15:37 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwk5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwk5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Run
timeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-04 11:15:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
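Note on reading these dumps: the FieldsV1{Raw:*[123 34 102 ...]} blocks are the server-side-apply managedFields payload printed as decimal byte values (Go's default formatting for a []byte). The content is ordinary JSON describing which fields each manager (kube-controller-manager, kubelet) owns. A minimal, hypothetical Go sketch for turning such a dump back into readable JSON; the decodeRaw helper and the sample input are illustrative, not part of the test framework:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// decodeRaw converts a space-separated list of decimal byte values,
// as printed in the pod dumps above, back into the JSON string it encodes.
func decodeRaw(dump string) (string, error) {
	fields := strings.Fields(dump)
	buf := make([]byte, 0, len(fields))
	for _, f := range fields {
		n, err := strconv.Atoi(f)
		if err != nil {
			return "", fmt.Errorf("not a byte value: %q", f)
		}
		buf = append(buf, byte(n))
	}
	return string(buf), nil
}

func main() {
	// First few bytes of a managedFields entry; decodes to `{"f:metadata":{`.
	sample := "123 34 102 58 109 101 116 97 100 97 116 97 34 58 123"
	s, err := decodeRaw(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(s) // {"f:metadata":{
}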
Aug  4 11:15:40.353: INFO: Pod "webserver-deployment-84855cf797-2524d" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-2524d webserver-deployment-84855cf797- deployment-6524 /api/v1/namespaces/deployment-6524/pods/webserver-deployment-84855cf797-2524d 61e5cd7f-e696-44a1-ad4f-f12309f62fc9 6676333 0 2020-08-04 11:15:39 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a8327a16-0b3d-4265-82e5-9f551419e26c 0xc00263cf57 0xc00263cf58}] []  [{kube-controller-manager Update v1 2020-08-04 11:15:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 56 51 50 55 97 49 54 45 48 98 51 100 45 52 50 54 53 45 56 50 101 53 45 57 102 53 53 49 52 49 57 101 50 54 99 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwk5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwk5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug  4 11:15:40.353: INFO: Pod "webserver-deployment-84855cf797-524wd" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-524wd webserver-deployment-84855cf797- deployment-6524 /api/v1/namespaces/deployment-6524/pods/webserver-deployment-84855cf797-524wd 0213768d-6bcf-470d-adbb-4cde5520f650 6676341 0 2020-08-04 11:15:39 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a8327a16-0b3d-4265-82e5-9f551419e26c 0xc00263d087 0xc00263d088}] []  [{kube-controller-manager Update v1 2020-08-04 11:15:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 56 51 50 55 97 49 54 45 48 98 51 100 45 52 50 54 53 45 56 50 101 53 45 57 102 53 53 49 52 49 57 101 50 54 99 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-04 11:15:40 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwk5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwk5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:
[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-04 11:15:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
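These dumps are emitted while the test waits for the "webserver-deployment" rollout in namespace deployment-6524 to report enough available replicas. A rough client-go sketch of that kind of wait loop, under assumptions: the kubeconfig path is taken from the log, the desired replica count (8 here) is illustrative, and the real e2e framework uses its own richer helpers rather than this function:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForAvailable polls a Deployment until Status.AvailableReplicas reaches
// the desired count or the timeout expires.
func waitForAvailable(client kubernetes.Interface, ns, name string, want int32, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		d, err := client.AppsV1().Deployments(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if d.Status.AvailableReplicas >= want {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("deployment %s/%s did not reach %d available replicas within %v", ns, name, want, timeout)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Names taken from the log above; the replica count is an assumed example value.
	if err := waitForAvailable(client, "deployment-6524", "webserver-deployment", 8, 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("deployment available")
}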
Aug  4 11:15:40.354: INFO: Pod "webserver-deployment-84855cf797-6dk2p" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-6dk2p webserver-deployment-84855cf797- deployment-6524 /api/v1/namespaces/deployment-6524/pods/webserver-deployment-84855cf797-6dk2p 8d8c0b32-82e9-4079-83a0-550174994d2a 6676169 0 2020-08-04 11:15:25 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a8327a16-0b3d-4265-82e5-9f551419e26c 0xc00263d217 0xc00263d218}] []  [{kube-controller-manager Update v1 2020-08-04 11:15:25 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 56 51 50 55 97 49 54 45 48 98 51 100 45 52 50 54 53 45 56 50 101 53 45 57 102 53 53 49 52 49 57 101 50 54 99 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-04 11:15:34 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 
58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwk5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwk5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAl
iases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.7,StartTime:2020-08-04 11:15:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-04 11:15:33 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://64d17fcc3e28eb07d367d77b2297f10397bb596af200494dcce8c8b60e91068e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.7,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
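The "is available" / "is not available" labels on these pods follow the deployment controller's availability rule: a pod counts as available once its Ready condition is True and has stayed True for at least the deployment's minReadySeconds. A small illustrative re-implementation of that check (not the framework's own helper), using the k8s.io/api types:

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// isPodAvailable returns true when the pod is Ready and has been Ready
// for at least minReadySeconds as of `now`.
func isPodAvailable(pod *corev1.Pod, minReadySeconds int32, now time.Time) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type != corev1.PodReady || c.Status != corev1.ConditionTrue {
			continue
		}
		if minReadySeconds == 0 {
			return true
		}
		readyFor := now.Sub(c.LastTransitionTime.Time)
		return readyFor >= time.Duration(minReadySeconds)*time.Second
	}
	return false
}

func main() {
	// A pod that became Ready 30 seconds ago, as in the Running dumps above.
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Conditions: []corev1.PodCondition{{
				Type:               corev1.PodReady,
				Status:             corev1.ConditionTrue,
				LastTransitionTime: metav1.NewTime(time.Now().Add(-30 * time.Second)),
			}},
		},
	}
	fmt.Println(isPodAvailable(pod, 10, time.Now())) // true
}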
Aug  4 11:15:40.354: INFO: Pod "webserver-deployment-84855cf797-8xcqm" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-8xcqm webserver-deployment-84855cf797- deployment-6524 /api/v1/namespaces/deployment-6524/pods/webserver-deployment-84855cf797-8xcqm d7b79478-cb8e-4bf1-921f-a0d1c6db6ab3 6676159 0 2020-08-04 11:15:25 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a8327a16-0b3d-4265-82e5-9f551419e26c 0xc00263d3d7 0xc00263d3d8}] []  [{kube-controller-manager Update v1 2020-08-04 11:15:25 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 56 51 50 55 97 49 54 45 48 98 51 100 45 52 50 54 53 45 56 50 101 53 45 57 102 53 53 49 52 49 57 101 50 54 99 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-04 11:15:32 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 
58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 49 56 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwk5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwk5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},}
,HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.186,StartTime:2020-08-04 11:15:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-04 11:15:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://33cee918926678f08c16cc5cca4926cf712b94d14a99f3355d670fda42b4c7d1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.186,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug  4 11:15:40.354: INFO: Pod "webserver-deployment-84855cf797-f962b" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-f962b webserver-deployment-84855cf797- deployment-6524 /api/v1/namespaces/deployment-6524/pods/webserver-deployment-84855cf797-f962b 42fde53c-87d2-43a8-8632-b238db21b1b4 6676294 0 2020-08-04 11:15:39 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a8327a16-0b3d-4265-82e5-9f551419e26c 0xc00263d997 0xc00263d998}] []  [{kube-controller-manager Update v1 2020-08-04 11:15:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 56 51 50 55 97 49 54 45 48 98 51 100 45 52 50 54 53 45 56 50 101 53 45 57 102 53 53 49 52 49 57 101 50 54 99 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwk5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwk5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug  4 11:15:40.354: INFO: Pod "webserver-deployment-84855cf797-flft2" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-flft2 webserver-deployment-84855cf797- deployment-6524 /api/v1/namespaces/deployment-6524/pods/webserver-deployment-84855cf797-flft2 e44faebb-766b-4725-b087-632992a943f7 6676304 0 2020-08-04 11:15:39 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a8327a16-0b3d-4265-82e5-9f551419e26c 0xc004ea0057 0xc004ea0058}] []  [{kube-controller-manager Update v1 2020-08-04 11:15:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 56 51 50 55 97 49 54 45 48 98 51 100 45 52 50 54 53 45 56 50 101 53 45 57 102 53 53 49 52 49 57 101 50 54 99 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwk5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwk5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
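Each dump in this block belongs to ReplicaSet webserver-deployment-84855cf797, selected by the labels name=httpd and pod-template-hash=84855cf797 shown in the ObjectMeta. A hedged client-go sketch of listing those pods the same way, assuming the kubeconfig path from the log and a client-go version with context-aware List (v0.18+):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Namespace and label selector taken from the pod dumps above.
	pods, err := client.CoreV1().Pods("deployment-6524").List(context.TODO(), metav1.ListOptions{
		LabelSelector: "name=httpd,pod-template-hash=84855cf797",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s phase=%s node=%s\n", p.Name, p.Status.Phase, p.Spec.NodeName)
	}
}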
Aug  4 11:15:40.354: INFO: Pod "webserver-deployment-84855cf797-jfgr8" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-jfgr8 webserver-deployment-84855cf797- deployment-6524 /api/v1/namespaces/deployment-6524/pods/webserver-deployment-84855cf797-jfgr8 1bf7e9b4-4e29-42a8-a784-1db2ae1ba708 6676207 0 2020-08-04 11:15:25 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a8327a16-0b3d-4265-82e5-9f551419e26c 0xc004ea0187 0xc004ea0188}] []  [{kube-controller-manager Update v1 2020-08-04 11:15:25 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 56 51 50 55 97 49 54 45 48 98 51 100 45 52 50 54 53 45 56 50 101 53 45 57 102 53 53 49 52 49 57 101 50 54 99 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-04 11:15:36 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 
58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwk5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwk5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAl
iases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.9,StartTime:2020-08-04 11:15:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-04 11:15:35 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ae72fcf196ad54328ee4151d41a110bdd51172dabc3c1e7462ddc655df5c2aac,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.9,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug  4 11:15:40.355: INFO: Pod "webserver-deployment-84855cf797-jlznm" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-jlznm webserver-deployment-84855cf797- deployment-6524 /api/v1/namespaces/deployment-6524/pods/webserver-deployment-84855cf797-jlznm 45e674f8-2350-4158-9ae1-acba16fd33bf 6676334 0 2020-08-04 11:15:39 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a8327a16-0b3d-4265-82e5-9f551419e26c 0xc004ea0337 0xc004ea0338}] []  [{kube-controller-manager Update v1 2020-08-04 11:15:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 56 51 50 55 97 49 54 45 48 98 51 100 45 52 50 54 53 45 56 50 101 53 45 57 102 53 53 49 52 49 57 101 50 54 99 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwk5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwk5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
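The FieldsV1{Raw:*[123 34 102 ...]} blocks inside these pod dumps are managedFields entries printed as a decimal byte slice; each array is just the UTF-8 bytes of a small JSON document recording which manager (kube-controller-manager or kubelet) owns which fields. A minimal Go sketch for turning such an array back into readable JSON; the sample bytes below are a short hypothetical stand-in, not copied from any particular pod above:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

func main() {
	// Hypothetical sample: the UTF-8 bytes of `{"f:metadata":{"f:labels":{}}}`,
	// mimicking the much longer Raw arrays printed in the pod dumps.
	raw := []byte{123, 34, 102, 58, 109, 101, 116, 97, 100, 97, 116, 97, 34, 58, 123,
		34, 102, 58, 108, 97, 98, 101, 108, 115, 34, 58, 123, 125, 125, 125}

	// The byte slice is already JSON; pretty-print it for readability.
	var pretty bytes.Buffer
	if err := json.Indent(&pretty, raw, "", "  "); err != nil {
		fmt.Println("not valid JSON:", err)
		return
	}
	fmt.Println(pretty.String())
}

Running the sketch on the sample prints the indented managed-fields JSON, in this case {"f:metadata": {"f:labels": {}}}.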
Aug  4 11:15:40.355: INFO: Pod "webserver-deployment-84855cf797-jwq7z" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-jwq7z webserver-deployment-84855cf797- deployment-6524 /api/v1/namespaces/deployment-6524/pods/webserver-deployment-84855cf797-jwq7z 8ea503b0-c404-497f-bb06-0d2b4ac1b42e 6676189 0 2020-08-04 11:15:25 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a8327a16-0b3d-4265-82e5-9f551419e26c 0xc004ea0467 0xc004ea0468}] []  [{kube-controller-manager Update v1 2020-08-04 11:15:25 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 56 51 50 55 97 49 54 45 48 98 51 100 45 52 50 54 53 45 56 50 101 53 45 57 102 53 53 49 52 49 57 101 50 54 99 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-04 11:15:35 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 
58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwk5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwk5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAl
iases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.8,StartTime:2020-08-04 11:15:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-04 11:15:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5ef571f7a54b311a779a1618ac569244f653c42c6d4dd02522f24cca0579e54e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.8,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug  4 11:15:40.355: INFO: Pod "webserver-deployment-84855cf797-l5n8h" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-l5n8h webserver-deployment-84855cf797- deployment-6524 /api/v1/namespaces/deployment-6524/pods/webserver-deployment-84855cf797-l5n8h 56b922ae-1647-4e0a-8722-06fb98f67c82 6676332 0 2020-08-04 11:15:39 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a8327a16-0b3d-4265-82e5-9f551419e26c 0xc004ea0617 0xc004ea0618}] []  [{kube-controller-manager Update v1 2020-08-04 11:15:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 56 51 50 55 97 49 54 45 48 98 51 100 45 52 50 54 53 45 56 50 101 53 45 57 102 53 53 49 52 49 57 101 50 54 99 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwk5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwk5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug  4 11:15:40.355: INFO: Pod "webserver-deployment-84855cf797-pk22g" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-pk22g webserver-deployment-84855cf797- deployment-6524 /api/v1/namespaces/deployment-6524/pods/webserver-deployment-84855cf797-pk22g 238a710f-cab9-47c8-b9fd-e95371d4a360 6676198 0 2020-08-04 11:15:25 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a8327a16-0b3d-4265-82e5-9f551419e26c 0xc004ea0747 0xc004ea0748}] []  [{kube-controller-manager Update v1 2020-08-04 11:15:25 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 56 51 50 55 97 49 54 45 48 98 51 100 45 52 50 54 53 45 56 50 101 53 45 57 102 53 53 49 52 49 57 101 50 54 99 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-04 11:15:36 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 
58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 49 56 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwk5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwk5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},}
,HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.187,StartTime:2020-08-04 11:15:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-04 11:15:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7fe07b6d68c6292fa553a2a511a4e11462c8b3b849f8d3728e1f3f7b483ce684,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.187,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug  4 11:15:40.355: INFO: Pod "webserver-deployment-84855cf797-qrf7p" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-qrf7p webserver-deployment-84855cf797- deployment-6524 /api/v1/namespaces/deployment-6524/pods/webserver-deployment-84855cf797-qrf7p a6c8bbcf-f554-4188-aa36-0997011839b9 6676335 0 2020-08-04 11:15:39 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a8327a16-0b3d-4265-82e5-9f551419e26c 0xc004ea08f7 0xc004ea08f8}] []  [{kube-controller-manager Update v1 2020-08-04 11:15:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 56 51 50 55 97 49 54 45 48 98 51 100 45 52 50 54 53 45 56 50 101 53 45 57 102 53 53 49 52 49 57 101 50 54 99 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwk5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwk5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug  4 11:15:40.356: INFO: Pod "webserver-deployment-84855cf797-qs2tv" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-qs2tv webserver-deployment-84855cf797- deployment-6524 /api/v1/namespaces/deployment-6524/pods/webserver-deployment-84855cf797-qs2tv 6adbecd1-59a1-497b-84b2-04a6341c409d 6676308 0 2020-08-04 11:15:39 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a8327a16-0b3d-4265-82e5-9f551419e26c 0xc004ea0a27 0xc004ea0a28}] []  [{kube-controller-manager Update v1 2020-08-04 11:15:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 56 51 50 55 97 49 54 45 48 98 51 100 45 52 50 54 53 45 56 50 101 53 45 57 102 53 53 49 52 49 57 101 50 54 99 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwk5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwk5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug  4 11:15:40.356: INFO: Pod "webserver-deployment-84855cf797-r2fgl" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-r2fgl webserver-deployment-84855cf797- deployment-6524 /api/v1/namespaces/deployment-6524/pods/webserver-deployment-84855cf797-r2fgl 6022273a-1673-4c0f-8b65-15638c6f3edd 6676355 0 2020-08-04 11:15:39 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a8327a16-0b3d-4265-82e5-9f551419e26c 0xc004ea0b57 0xc004ea0b58}] []  [{kube-controller-manager Update v1 2020-08-04 11:15:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 56 51 50 55 97 49 54 45 48 98 51 100 45 52 50 54 53 45 56 50 101 53 45 57 102 53 53 49 52 49 57 101 50 54 99 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-04 11:15:40 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwk5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwk5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:
[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-04 11:15:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
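Pods logged as "is not available", such as webserver-deployment-84855cf797-r2fgl directly above, are scheduled but still in ContainerCreating, so their Ready and ContainersReady conditions are False, while the "is available" pods are Running with Ready=True. A rough Go sketch of that kind of check, assuming availability is judged from the PodReady condition alone (illustrative only, not the e2e framework's own helper):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod's Ready condition is currently True.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Hypothetical pod with only PodScheduled=True, like the Pending pods above.
	pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
		{Type: corev1.PodScheduled, Status: corev1.ConditionTrue},
	}}}
	fmt.Println(isPodReady(pod)) // false: no Ready=True condition yet
}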
Aug  4 11:15:40.356: INFO: Pod "webserver-deployment-84855cf797-rxpdb" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-rxpdb webserver-deployment-84855cf797- deployment-6524 /api/v1/namespaces/deployment-6524/pods/webserver-deployment-84855cf797-rxpdb e80f9a3d-b92d-4063-9caa-633134e34d59 6676142 0 2020-08-04 11:15:25 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a8327a16-0b3d-4265-82e5-9f551419e26c 0xc004ea0ce7 0xc004ea0ce8}] []  [{kube-controller-manager Update v1 2020-08-04 11:15:25 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 56 51 50 55 97 49 54 45 48 98 51 100 45 52 50 54 53 45 56 50 101 53 45 57 102 53 53 49 52 49 57 101 50 54 99 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-04 11:15:30 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 
58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 49 56 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwk5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwk5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},}
,HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.185,StartTime:2020-08-04 11:15:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-04 11:15:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7ec41d4e6f6b26581705dd93ff9b6bfa77e2988d63ef7f0313e444e0d4e8f6c9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.185,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug  4 11:15:40.356: INFO: Pod "webserver-deployment-84855cf797-t6gcw" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-t6gcw webserver-deployment-84855cf797- deployment-6524 /api/v1/namespaces/deployment-6524/pods/webserver-deployment-84855cf797-t6gcw 52089cb3-eff2-4792-9485-00aefa35b0e4 6676307 0 2020-08-04 11:15:39 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a8327a16-0b3d-4265-82e5-9f551419e26c 0xc004ea0ea7 0xc004ea0ea8}] []  [{kube-controller-manager Update v1 2020-08-04 11:15:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 56 51 50 55 97 49 54 45 48 98 51 100 45 52 50 54 53 45 56 50 101 53 45 57 102 53 53 49 52 49 57 101 50 54 99 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwk5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwk5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug  4 11:15:40.356: INFO: Pod "webserver-deployment-84855cf797-t7msx" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-t7msx webserver-deployment-84855cf797- deployment-6524 /api/v1/namespaces/deployment-6524/pods/webserver-deployment-84855cf797-t7msx 73512741-4906-488d-b03e-d16b0f5f2ab5 6676331 0 2020-08-04 11:15:39 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a8327a16-0b3d-4265-82e5-9f551419e26c 0xc004ea0fd7 0xc004ea0fd8}] []  [{kube-controller-manager Update v1 2020-08-04 11:15:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 56 51 50 55 97 49 54 45 48 98 51 100 45 52 50 54 53 45 56 50 101 53 45 57 102 53 53 49 52 49 57 101 50 54 99 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwk5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwk5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug  4 11:15:40.357: INFO: Pod "webserver-deployment-84855cf797-t7vss" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-t7vss webserver-deployment-84855cf797- deployment-6524 /api/v1/namespaces/deployment-6524/pods/webserver-deployment-84855cf797-t7vss c440732c-30a3-4663-8d55-cfcec211edb8 6676353 0 2020-08-04 11:15:39 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a8327a16-0b3d-4265-82e5-9f551419e26c 0xc004ea1107 0xc004ea1108}] []  [{kube-controller-manager Update v1 2020-08-04 11:15:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 56 51 50 55 97 49 54 45 48 98 51 100 45 52 50 54 53 45 56 50 101 53 45 57 102 53 53 49 52 49 57 101 50 54 99 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-04 11:15:40 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwk5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwk5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[
]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-04 11:15:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug  4 11:15:40.357: INFO: Pod "webserver-deployment-84855cf797-w7j58" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-w7j58 webserver-deployment-84855cf797- deployment-6524 /api/v1/namespaces/deployment-6524/pods/webserver-deployment-84855cf797-w7j58 305f6642-d0c8-471b-99b8-edae24d555d5 6676191 0 2020-08-04 11:15:25 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a8327a16-0b3d-4265-82e5-9f551419e26c 0xc004ea1297 0xc004ea1298}] []  [{kube-controller-manager Update v1 2020-08-04 11:15:25 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 56 51 50 55 97 49 54 45 48 98 51 100 45 52 50 54 53 45 56 50 101 53 45 57 102 53 53 49 52 49 57 101 50 54 99 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-04 11:15:35 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 
58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 49 56 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwk5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwk5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},}
,HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.189,StartTime:2020-08-04 11:15:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-04 11:15:35 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e888f86fb4e1829331f2f1b442b4a014e11386c0177bc576963da099f1c15d93,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.189,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug  4 11:15:40.357: INFO: Pod "webserver-deployment-84855cf797-xf2vt" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-xf2vt webserver-deployment-84855cf797- deployment-6524 /api/v1/namespaces/deployment-6524/pods/webserver-deployment-84855cf797-xf2vt 9940b558-77ad-417d-b7bb-3e06cc112fe7 6676199 0 2020-08-04 11:15:25 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a8327a16-0b3d-4265-82e5-9f551419e26c 0xc004ea1447 0xc004ea1448}] []  [{kube-controller-manager Update v1 2020-08-04 11:15:25 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 56 51 50 55 97 49 54 45 48 98 51 100 45 52 50 54 53 45 56 50 101 53 45 57 102 53 53 49 52 49 57 101 50 54 99 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-04 11:15:36 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 
58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 49 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlwk5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlwk5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlwk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},Hos
tAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:15:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.11,StartTime:2020-08-04 11:15:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-04 11:15:36 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://d0106e2153ff0517e5a73e4d214791474c880e292022f8bf69918e962878a496,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.11,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:15:40.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6524" for this suite.

• [SLOW TEST:15.239 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":275,"completed":166,"skipped":2851,"failed":0}
SSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:15:40.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service nodeport-test with type=NodePort in namespace services-6980
STEP: creating replication controller nodeport-test in namespace services-6980
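For reference, a NodePort Service equivalent to the nodeport-test service created in the step above can be built with client-go roughly as follows; the selector and port values are assumptions inferred from this run rather than the framework's actual definition:

    // Sketch: create a Service of type NodePort backing the nodeport-test pods.
    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        svc := &corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "nodeport-test"},
            Spec: corev1.ServiceSpec{
                Type:     corev1.ServiceTypeNodePort,
                Selector: map[string]string{"name": "nodeport-test"}, // assumed pod label
                Ports: []corev1.ServicePort{{
                    Port:       80,
                    TargetPort: intstr.FromInt(80),
                }},
            },
        }
        // The API server allocates the node port itself (31624 in the run below).
        if _, err := client.CoreV1().Services("services-6980").Create(context.TODO(), svc, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }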
I0804 11:15:41.222038       7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-6980, replica count: 2
I0804 11:15:44.272547       7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0804 11:15:47.272911       7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0804 11:15:50.273132       7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0804 11:15:53.273342       7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0804 11:15:56.273549       7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0804 11:15:59.273836       7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug  4 11:15:59.273: INFO: Creating new exec pod
Aug  4 11:16:06.398: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-6980 execpodx628p -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Aug  4 11:16:07.052: INFO: stderr: "I0804 11:16:06.986079    1711 log.go:172] (0xc000c551e0) (0xc00096a8c0) Create stream\nI0804 11:16:06.986154    1711 log.go:172] (0xc000c551e0) (0xc00096a8c0) Stream added, broadcasting: 1\nI0804 11:16:06.992279    1711 log.go:172] (0xc000c551e0) Reply frame received for 1\nI0804 11:16:06.992327    1711 log.go:172] (0xc000c551e0) (0xc00052aaa0) Create stream\nI0804 11:16:06.992338    1711 log.go:172] (0xc000c551e0) (0xc00052aaa0) Stream added, broadcasting: 3\nI0804 11:16:06.993399    1711 log.go:172] (0xc000c551e0) Reply frame received for 3\nI0804 11:16:06.993428    1711 log.go:172] (0xc000c551e0) (0xc00096a000) Create stream\nI0804 11:16:06.993438    1711 log.go:172] (0xc000c551e0) (0xc00096a000) Stream added, broadcasting: 5\nI0804 11:16:06.994049    1711 log.go:172] (0xc000c551e0) Reply frame received for 5\nI0804 11:16:07.045305    1711 log.go:172] (0xc000c551e0) Data frame received for 5\nI0804 11:16:07.045322    1711 log.go:172] (0xc00096a000) (5) Data frame handling\nI0804 11:16:07.045338    1711 log.go:172] (0xc00096a000) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0804 11:16:07.045752    1711 log.go:172] (0xc000c551e0) Data frame received for 5\nI0804 11:16:07.045763    1711 log.go:172] (0xc00096a000) (5) Data frame handling\nI0804 11:16:07.045774    1711 log.go:172] (0xc00096a000) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0804 11:16:07.046020    1711 log.go:172] (0xc000c551e0) Data frame received for 3\nI0804 11:16:07.046032    1711 log.go:172] (0xc00052aaa0) (3) Data frame handling\nI0804 11:16:07.046124    1711 log.go:172] (0xc000c551e0) Data frame received for 5\nI0804 11:16:07.046144    1711 log.go:172] (0xc00096a000) (5) Data frame handling\nI0804 11:16:07.047584    1711 log.go:172] (0xc000c551e0) Data frame received for 1\nI0804 11:16:07.047599    1711 log.go:172] (0xc00096a8c0) (1) Data frame handling\nI0804 11:16:07.047617    1711 log.go:172] (0xc00096a8c0) (1) Data frame sent\nI0804 11:16:07.047631    1711 log.go:172] (0xc000c551e0) (0xc00096a8c0) Stream removed, broadcasting: 1\nI0804 11:16:07.047741    1711 log.go:172] (0xc000c551e0) Go away received\nI0804 11:16:07.047907    1711 log.go:172] (0xc000c551e0) (0xc00096a8c0) Stream removed, broadcasting: 1\nI0804 11:16:07.047925    1711 log.go:172] (0xc000c551e0) (0xc00052aaa0) Stream removed, broadcasting: 3\nI0804 11:16:07.047933    1711 log.go:172] (0xc000c551e0) (0xc00096a000) Stream removed, broadcasting: 5\n"
Aug  4 11:16:07.053: INFO: stdout: ""
Aug  4 11:16:07.053: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-6980 execpodx628p -- /bin/sh -x -c nc -zv -t -w 2 10.99.244.210 80'
Aug  4 11:16:08.121: INFO: stderr: "I0804 11:16:08.046878    1732 log.go:172] (0xc0009e4b00) (0xc0006755e0) Create stream\nI0804 11:16:08.047045    1732 log.go:172] (0xc0009e4b00) (0xc0006755e0) Stream added, broadcasting: 1\nI0804 11:16:08.053275    1732 log.go:172] (0xc0009e4b00) Reply frame received for 1\nI0804 11:16:08.053331    1732 log.go:172] (0xc0009e4b00) (0xc000675680) Create stream\nI0804 11:16:08.053347    1732 log.go:172] (0xc0009e4b00) (0xc000675680) Stream added, broadcasting: 3\nI0804 11:16:08.054772    1732 log.go:172] (0xc0009e4b00) Reply frame received for 3\nI0804 11:16:08.054812    1732 log.go:172] (0xc0009e4b00) (0xc000b94000) Create stream\nI0804 11:16:08.054820    1732 log.go:172] (0xc0009e4b00) (0xc000b94000) Stream added, broadcasting: 5\nI0804 11:16:08.055521    1732 log.go:172] (0xc0009e4b00) Reply frame received for 5\nI0804 11:16:08.114721    1732 log.go:172] (0xc0009e4b00) Data frame received for 3\nI0804 11:16:08.114749    1732 log.go:172] (0xc000675680) (3) Data frame handling\nI0804 11:16:08.114781    1732 log.go:172] (0xc0009e4b00) Data frame received for 5\nI0804 11:16:08.114793    1732 log.go:172] (0xc000b94000) (5) Data frame handling\nI0804 11:16:08.114804    1732 log.go:172] (0xc000b94000) (5) Data frame sent\nI0804 11:16:08.114813    1732 log.go:172] (0xc0009e4b00) Data frame received for 5\nI0804 11:16:08.114826    1732 log.go:172] (0xc000b94000) (5) Data frame handling\n+ nc -zv -t -w 2 10.99.244.210 80\nConnection to 10.99.244.210 80 port [tcp/http] succeeded!\nI0804 11:16:08.116328    1732 log.go:172] (0xc0009e4b00) Data frame received for 1\nI0804 11:16:08.116341    1732 log.go:172] (0xc0006755e0) (1) Data frame handling\nI0804 11:16:08.116354    1732 log.go:172] (0xc0006755e0) (1) Data frame sent\nI0804 11:16:08.116363    1732 log.go:172] (0xc0009e4b00) (0xc0006755e0) Stream removed, broadcasting: 1\nI0804 11:16:08.116404    1732 log.go:172] (0xc0009e4b00) Go away received\nI0804 11:16:08.116590    1732 log.go:172] (0xc0009e4b00) (0xc0006755e0) Stream removed, broadcasting: 1\nI0804 11:16:08.116603    1732 log.go:172] (0xc0009e4b00) (0xc000675680) Stream removed, broadcasting: 3\nI0804 11:16:08.116611    1732 log.go:172] (0xc0009e4b00) (0xc000b94000) Stream removed, broadcasting: 5\n"
Aug  4 11:16:08.121: INFO: stdout: ""
Aug  4 11:16:08.121: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-6980 execpodx628p -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 31624'
Aug  4 11:16:08.969: INFO: stderr: "I0804 11:16:08.895167    1752 log.go:172] (0xc000a44000) (0xc0006a1360) Create stream\nI0804 11:16:08.895224    1752 log.go:172] (0xc000a44000) (0xc0006a1360) Stream added, broadcasting: 1\nI0804 11:16:08.898007    1752 log.go:172] (0xc000a44000) Reply frame received for 1\nI0804 11:16:08.898032    1752 log.go:172] (0xc000a44000) (0xc0008d4000) Create stream\nI0804 11:16:08.898042    1752 log.go:172] (0xc000a44000) (0xc0008d4000) Stream added, broadcasting: 3\nI0804 11:16:08.898764    1752 log.go:172] (0xc000a44000) Reply frame received for 3\nI0804 11:16:08.898788    1752 log.go:172] (0xc000a44000) (0xc000376000) Create stream\nI0804 11:16:08.898795    1752 log.go:172] (0xc000a44000) (0xc000376000) Stream added, broadcasting: 5\nI0804 11:16:08.904918    1752 log.go:172] (0xc000a44000) Reply frame received for 5\nI0804 11:16:08.963324    1752 log.go:172] (0xc000a44000) Data frame received for 3\nI0804 11:16:08.963355    1752 log.go:172] (0xc0008d4000) (3) Data frame handling\nI0804 11:16:08.963386    1752 log.go:172] (0xc000a44000) Data frame received for 5\nI0804 11:16:08.963412    1752 log.go:172] (0xc000376000) (5) Data frame handling\nI0804 11:16:08.963434    1752 log.go:172] (0xc000376000) (5) Data frame sent\nI0804 11:16:08.963450    1752 log.go:172] (0xc000a44000) Data frame received for 5\nI0804 11:16:08.963461    1752 log.go:172] (0xc000376000) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.13 31624\nConnection to 172.18.0.13 31624 port [tcp/31624] succeeded!\nI0804 11:16:08.964565    1752 log.go:172] (0xc000a44000) Data frame received for 1\nI0804 11:16:08.964584    1752 log.go:172] (0xc0006a1360) (1) Data frame handling\nI0804 11:16:08.964592    1752 log.go:172] (0xc0006a1360) (1) Data frame sent\nI0804 11:16:08.964599    1752 log.go:172] (0xc000a44000) (0xc0006a1360) Stream removed, broadcasting: 1\nI0804 11:16:08.964672    1752 log.go:172] (0xc000a44000) Go away received\nI0804 11:16:08.964930    1752 log.go:172] (0xc000a44000) (0xc0006a1360) Stream removed, broadcasting: 1\nI0804 11:16:08.964941    1752 log.go:172] (0xc000a44000) (0xc0008d4000) Stream removed, broadcasting: 3\nI0804 11:16:08.964946    1752 log.go:172] (0xc000a44000) (0xc000376000) Stream removed, broadcasting: 5\n"
Aug  4 11:16:08.969: INFO: stdout: ""
Aug  4 11:16:08.969: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-6980 execpodx628p -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.15 31624'
Aug  4 11:16:09.458: INFO: stderr: "I0804 11:16:09.373634    1771 log.go:172] (0xc00003aa50) (0xc00069f4a0) Create stream\nI0804 11:16:09.373682    1771 log.go:172] (0xc00003aa50) (0xc00069f4a0) Stream added, broadcasting: 1\nI0804 11:16:09.376550    1771 log.go:172] (0xc00003aa50) Reply frame received for 1\nI0804 11:16:09.376610    1771 log.go:172] (0xc00003aa50) (0xc0009f8000) Create stream\nI0804 11:16:09.376626    1771 log.go:172] (0xc00003aa50) (0xc0009f8000) Stream added, broadcasting: 3\nI0804 11:16:09.377676    1771 log.go:172] (0xc00003aa50) Reply frame received for 3\nI0804 11:16:09.377722    1771 log.go:172] (0xc00003aa50) (0xc00069f540) Create stream\nI0804 11:16:09.377735    1771 log.go:172] (0xc00003aa50) (0xc00069f540) Stream added, broadcasting: 5\nI0804 11:16:09.378621    1771 log.go:172] (0xc00003aa50) Reply frame received for 5\nI0804 11:16:09.453261    1771 log.go:172] (0xc00003aa50) Data frame received for 3\nI0804 11:16:09.453278    1771 log.go:172] (0xc0009f8000) (3) Data frame handling\nI0804 11:16:09.453291    1771 log.go:172] (0xc00003aa50) Data frame received for 5\nI0804 11:16:09.453298    1771 log.go:172] (0xc00069f540) (5) Data frame handling\nI0804 11:16:09.453309    1771 log.go:172] (0xc00069f540) (5) Data frame sent\nI0804 11:16:09.453315    1771 log.go:172] (0xc00003aa50) Data frame received for 5\nI0804 11:16:09.453319    1771 log.go:172] (0xc00069f540) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.15 31624\nConnection to 172.18.0.15 31624 port [tcp/31624] succeeded!\nI0804 11:16:09.454108    1771 log.go:172] (0xc00003aa50) Data frame received for 1\nI0804 11:16:09.454126    1771 log.go:172] (0xc00069f4a0) (1) Data frame handling\nI0804 11:16:09.454134    1771 log.go:172] (0xc00069f4a0) (1) Data frame sent\nI0804 11:16:09.454141    1771 log.go:172] (0xc00003aa50) (0xc00069f4a0) Stream removed, broadcasting: 1\nI0804 11:16:09.454156    1771 log.go:172] (0xc00003aa50) Go away received\nI0804 11:16:09.454400    1771 log.go:172] (0xc00003aa50) (0xc00069f4a0) Stream removed, broadcasting: 1\nI0804 11:16:09.454411    1771 log.go:172] (0xc00003aa50) (0xc0009f8000) Stream removed, broadcasting: 3\nI0804 11:16:09.454415    1771 log.go:172] (0xc00003aa50) (0xc00069f540) Stream removed, broadcasting: 5\n"
Aug  4 11:16:09.458: INFO: stdout: ""
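The four exec probes above confirm that the service answers on its DNS name and on its ClusterIP 10.99.244.210 (port 80), and that the allocated NodePort 31624 is reachable on both node IPs, 172.18.0.13 and 172.18.0.15. A minimal Go equivalent of one such probe, mirroring "nc -zv -t -w 2" with a plain TCP connect and a two-second timeout (address taken from this run):

    // Sketch: TCP reachability check for a NodePort, analogous to nc -zv.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "172.18.0.13:31624", 2*time.Second)
        if err != nil {
            fmt.Println("connection failed:", err)
            return
        }
        conn.Close()
        fmt.Println("connection to 172.18.0.13:31624 succeeded")
    }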
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:16:09.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6980" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:29.388 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":275,"completed":167,"skipped":2854,"failed":0}
SSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:16:09.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug  4 11:16:11.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-8296
I0804 11:16:11.851869       7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8296, replica count: 1
I0804 11:16:12.902319       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0804 11:16:13.902542       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0804 11:16:14.902811       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0804 11:16:15.903030       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0804 11:16:16.903211       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0804 11:16:17.903401       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0804 11:16:18.903615       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
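Each "Created" / "Got endpoints" pair below records how long it took, after creating a short-lived Service selecting the svc-latency-rc pod, for the corresponding Endpoints object to be populated; the bracketed duration is that latency. A rough client-go sketch of one such measurement — the service name, selector label, and polling loop are illustrative assumptions, not the test's actual watch-based implementation:

    // Sketch: measure the time from Service creation until its Endpoints
    // gain at least one address.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        ns := "svc-latency-8296"

        svc := &corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "latency-svc-example"}, // hypothetical name
            Spec: corev1.ServiceSpec{
                Selector: map[string]string{"name": "svc-latency-rc"}, // assumed pod label
                Ports:    []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(80)}},
            },
        }
        start := time.Now()
        if _, err := client.CoreV1().Services(ns).Create(context.TODO(), svc, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
        // Poll until the endpoints controller has filled in at least one address.
        for {
            ep, err := client.CoreV1().Endpoints(ns).Get(context.TODO(), svc.Name, metav1.GetOptions{})
            if err == nil && len(ep.Subsets) > 0 && len(ep.Subsets[0].Addresses) > 0 {
                break
            }
            time.Sleep(50 * time.Millisecond)
        }
        fmt.Println("endpoints ready after", time.Since(start))
    }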
Aug  4 11:16:19.387: INFO: Created: latency-svc-kdpqz
Aug  4 11:16:19.508: INFO: Got endpoints: latency-svc-kdpqz [504.349915ms]
Aug  4 11:16:20.148: INFO: Created: latency-svc-c5q9b
Aug  4 11:16:20.542: INFO: Got endpoints: latency-svc-c5q9b [1.034404262s]
Aug  4 11:16:20.908: INFO: Created: latency-svc-p2wtz
Aug  4 11:16:20.914: INFO: Got endpoints: latency-svc-p2wtz [1.406045218s]
Aug  4 11:16:21.150: INFO: Created: latency-svc-9dqpn
Aug  4 11:16:21.239: INFO: Got endpoints: latency-svc-9dqpn [1.730899534s]
Aug  4 11:16:21.906: INFO: Created: latency-svc-5xr76
Aug  4 11:16:21.975: INFO: Got endpoints: latency-svc-5xr76 [2.466675289s]
Aug  4 11:16:22.419: INFO: Created: latency-svc-nzd7h
Aug  4 11:16:22.454: INFO: Got endpoints: latency-svc-nzd7h [2.945601934s]
Aug  4 11:16:22.525: INFO: Created: latency-svc-2vlb9
Aug  4 11:16:22.550: INFO: Got endpoints: latency-svc-2vlb9 [3.041813684s]
Aug  4 11:16:22.659: INFO: Created: latency-svc-jgn4b
Aug  4 11:16:22.667: INFO: Got endpoints: latency-svc-jgn4b [3.158796142s]
Aug  4 11:16:22.749: INFO: Created: latency-svc-kgxwb
Aug  4 11:16:22.779: INFO: Got endpoints: latency-svc-kgxwb [3.271134792s]
Aug  4 11:16:22.872: INFO: Created: latency-svc-dhfz9
Aug  4 11:16:22.916: INFO: Got endpoints: latency-svc-dhfz9 [3.407552606s]
Aug  4 11:16:23.058: INFO: Created: latency-svc-rqxk4
Aug  4 11:16:23.103: INFO: Got endpoints: latency-svc-rqxk4 [3.594949874s]
Aug  4 11:16:23.130: INFO: Created: latency-svc-zn8xk
Aug  4 11:16:23.213: INFO: Got endpoints: latency-svc-zn8xk [3.704806729s]
Aug  4 11:16:23.285: INFO: Created: latency-svc-qzzkf
Aug  4 11:16:23.375: INFO: Got endpoints: latency-svc-qzzkf [3.866557101s]
Aug  4 11:16:23.506: INFO: Created: latency-svc-xjtkm
Aug  4 11:16:23.530: INFO: Got endpoints: latency-svc-xjtkm [4.021464957s]
Aug  4 11:16:23.665: INFO: Created: latency-svc-5zqdr
Aug  4 11:16:24.009: INFO: Got endpoints: latency-svc-5zqdr [4.501525853s]
Aug  4 11:16:24.303: INFO: Created: latency-svc-m4htd
Aug  4 11:16:24.351: INFO: Got endpoints: latency-svc-m4htd [4.842841696s]
Aug  4 11:16:24.656: INFO: Created: latency-svc-rbz52
Aug  4 11:16:24.925: INFO: Got endpoints: latency-svc-rbz52 [4.383235163s]
Aug  4 11:16:24.940: INFO: Created: latency-svc-gjsg5
Aug  4 11:16:25.194: INFO: Got endpoints: latency-svc-gjsg5 [4.279990569s]
Aug  4 11:16:25.393: INFO: Created: latency-svc-bzsl7
Aug  4 11:16:25.460: INFO: Got endpoints: latency-svc-bzsl7 [4.220989916s]
Aug  4 11:16:25.585: INFO: Created: latency-svc-vnwzr
Aug  4 11:16:25.616: INFO: Got endpoints: latency-svc-vnwzr [3.64090367s]
Aug  4 11:16:25.813: INFO: Created: latency-svc-4dvc2
Aug  4 11:16:25.818: INFO: Got endpoints: latency-svc-4dvc2 [3.364622469s]
Aug  4 11:16:25.882: INFO: Created: latency-svc-2hhfv
Aug  4 11:16:25.981: INFO: Got endpoints: latency-svc-2hhfv [3.431585852s]
Aug  4 11:16:25.982: INFO: Created: latency-svc-8j6j5
Aug  4 11:16:26.023: INFO: Got endpoints: latency-svc-8j6j5 [3.35627046s]
Aug  4 11:16:26.056: INFO: Created: latency-svc-v7gxz
Aug  4 11:16:26.123: INFO: Got endpoints: latency-svc-v7gxz [3.343407959s]
Aug  4 11:16:26.279: INFO: Created: latency-svc-r2mv6
Aug  4 11:16:26.309: INFO: Got endpoints: latency-svc-r2mv6 [3.39311852s]
Aug  4 11:16:26.347: INFO: Created: latency-svc-j97hv
Aug  4 11:16:26.437: INFO: Got endpoints: latency-svc-j97hv [3.333895163s]
Aug  4 11:16:26.477: INFO: Created: latency-svc-zsqnm
Aug  4 11:16:26.499: INFO: Got endpoints: latency-svc-zsqnm [3.286283499s]
Aug  4 11:16:26.646: INFO: Created: latency-svc-szg2j
Aug  4 11:16:26.662: INFO: Got endpoints: latency-svc-szg2j [3.287381177s]
Aug  4 11:16:26.763: INFO: Created: latency-svc-5gqft
Aug  4 11:16:26.769: INFO: Got endpoints: latency-svc-5gqft [3.239696879s]
Aug  4 11:16:27.011: INFO: Created: latency-svc-kj88z
Aug  4 11:16:27.016: INFO: Got endpoints: latency-svc-kj88z [3.006901504s]
Aug  4 11:16:27.201: INFO: Created: latency-svc-vj8vp
Aug  4 11:16:27.209: INFO: Got endpoints: latency-svc-vj8vp [2.857278927s]
Aug  4 11:16:27.251: INFO: Created: latency-svc-b6n7b
Aug  4 11:16:27.299: INFO: Got endpoints: latency-svc-b6n7b [2.373326544s]
Aug  4 11:16:27.471: INFO: Created: latency-svc-7sgdw
Aug  4 11:16:27.503: INFO: Got endpoints: latency-svc-7sgdw [2.308749963s]
Aug  4 11:16:27.699: INFO: Created: latency-svc-r6rf2
Aug  4 11:16:27.705: INFO: Got endpoints: latency-svc-r6rf2 [2.2451905s]
Aug  4 11:16:27.795: INFO: Created: latency-svc-zb2pm
Aug  4 11:16:27.865: INFO: Got endpoints: latency-svc-zb2pm [2.249683844s]
Aug  4 11:16:27.915: INFO: Created: latency-svc-mnf9b
Aug  4 11:16:27.930: INFO: Got endpoints: latency-svc-mnf9b [2.111455003s]
Aug  4 11:16:27.997: INFO: Created: latency-svc-6hr8d
Aug  4 11:16:28.066: INFO: Got endpoints: latency-svc-6hr8d [2.085023576s]
Aug  4 11:16:28.151: INFO: Created: latency-svc-mkvh8
Aug  4 11:16:28.194: INFO: Got endpoints: latency-svc-mkvh8 [2.170879489s]
Aug  4 11:16:28.319: INFO: Created: latency-svc-p86vb
Aug  4 11:16:28.333: INFO: Got endpoints: latency-svc-p86vb [2.209840428s]
Aug  4 11:16:28.399: INFO: Created: latency-svc-czx22
Aug  4 11:16:28.483: INFO: Got endpoints: latency-svc-czx22 [2.17456621s]
Aug  4 11:16:28.555: INFO: Created: latency-svc-cqbfk
Aug  4 11:16:28.650: INFO: Got endpoints: latency-svc-cqbfk [317.62471ms]
Aug  4 11:16:28.675: INFO: Created: latency-svc-v6cm5
Aug  4 11:16:28.812: INFO: Got endpoints: latency-svc-v6cm5 [2.375008992s]
Aug  4 11:16:28.824: INFO: Created: latency-svc-kskqz
Aug  4 11:16:28.843: INFO: Got endpoints: latency-svc-kskqz [2.343854048s]
Aug  4 11:16:28.876: INFO: Created: latency-svc-6c66b
Aug  4 11:16:28.892: INFO: Got endpoints: latency-svc-6c66b [2.229323045s]
Aug  4 11:16:29.126: INFO: Created: latency-svc-fn5kn
Aug  4 11:16:29.133: INFO: Got endpoints: latency-svc-fn5kn [2.363687174s]
Aug  4 11:16:29.291: INFO: Created: latency-svc-4lx2r
Aug  4 11:16:29.309: INFO: Got endpoints: latency-svc-4lx2r [2.292282001s]
Aug  4 11:16:29.364: INFO: Created: latency-svc-srbsp
Aug  4 11:16:29.379: INFO: Got endpoints: latency-svc-srbsp [2.170144116s]
Aug  4 11:16:29.488: INFO: Created: latency-svc-gfsln
Aug  4 11:16:29.517: INFO: Got endpoints: latency-svc-gfsln [2.218132422s]
Aug  4 11:16:29.551: INFO: Created: latency-svc-rsw8k
Aug  4 11:16:29.602: INFO: Got endpoints: latency-svc-rsw8k [2.098862128s]
Aug  4 11:16:29.645: INFO: Created: latency-svc-t5h55
Aug  4 11:16:29.663: INFO: Got endpoints: latency-svc-t5h55 [1.95760267s]
Aug  4 11:16:29.692: INFO: Created: latency-svc-7qbxq
Aug  4 11:16:29.823: INFO: Got endpoints: latency-svc-7qbxq [1.957881416s]
Aug  4 11:16:29.832: INFO: Created: latency-svc-pxf8f
Aug  4 11:16:29.859: INFO: Got endpoints: latency-svc-pxf8f [1.928787074s]
Aug  4 11:16:29.909: INFO: Created: latency-svc-qpp8g
Aug  4 11:16:30.009: INFO: Got endpoints: latency-svc-qpp8g [1.942774509s]
Aug  4 11:16:30.042: INFO: Created: latency-svc-5hh5s
Aug  4 11:16:30.101: INFO: Got endpoints: latency-svc-5hh5s [1.906994985s]
Aug  4 11:16:31.080: INFO: Created: latency-svc-925np
Aug  4 11:16:31.213: INFO: Got endpoints: latency-svc-925np [2.730016776s]
Aug  4 11:16:31.291: INFO: Created: latency-svc-fk7pk
Aug  4 11:16:31.384: INFO: Got endpoints: latency-svc-fk7pk [2.733584047s]
Aug  4 11:16:31.743: INFO: Created: latency-svc-lvqcj
Aug  4 11:16:31.746: INFO: Got endpoints: latency-svc-lvqcj [2.933321104s]
Aug  4 11:16:31.813: INFO: Created: latency-svc-tbbtn
Aug  4 11:16:31.998: INFO: Got endpoints: latency-svc-tbbtn [3.154276161s]
Aug  4 11:16:32.020: INFO: Created: latency-svc-g4xwq
Aug  4 11:16:32.055: INFO: Got endpoints: latency-svc-g4xwq [3.163606749s]
Aug  4 11:16:32.182: INFO: Created: latency-svc-w2wnr
Aug  4 11:16:32.199: INFO: Got endpoints: latency-svc-w2wnr [3.066155605s]
Aug  4 11:16:32.248: INFO: Created: latency-svc-r88gq
Aug  4 11:16:32.357: INFO: Got endpoints: latency-svc-r88gq [3.047736434s]
Aug  4 11:16:32.373: INFO: Created: latency-svc-6pn7f
Aug  4 11:16:32.399: INFO: Got endpoints: latency-svc-6pn7f [3.02013933s]
Aug  4 11:16:32.578: INFO: Created: latency-svc-xljb9
Aug  4 11:16:32.652: INFO: Got endpoints: latency-svc-xljb9 [3.134840486s]
Aug  4 11:16:32.800: INFO: Created: latency-svc-s9knd
Aug  4 11:16:32.836: INFO: Got endpoints: latency-svc-s9knd [3.234558031s]
Aug  4 11:16:32.974: INFO: Created: latency-svc-dsl6p
Aug  4 11:16:32.980: INFO: Got endpoints: latency-svc-dsl6p [3.317358515s]
Aug  4 11:16:33.016: INFO: Created: latency-svc-ktzsk
Aug  4 11:16:33.117: INFO: Got endpoints: latency-svc-ktzsk [3.293323696s]
Aug  4 11:16:33.163: INFO: Created: latency-svc-mnhk5
Aug  4 11:16:33.191: INFO: Got endpoints: latency-svc-mnhk5 [3.331861505s]
Aug  4 11:16:33.285: INFO: Created: latency-svc-mxbnz
Aug  4 11:16:33.370: INFO: Got endpoints: latency-svc-mxbnz [3.361015383s]
Aug  4 11:16:33.371: INFO: Created: latency-svc-88bwb
Aug  4 11:16:33.447: INFO: Got endpoints: latency-svc-88bwb [3.345847859s]
Aug  4 11:16:33.490: INFO: Created: latency-svc-9s8mn
Aug  4 11:16:33.608: INFO: Got endpoints: latency-svc-9s8mn [2.394842955s]
Aug  4 11:16:33.673: INFO: Created: latency-svc-5x9tg
Aug  4 11:16:33.690: INFO: Got endpoints: latency-svc-5x9tg [2.305995526s]
Aug  4 11:16:33.817: INFO: Created: latency-svc-vsk6q
Aug  4 11:16:33.835: INFO: Got endpoints: latency-svc-vsk6q [2.088992807s]
Aug  4 11:16:33.931: INFO: Created: latency-svc-vwkdd
Aug  4 11:16:33.936: INFO: Got endpoints: latency-svc-vwkdd [1.937934746s]
Aug  4 11:16:33.970: INFO: Created: latency-svc-98wgd
Aug  4 11:16:34.000: INFO: Got endpoints: latency-svc-98wgd [1.945097213s]
Aug  4 11:16:34.183: INFO: Created: latency-svc-hsxv6
Aug  4 11:16:34.231: INFO: Got endpoints: latency-svc-hsxv6 [2.032025091s]
Aug  4 11:16:34.233: INFO: Created: latency-svc-vd85h
Aug  4 11:16:34.259: INFO: Got endpoints: latency-svc-vd85h [1.90198334s]
Aug  4 11:16:34.335: INFO: Created: latency-svc-4tqtx
Aug  4 11:16:34.415: INFO: Got endpoints: latency-svc-4tqtx [2.016359222s]
Aug  4 11:16:34.416: INFO: Created: latency-svc-kcgc7
Aug  4 11:16:34.490: INFO: Got endpoints: latency-svc-kcgc7 [1.837779755s]
Aug  4 11:16:34.692: INFO: Created: latency-svc-gtff2
Aug  4 11:16:34.988: INFO: Got endpoints: latency-svc-gtff2 [2.151566214s]
Aug  4 11:16:35.192: INFO: Created: latency-svc-ngc9g
Aug  4 11:16:35.234: INFO: Got endpoints: latency-svc-ngc9g [2.254333605s]
Aug  4 11:16:35.369: INFO: Created: latency-svc-wmdxd
Aug  4 11:16:35.578: INFO: Got endpoints: latency-svc-wmdxd [2.460712776s]
Aug  4 11:16:35.584: INFO: Created: latency-svc-pbkfh
Aug  4 11:16:35.613: INFO: Got endpoints: latency-svc-pbkfh [2.422034509s]
Aug  4 11:16:35.651: INFO: Created: latency-svc-k2kc6
Aug  4 11:16:35.673: INFO: Got endpoints: latency-svc-k2kc6 [2.302488758s]
Aug  4 11:16:35.764: INFO: Created: latency-svc-kt7vs
Aug  4 11:16:35.826: INFO: Created: latency-svc-9vtzg
Aug  4 11:16:35.826: INFO: Got endpoints: latency-svc-kt7vs [2.378633758s]
Aug  4 11:16:35.914: INFO: Got endpoints: latency-svc-9vtzg [2.305090459s]
Aug  4 11:16:36.109: INFO: Created: latency-svc-fspdw
Aug  4 11:16:36.154: INFO: Got endpoints: latency-svc-fspdw [2.46365019s]
Aug  4 11:16:36.273: INFO: Created: latency-svc-8lwt9
Aug  4 11:16:36.302: INFO: Got endpoints: latency-svc-8lwt9 [2.467353012s]
Aug  4 11:16:36.351: INFO: Created: latency-svc-wmb22
Aug  4 11:16:36.371: INFO: Got endpoints: latency-svc-wmb22 [2.434876661s]
Aug  4 11:16:36.440: INFO: Created: latency-svc-xc8dz
Aug  4 11:16:36.444: INFO: Got endpoints: latency-svc-xc8dz [2.443911834s]
Aug  4 11:16:36.486: INFO: Created: latency-svc-xw2p9
Aug  4 11:16:36.509: INFO: Got endpoints: latency-svc-xw2p9 [2.277511624s]
Aug  4 11:16:36.750: INFO: Created: latency-svc-bg4cm
Aug  4 11:16:36.884: INFO: Got endpoints: latency-svc-bg4cm [2.625081s]
Aug  4 11:16:36.957: INFO: Created: latency-svc-8gbw2
Aug  4 11:16:36.971: INFO: Got endpoints: latency-svc-8gbw2 [2.556021458s]
Aug  4 11:16:37.075: INFO: Created: latency-svc-lllhq
Aug  4 11:16:37.104: INFO: Got endpoints: latency-svc-lllhq [2.614039715s]
Aug  4 11:16:37.297: INFO: Created: latency-svc-67mkg
Aug  4 11:16:37.303: INFO: Got endpoints: latency-svc-67mkg [2.315144055s]
Aug  4 11:16:37.440: INFO: Created: latency-svc-hgfl6
Aug  4 11:16:37.478: INFO: Got endpoints: latency-svc-hgfl6 [2.243397628s]
Aug  4 11:16:37.479: INFO: Created: latency-svc-9ml29
Aug  4 11:16:37.509: INFO: Got endpoints: latency-svc-9ml29 [1.931439634s]
Aug  4 11:16:37.590: INFO: Created: latency-svc-469dn
Aug  4 11:16:37.593: INFO: Got endpoints: latency-svc-469dn [1.98074716s]
Aug  4 11:16:37.677: INFO: Created: latency-svc-9rvgn
Aug  4 11:16:37.680: INFO: Got endpoints: latency-svc-9rvgn [2.007060023s]
Aug  4 11:16:37.740: INFO: Created: latency-svc-6n46q
Aug  4 11:16:37.781: INFO: Got endpoints: latency-svc-6n46q [1.954896729s]
Aug  4 11:16:37.786: INFO: Created: latency-svc-jlmt8
Aug  4 11:16:37.799: INFO: Got endpoints: latency-svc-jlmt8 [1.885688057s]
Aug  4 11:16:37.883: INFO: Created: latency-svc-rchxk
Aug  4 11:16:37.909: INFO: Got endpoints: latency-svc-rchxk [1.755119829s]
Aug  4 11:16:37.952: INFO: Created: latency-svc-gvbdm
Aug  4 11:16:37.982: INFO: Got endpoints: latency-svc-gvbdm [1.679536079s]
Aug  4 11:16:38.024: INFO: Created: latency-svc-r7956
Aug  4 11:16:38.042: INFO: Got endpoints: latency-svc-r7956 [1.671191157s]
Aug  4 11:16:38.063: INFO: Created: latency-svc-zfbls
Aug  4 11:16:38.093: INFO: Got endpoints: latency-svc-zfbls [1.648922433s]
Aug  4 11:16:38.154: INFO: Created: latency-svc-jk5qc
Aug  4 11:16:38.166: INFO: Got endpoints: latency-svc-jk5qc [1.65744779s]
Aug  4 11:16:38.216: INFO: Created: latency-svc-72cdn
Aug  4 11:16:38.228: INFO: Got endpoints: latency-svc-72cdn [1.344175467s]
Aug  4 11:16:38.296: INFO: Created: latency-svc-dqkpk
Aug  4 11:16:38.300: INFO: Got endpoints: latency-svc-dqkpk [1.328184673s]
Aug  4 11:16:38.351: INFO: Created: latency-svc-cxg2v
Aug  4 11:16:38.367: INFO: Got endpoints: latency-svc-cxg2v [1.263034929s]
Aug  4 11:16:38.388: INFO: Created: latency-svc-mnxlz
Aug  4 11:16:38.429: INFO: Got endpoints: latency-svc-mnxlz [1.126231194s]
Aug  4 11:16:38.441: INFO: Created: latency-svc-mtjcl
Aug  4 11:16:38.474: INFO: Got endpoints: latency-svc-mtjcl [996.085513ms]
Aug  4 11:16:38.503: INFO: Created: latency-svc-8lwjr
Aug  4 11:16:38.517: INFO: Got endpoints: latency-svc-8lwjr [1.007691114s]
Aug  4 11:16:38.572: INFO: Created: latency-svc-vxsk2
Aug  4 11:16:38.585: INFO: Got endpoints: latency-svc-vxsk2 [991.244236ms]
Aug  4 11:16:38.609: INFO: Created: latency-svc-n9555
Aug  4 11:16:38.627: INFO: Got endpoints: latency-svc-n9555 [946.820723ms]
Aug  4 11:16:38.651: INFO: Created: latency-svc-qs5wf
Aug  4 11:16:38.710: INFO: Got endpoints: latency-svc-qs5wf [928.621774ms]
Aug  4 11:16:38.743: INFO: Created: latency-svc-v6zpk
Aug  4 11:16:38.759: INFO: Got endpoints: latency-svc-v6zpk [959.855915ms]
Aug  4 11:16:38.791: INFO: Created: latency-svc-hfp9r
Aug  4 11:16:38.841: INFO: Got endpoints: latency-svc-hfp9r [932.386312ms]
Aug  4 11:16:38.857: INFO: Created: latency-svc-s4lxc
Aug  4 11:16:38.874: INFO: Got endpoints: latency-svc-s4lxc [892.778848ms]
Aug  4 11:16:38.904: INFO: Created: latency-svc-x7bpg
Aug  4 11:16:38.939: INFO: Got endpoints: latency-svc-x7bpg [897.031761ms]
Aug  4 11:16:39.007: INFO: Created: latency-svc-zflpn
Aug  4 11:16:39.012: INFO: Got endpoints: latency-svc-zflpn [918.800854ms]
Aug  4 11:16:39.037: INFO: Created: latency-svc-vlfcq
Aug  4 11:16:39.055: INFO: Got endpoints: latency-svc-vlfcq [888.249371ms]
Aug  4 11:16:39.079: INFO: Created: latency-svc-zj84l
Aug  4 11:16:39.097: INFO: Got endpoints: latency-svc-zj84l [869.083602ms]
Aug  4 11:16:39.153: INFO: Created: latency-svc-mqdnn
Aug  4 11:16:39.221: INFO: Got endpoints: latency-svc-mqdnn [920.943927ms]
Aug  4 11:16:39.221: INFO: Created: latency-svc-4njml
Aug  4 11:16:39.229: INFO: Got endpoints: latency-svc-4njml [861.978577ms]
Aug  4 11:16:39.302: INFO: Created: latency-svc-rbx9c
Aug  4 11:16:39.306: INFO: Got endpoints: latency-svc-rbx9c [876.671397ms]
Aug  4 11:16:39.343: INFO: Created: latency-svc-mpr9h
Aug  4 11:16:39.362: INFO: Got endpoints: latency-svc-mpr9h [888.167993ms]
Aug  4 11:16:39.385: INFO: Created: latency-svc-cs4zl
Aug  4 11:16:39.446: INFO: Got endpoints: latency-svc-cs4zl [929.497659ms]
Aug  4 11:16:39.485: INFO: Created: latency-svc-k55jk
Aug  4 11:16:39.494: INFO: Got endpoints: latency-svc-k55jk [909.389408ms]
Aug  4 11:16:39.521: INFO: Created: latency-svc-4kv6h
Aug  4 11:16:39.608: INFO: Got endpoints: latency-svc-4kv6h [981.17685ms]
Aug  4 11:16:39.631: INFO: Created: latency-svc-brh7g
Aug  4 11:16:39.650: INFO: Got endpoints: latency-svc-brh7g [940.57301ms]
Aug  4 11:16:39.757: INFO: Created: latency-svc-5rt5n
Aug  4 11:16:39.770: INFO: Got endpoints: latency-svc-5rt5n [1.011107863s]
Aug  4 11:16:39.827: INFO: Created: latency-svc-nfbrw
Aug  4 11:16:39.837: INFO: Got endpoints: latency-svc-nfbrw [995.45087ms]
Aug  4 11:16:39.895: INFO: Created: latency-svc-62nw5
Aug  4 11:16:39.909: INFO: Got endpoints: latency-svc-62nw5 [1.034715919s]
Aug  4 11:16:39.937: INFO: Created: latency-svc-vxwwz
Aug  4 11:16:39.949: INFO: Got endpoints: latency-svc-vxwwz [1.009902936s]
Aug  4 11:16:39.974: INFO: Created: latency-svc-q5qd7
Aug  4 11:16:39.985: INFO: Got endpoints: latency-svc-q5qd7 [973.057722ms]
Aug  4 11:16:40.034: INFO: Created: latency-svc-f85qh
Aug  4 11:16:40.060: INFO: Got endpoints: latency-svc-f85qh [1.005724339s]
Aug  4 11:16:40.061: INFO: Created: latency-svc-5hfd2
Aug  4 11:16:40.076: INFO: Got endpoints: latency-svc-5hfd2 [978.929746ms]
Aug  4 11:16:40.096: INFO: Created: latency-svc-v9v4d
Aug  4 11:16:40.112: INFO: Got endpoints: latency-svc-v9v4d [891.028857ms]
Aug  4 11:16:40.132: INFO: Created: latency-svc-rqcms
Aug  4 11:16:40.177: INFO: Got endpoints: latency-svc-rqcms [947.846881ms]
Aug  4 11:16:40.194: INFO: Created: latency-svc-x2rfb
Aug  4 11:16:40.208: INFO: Got endpoints: latency-svc-x2rfb [902.200508ms]
Aug  4 11:16:40.237: INFO: Created: latency-svc-9pj4f
Aug  4 11:16:40.273: INFO: Got endpoints: latency-svc-9pj4f [910.684958ms]
Aug  4 11:16:40.348: INFO: Created: latency-svc-7mmj4
Aug  4 11:16:40.365: INFO: Got endpoints: latency-svc-7mmj4 [918.660928ms]
Aug  4 11:16:40.454: INFO: Created: latency-svc-l5x5d
Aug  4 11:16:40.462: INFO: Got endpoints: latency-svc-l5x5d [967.394224ms]
Aug  4 11:16:40.488: INFO: Created: latency-svc-j6zpt
Aug  4 11:16:40.506: INFO: Got endpoints: latency-svc-j6zpt [898.391507ms]
Aug  4 11:16:40.536: INFO: Created: latency-svc-x5nfk
Aug  4 11:16:40.545: INFO: Got endpoints: latency-svc-x5nfk [895.267108ms]
Aug  4 11:16:40.602: INFO: Created: latency-svc-nt6c5
Aug  4 11:16:40.637: INFO: Got endpoints: latency-svc-nt6c5 [866.130273ms]
Aug  4 11:16:40.641: INFO: Created: latency-svc-6zqrh
Aug  4 11:16:40.678: INFO: Got endpoints: latency-svc-6zqrh [841.382325ms]
Aug  4 11:16:40.739: INFO: Created: latency-svc-l7v9v
Aug  4 11:16:40.758: INFO: Got endpoints: latency-svc-l7v9v [848.955112ms]
Aug  4 11:16:40.801: INFO: Created: latency-svc-zt5kk
Aug  4 11:16:40.810: INFO: Got endpoints: latency-svc-zt5kk [861.353524ms]
Aug  4 11:16:40.834: INFO: Created: latency-svc-b68ck
Aug  4 11:16:40.871: INFO: Got endpoints: latency-svc-b68ck [885.650263ms]
Aug  4 11:16:40.888: INFO: Created: latency-svc-skqv9
Aug  4 11:16:40.907: INFO: Got endpoints: latency-svc-skqv9 [846.394274ms]
Aug  4 11:16:40.936: INFO: Created: latency-svc-r927s
Aug  4 11:16:40.969: INFO: Got endpoints: latency-svc-r927s [892.512326ms]
Aug  4 11:16:41.027: INFO: Created: latency-svc-2qbl8
Aug  4 11:16:41.056: INFO: Got endpoints: latency-svc-2qbl8 [943.86457ms]
Aug  4 11:16:41.056: INFO: Created: latency-svc-w8nmx
Aug  4 11:16:41.092: INFO: Got endpoints: latency-svc-w8nmx [914.692381ms]
Aug  4 11:16:41.159: INFO: Created: latency-svc-9qwcz
Aug  4 11:16:41.162: INFO: Got endpoints: latency-svc-9qwcz [953.267636ms]
Aug  4 11:16:41.208: INFO: Created: latency-svc-87b4s
Aug  4 11:16:41.251: INFO: Got endpoints: latency-svc-87b4s [978.317272ms]
Aug  4 11:16:41.309: INFO: Created: latency-svc-64pk9
Aug  4 11:16:41.318: INFO: Got endpoints: latency-svc-64pk9 [952.459555ms]
Aug  4 11:16:41.344: INFO: Created: latency-svc-lcghm
Aug  4 11:16:41.361: INFO: Got endpoints: latency-svc-lcghm [899.205335ms]
Aug  4 11:16:41.392: INFO: Created: latency-svc-xzntv
Aug  4 11:16:41.446: INFO: Got endpoints: latency-svc-xzntv [939.508294ms]
Aug  4 11:16:41.460: INFO: Created: latency-svc-bv2q2
Aug  4 11:16:41.491: INFO: Got endpoints: latency-svc-bv2q2 [945.105418ms]
Aug  4 11:16:41.520: INFO: Created: latency-svc-5bsgl
Aug  4 11:16:41.536: INFO: Got endpoints: latency-svc-5bsgl [898.962738ms]
Aug  4 11:16:41.584: INFO: Created: latency-svc-hk9lv
Aug  4 11:16:41.632: INFO: Created: latency-svc-j4z2g
Aug  4 11:16:41.632: INFO: Got endpoints: latency-svc-hk9lv [953.537327ms]
Aug  4 11:16:41.673: INFO: Got endpoints: latency-svc-j4z2g [915.20632ms]
Aug  4 11:16:41.748: INFO: Created: latency-svc-v7rlx
Aug  4 11:16:41.788: INFO: Got endpoints: latency-svc-v7rlx [977.775931ms]
Aug  4 11:16:41.877: INFO: Created: latency-svc-9jlvk
Aug  4 11:16:41.881: INFO: Got endpoints: latency-svc-9jlvk [1.010305297s]
Aug  4 11:16:41.944: INFO: Created: latency-svc-gl8hj
Aug  4 11:16:41.956: INFO: Got endpoints: latency-svc-gl8hj [1.048907103s]
Aug  4 11:16:42.015: INFO: Created: latency-svc-b2p6n
Aug  4 11:16:42.042: INFO: Got endpoints: latency-svc-b2p6n [1.073013675s]
Aug  4 11:16:42.043: INFO: Created: latency-svc-kwkd5
Aug  4 11:16:42.072: INFO: Got endpoints: latency-svc-kwkd5 [1.015824334s]
Aug  4 11:16:42.105: INFO: Created: latency-svc-5wt95
Aug  4 11:16:42.153: INFO: Got endpoints: latency-svc-5wt95 [1.061425929s]
Aug  4 11:16:42.159: INFO: Created: latency-svc-kxn4f
Aug  4 11:16:42.179: INFO: Got endpoints: latency-svc-kxn4f [1.017120858s]
Aug  4 11:16:42.201: INFO: Created: latency-svc-grrf6
Aug  4 11:16:42.227: INFO: Got endpoints: latency-svc-grrf6 [975.78049ms]
Aug  4 11:16:42.291: INFO: Created: latency-svc-gw7xd
Aug  4 11:16:42.299: INFO: Got endpoints: latency-svc-gw7xd [981.381775ms]
Aug  4 11:16:42.353: INFO: Created: latency-svc-5c8rh
Aug  4 11:16:42.377: INFO: Got endpoints: latency-svc-5c8rh [1.016549777s]
Aug  4 11:16:42.430: INFO: Created: latency-svc-4mz6s
Aug  4 11:16:42.473: INFO: Got endpoints: latency-svc-4mz6s [1.02663578s]
Aug  4 11:16:42.564: INFO: Created: latency-svc-vncrw
Aug  4 11:16:42.594: INFO: Got endpoints: latency-svc-vncrw [1.103123842s]
Aug  4 11:16:42.635: INFO: Created: latency-svc-w6p2z
Aug  4 11:16:42.711: INFO: Got endpoints: latency-svc-w6p2z [1.175803122s]
Aug  4 11:16:42.761: INFO: Created: latency-svc-rmw2l
Aug  4 11:16:42.792: INFO: Got endpoints: latency-svc-rmw2l [1.160186064s]
Aug  4 11:16:42.859: INFO: Created: latency-svc-gns8v
Aug  4 11:16:42.874: INFO: Got endpoints: latency-svc-gns8v [1.200653263s]
Aug  4 11:16:42.897: INFO: Created: latency-svc-q8rgn
Aug  4 11:16:42.916: INFO: Got endpoints: latency-svc-q8rgn [1.127878948s]
Aug  4 11:16:42.939: INFO: Created: latency-svc-wbcn8
Aug  4 11:16:42.959: INFO: Got endpoints: latency-svc-wbcn8 [1.07747272s]
Aug  4 11:16:43.004: INFO: Created: latency-svc-z9mwn
Aug  4 11:16:43.025: INFO: Got endpoints: latency-svc-z9mwn [1.069121797s]
Aug  4 11:16:43.062: INFO: Created: latency-svc-fjmkv
Aug  4 11:16:43.073: INFO: Got endpoints: latency-svc-fjmkv [1.031207513s]
Aug  4 11:16:43.097: INFO: Created: latency-svc-bjd8b
Aug  4 11:16:43.135: INFO: Got endpoints: latency-svc-bjd8b [1.063133101s]
Aug  4 11:16:43.151: INFO: Created: latency-svc-6qlv6
Aug  4 11:16:43.163: INFO: Got endpoints: latency-svc-6qlv6 [1.010104822s]
Aug  4 11:16:43.197: INFO: Created: latency-svc-62nkd
Aug  4 11:16:43.296: INFO: Got endpoints: latency-svc-62nkd [1.117372636s]
Aug  4 11:16:43.300: INFO: Created: latency-svc-lkkgw
Aug  4 11:16:43.314: INFO: Got endpoints: latency-svc-lkkgw [1.08706587s]
Aug  4 11:16:43.355: INFO: Created: latency-svc-frrx9
Aug  4 11:16:43.368: INFO: Got endpoints: latency-svc-frrx9 [1.069015456s]
Aug  4 11:16:43.395: INFO: Created: latency-svc-b7wbp
Aug  4 11:16:43.430: INFO: Got endpoints: latency-svc-b7wbp [1.052568999s]
Aug  4 11:16:43.461: INFO: Created: latency-svc-nkxb9
Aug  4 11:16:43.509: INFO: Got endpoints: latency-svc-nkxb9 [1.036214301s]
Aug  4 11:16:43.584: INFO: Created: latency-svc-6cwmc
Aug  4 11:16:43.590: INFO: Got endpoints: latency-svc-6cwmc [996.290207ms]
Aug  4 11:16:43.659: INFO: Created: latency-svc-bfw4p
Aug  4 11:16:43.757: INFO: Got endpoints: latency-svc-bfw4p [1.045882279s]
Aug  4 11:16:43.829: INFO: Created: latency-svc-qn5ht
Aug  4 11:16:43.901: INFO: Got endpoints: latency-svc-qn5ht [1.10898248s]
Aug  4 11:16:43.919: INFO: Created: latency-svc-4hk67
Aug  4 11:16:43.953: INFO: Got endpoints: latency-svc-4hk67 [1.079130373s]
Aug  4 11:16:43.992: INFO: Created: latency-svc-rgfhw
Aug  4 11:16:44.045: INFO: Got endpoints: latency-svc-rgfhw [1.129134133s]
Aug  4 11:16:44.085: INFO: Created: latency-svc-sw6ps
Aug  4 11:16:44.103: INFO: Got endpoints: latency-svc-sw6ps [1.144551001s]
Aug  4 11:16:44.127: INFO: Created: latency-svc-r9x68
Aug  4 11:16:44.140: INFO: Got endpoints: latency-svc-r9x68 [1.115195069s]
Aug  4 11:16:44.220: INFO: Created: latency-svc-v4tzr
Aug  4 11:16:44.248: INFO: Got endpoints: latency-svc-v4tzr [1.174788578s]
Aug  4 11:16:44.286: INFO: Created: latency-svc-zrln2
Aug  4 11:16:44.345: INFO: Got endpoints: latency-svc-zrln2 [1.20975478s]
Aug  4 11:16:44.361: INFO: Created: latency-svc-qpbrn
Aug  4 11:16:44.391: INFO: Got endpoints: latency-svc-qpbrn [1.227753481s]
Aug  4 11:16:44.424: INFO: Created: latency-svc-w5dgx
Aug  4 11:16:44.440: INFO: Got endpoints: latency-svc-w5dgx [1.144116689s]
Aug  4 11:16:44.488: INFO: Created: latency-svc-4mvdv
Aug  4 11:16:44.508: INFO: Got endpoints: latency-svc-4mvdv [1.193286101s]
Aug  4 11:16:44.531: INFO: Created: latency-svc-6j5c2
Aug  4 11:16:44.549: INFO: Got endpoints: latency-svc-6j5c2 [1.180604153s]
Aug  4 11:16:44.549: INFO: Latencies: [317.62471ms 841.382325ms 846.394274ms 848.955112ms 861.353524ms 861.978577ms 866.130273ms 869.083602ms 876.671397ms 885.650263ms 888.167993ms 888.249371ms 891.028857ms 892.512326ms 892.778848ms 895.267108ms 897.031761ms 898.391507ms 898.962738ms 899.205335ms 902.200508ms 909.389408ms 910.684958ms 914.692381ms 915.20632ms 918.660928ms 918.800854ms 920.943927ms 928.621774ms 929.497659ms 932.386312ms 939.508294ms 940.57301ms 943.86457ms 945.105418ms 946.820723ms 947.846881ms 952.459555ms 953.267636ms 953.537327ms 959.855915ms 967.394224ms 973.057722ms 975.78049ms 977.775931ms 978.317272ms 978.929746ms 981.17685ms 981.381775ms 991.244236ms 995.45087ms 996.085513ms 996.290207ms 1.005724339s 1.007691114s 1.009902936s 1.010104822s 1.010305297s 1.011107863s 1.015824334s 1.016549777s 1.017120858s 1.02663578s 1.031207513s 1.034404262s 1.034715919s 1.036214301s 1.045882279s 1.048907103s 1.052568999s 1.061425929s 1.063133101s 1.069015456s 1.069121797s 1.073013675s 1.07747272s 1.079130373s 1.08706587s 1.103123842s 1.10898248s 1.115195069s 1.117372636s 1.126231194s 1.127878948s 1.129134133s 1.144116689s 1.144551001s 1.160186064s 1.174788578s 1.175803122s 1.180604153s 1.193286101s 1.200653263s 1.20975478s 1.227753481s 1.263034929s 1.328184673s 1.344175467s 1.406045218s 1.648922433s 1.65744779s 1.671191157s 1.679536079s 1.730899534s 1.755119829s 1.837779755s 1.885688057s 1.90198334s 1.906994985s 1.928787074s 1.931439634s 1.937934746s 1.942774509s 1.945097213s 1.954896729s 1.95760267s 1.957881416s 1.98074716s 2.007060023s 2.016359222s 2.032025091s 2.085023576s 2.088992807s 2.098862128s 2.111455003s 2.151566214s 2.170144116s 2.170879489s 2.17456621s 2.209840428s 2.218132422s 2.229323045s 2.243397628s 2.2451905s 2.249683844s 2.254333605s 2.277511624s 2.292282001s 2.302488758s 2.305090459s 2.305995526s 2.308749963s 2.315144055s 2.343854048s 2.363687174s 2.373326544s 2.375008992s 2.378633758s 2.394842955s 2.422034509s 2.434876661s 2.443911834s 2.460712776s 2.46365019s 2.466675289s 2.467353012s 2.556021458s 2.614039715s 2.625081s 2.730016776s 2.733584047s 2.857278927s 2.933321104s 2.945601934s 3.006901504s 3.02013933s 3.041813684s 3.047736434s 3.066155605s 3.134840486s 3.154276161s 3.158796142s 3.163606749s 3.234558031s 3.239696879s 3.271134792s 3.286283499s 3.287381177s 3.293323696s 3.317358515s 3.331861505s 3.333895163s 3.343407959s 3.345847859s 3.35627046s 3.361015383s 3.364622469s 3.39311852s 3.407552606s 3.431585852s 3.594949874s 3.64090367s 3.704806729s 3.866557101s 4.021464957s 4.220989916s 4.279990569s 4.383235163s 4.501525853s 4.842841696s]
Aug  4 11:16:44.549: INFO: 50 %ile: 1.65744779s
Aug  4 11:16:44.549: INFO: 90 %ile: 3.331861505s
Aug  4 11:16:44.549: INFO: 99 %ile: 4.501525853s
Aug  4 11:16:44.549: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:16:44.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-8296" for this suite.

• [SLOW TEST:34.652 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":275,"completed":168,"skipped":2863,"failed":0}
S
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:16:44.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:16:51.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2370" for this suite.

• [SLOW TEST:7.304 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":275,"completed":169,"skipped":2864,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:16:51.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
Aug  4 11:16:51.998: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7051'
Aug  4 11:16:52.361: INFO: stderr: ""
Aug  4 11:16:52.361: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug  4 11:16:52.361: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7051'
Aug  4 11:16:52.463: INFO: stderr: ""
Aug  4 11:16:52.463: INFO: stdout: "update-demo-nautilus-gjn4h update-demo-nautilus-tcjt5 "
Aug  4 11:16:52.463: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gjn4h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7051'
Aug  4 11:16:52.637: INFO: stderr: ""
Aug  4 11:16:52.637: INFO: stdout: ""
Aug  4 11:16:52.637: INFO: update-demo-nautilus-gjn4h is created but not running
Aug  4 11:16:57.638: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7051'
Aug  4 11:16:57.845: INFO: stderr: ""
Aug  4 11:16:57.845: INFO: stdout: "update-demo-nautilus-gjn4h update-demo-nautilus-tcjt5 "
Aug  4 11:16:57.845: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gjn4h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7051'
Aug  4 11:16:58.009: INFO: stderr: ""
Aug  4 11:16:58.009: INFO: stdout: "true"
Aug  4 11:16:58.009: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gjn4h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7051'
Aug  4 11:16:58.125: INFO: stderr: ""
Aug  4 11:16:58.125: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug  4 11:16:58.125: INFO: validating pod update-demo-nautilus-gjn4h
Aug  4 11:16:58.256: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug  4 11:16:58.256: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug  4 11:16:58.256: INFO: update-demo-nautilus-gjn4h is verified up and running
Aug  4 11:16:58.256: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tcjt5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7051'
Aug  4 11:16:58.387: INFO: stderr: ""
Aug  4 11:16:58.387: INFO: stdout: "true"
Aug  4 11:16:58.388: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tcjt5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7051'
Aug  4 11:16:58.558: INFO: stderr: ""
Aug  4 11:16:58.558: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug  4 11:16:58.558: INFO: validating pod update-demo-nautilus-tcjt5
Aug  4 11:16:58.573: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug  4 11:16:58.573: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug  4 11:16:58.573: INFO: update-demo-nautilus-tcjt5 is verified up and running
STEP: scaling down the replication controller
Aug  4 11:16:58.575: INFO: scanned /root for discovery docs: 
Aug  4 11:16:58.575: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-7051'
Aug  4 11:16:59.908: INFO: stderr: ""
Aug  4 11:16:59.908: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug  4 11:16:59.908: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7051'
Aug  4 11:17:00.030: INFO: stderr: ""
Aug  4 11:17:00.030: INFO: stdout: "update-demo-nautilus-gjn4h update-demo-nautilus-tcjt5 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Aug  4 11:17:05.030: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7051'
Aug  4 11:17:05.144: INFO: stderr: ""
Aug  4 11:17:05.144: INFO: stdout: "update-demo-nautilus-tcjt5 "
Aug  4 11:17:05.144: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tcjt5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7051'
Aug  4 11:17:05.296: INFO: stderr: ""
Aug  4 11:17:05.296: INFO: stdout: "true"
Aug  4 11:17:05.296: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tcjt5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7051'
Aug  4 11:17:05.425: INFO: stderr: ""
Aug  4 11:17:05.425: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug  4 11:17:05.425: INFO: validating pod update-demo-nautilus-tcjt5
Aug  4 11:17:05.428: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug  4 11:17:05.428: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug  4 11:17:05.428: INFO: update-demo-nautilus-tcjt5 is verified up and running
STEP: scaling up the replication controller
Aug  4 11:17:05.430: INFO: scanned /root for discovery docs: 
Aug  4 11:17:05.430: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-7051'
Aug  4 11:17:06.676: INFO: stderr: ""
Aug  4 11:17:06.676: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug  4 11:17:06.676: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7051'
Aug  4 11:17:06.876: INFO: stderr: ""
Aug  4 11:17:06.876: INFO: stdout: "update-demo-nautilus-dr9wp update-demo-nautilus-tcjt5 "
Aug  4 11:17:06.876: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dr9wp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7051'
Aug  4 11:17:07.002: INFO: stderr: ""
Aug  4 11:17:07.002: INFO: stdout: ""
Aug  4 11:17:07.003: INFO: update-demo-nautilus-dr9wp is created but not running
Aug  4 11:17:12.003: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7051'
Aug  4 11:17:12.137: INFO: stderr: ""
Aug  4 11:17:12.137: INFO: stdout: "update-demo-nautilus-dr9wp update-demo-nautilus-tcjt5 "
Aug  4 11:17:12.137: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dr9wp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7051'
Aug  4 11:17:12.258: INFO: stderr: ""
Aug  4 11:17:12.258: INFO: stdout: "true"
Aug  4 11:17:12.258: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dr9wp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7051'
Aug  4 11:17:12.363: INFO: stderr: ""
Aug  4 11:17:12.363: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug  4 11:17:12.363: INFO: validating pod update-demo-nautilus-dr9wp
Aug  4 11:17:12.398: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug  4 11:17:12.398: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug  4 11:17:12.398: INFO: update-demo-nautilus-dr9wp is verified up and running
Aug  4 11:17:12.398: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tcjt5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7051'
Aug  4 11:17:12.500: INFO: stderr: ""
Aug  4 11:17:12.500: INFO: stdout: "true"
Aug  4 11:17:12.500: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tcjt5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7051'
Aug  4 11:17:12.651: INFO: stderr: ""
Aug  4 11:17:12.651: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug  4 11:17:12.651: INFO: validating pod update-demo-nautilus-tcjt5
Aug  4 11:17:12.691: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug  4 11:17:12.691: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug  4 11:17:12.691: INFO: update-demo-nautilus-tcjt5 is verified up and running
STEP: using delete to clean up resources
Aug  4 11:17:12.691: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7051'
Aug  4 11:17:12.837: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug  4 11:17:12.837: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug  4 11:17:12.837: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7051'
Aug  4 11:17:12.956: INFO: stderr: "No resources found in kubectl-7051 namespace.\n"
Aug  4 11:17:12.956: INFO: stdout: ""
Aug  4 11:17:12.956: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7051 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug  4 11:17:13.095: INFO: stderr: ""
Aug  4 11:17:13.095: INFO: stdout: "update-demo-nautilus-dr9wp\nupdate-demo-nautilus-tcjt5\n"
Aug  4 11:17:13.595: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7051'
Aug  4 11:17:14.004: INFO: stderr: "No resources found in kubectl-7051 namespace.\n"
Aug  4 11:17:14.005: INFO: stdout: ""
Aug  4 11:17:14.005: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7051 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug  4 11:17:14.140: INFO: stderr: ""
Aug  4 11:17:14.140: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:17:14.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7051" for this suite.

• [SLOW TEST:22.595 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":275,"completed":170,"skipped":2880,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:17:14.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Aug  4 11:17:14.854: INFO: >>> kubeConfig: /root/.kube/config
Aug  4 11:17:17.846: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:17:29.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7076" for this suite.

• [SLOW TEST:14.886 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":275,"completed":171,"skipped":2901,"failed":0}
SSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:17:29.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug  4 11:17:34.623: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:17:34.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-724" for this suite.

• [SLOW TEST:5.320 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":172,"skipped":2905,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:17:34.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service nodeport-service with the type=NodePort in namespace services-4269
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-4269
STEP: creating replication controller externalsvc in namespace services-4269
I0804 11:17:34.942976       7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-4269, replica count: 2
I0804 11:17:37.993470       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0804 11:17:40.993717       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Aug  4 11:17:41.076: INFO: Creating new exec pod
Aug  4 11:17:45.146: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-4269 execpod2w88b -- /bin/sh -x -c nslookup nodeport-service'
Aug  4 11:17:45.394: INFO: stderr: "I0804 11:17:45.295370    2274 log.go:172] (0xc000b7f290) (0xc000b70500) Create stream\nI0804 11:17:45.295429    2274 log.go:172] (0xc000b7f290) (0xc000b70500) Stream added, broadcasting: 1\nI0804 11:17:45.298086    2274 log.go:172] (0xc000b7f290) Reply frame received for 1\nI0804 11:17:45.298137    2274 log.go:172] (0xc000b7f290) (0xc000bb2280) Create stream\nI0804 11:17:45.298159    2274 log.go:172] (0xc000b7f290) (0xc000bb2280) Stream added, broadcasting: 3\nI0804 11:17:45.298943    2274 log.go:172] (0xc000b7f290) Reply frame received for 3\nI0804 11:17:45.298964    2274 log.go:172] (0xc000b7f290) (0xc000b705a0) Create stream\nI0804 11:17:45.298970    2274 log.go:172] (0xc000b7f290) (0xc000b705a0) Stream added, broadcasting: 5\nI0804 11:17:45.300063    2274 log.go:172] (0xc000b7f290) Reply frame received for 5\nI0804 11:17:45.375515    2274 log.go:172] (0xc000b7f290) Data frame received for 5\nI0804 11:17:45.375538    2274 log.go:172] (0xc000b705a0) (5) Data frame handling\nI0804 11:17:45.375549    2274 log.go:172] (0xc000b705a0) (5) Data frame sent\n+ nslookup nodeport-service\nI0804 11:17:45.387861    2274 log.go:172] (0xc000b7f290) Data frame received for 3\nI0804 11:17:45.387969    2274 log.go:172] (0xc000bb2280) (3) Data frame handling\nI0804 11:17:45.387983    2274 log.go:172] (0xc000bb2280) (3) Data frame sent\nI0804 11:17:45.387989    2274 log.go:172] (0xc000b7f290) Data frame received for 3\nI0804 11:17:45.387993    2274 log.go:172] (0xc000bb2280) (3) Data frame handling\nI0804 11:17:45.388033    2274 log.go:172] (0xc000bb2280) (3) Data frame sent\nI0804 11:17:45.388040    2274 log.go:172] (0xc000b7f290) Data frame received for 3\nI0804 11:17:45.388044    2274 log.go:172] (0xc000bb2280) (3) Data frame handling\nI0804 11:17:45.388102    2274 log.go:172] (0xc000b7f290) Data frame received for 5\nI0804 11:17:45.388144    2274 log.go:172] (0xc000b705a0) (5) Data frame handling\nI0804 11:17:45.389949    2274 log.go:172] (0xc000b7f290) Data frame received for 1\nI0804 11:17:45.389964    2274 log.go:172] (0xc000b70500) (1) Data frame handling\nI0804 11:17:45.389969    2274 log.go:172] (0xc000b70500) (1) Data frame sent\nI0804 11:17:45.389978    2274 log.go:172] (0xc000b7f290) (0xc000b70500) Stream removed, broadcasting: 1\nI0804 11:17:45.389991    2274 log.go:172] (0xc000b7f290) Go away received\nI0804 11:17:45.390269    2274 log.go:172] (0xc000b7f290) (0xc000b70500) Stream removed, broadcasting: 1\nI0804 11:17:45.390291    2274 log.go:172] (0xc000b7f290) (0xc000bb2280) Stream removed, broadcasting: 3\nI0804 11:17:45.390299    2274 log.go:172] (0xc000b7f290) (0xc000b705a0) Stream removed, broadcasting: 5\n"
Aug  4 11:17:45.394: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-4269.svc.cluster.local\tcanonical name = externalsvc.services-4269.svc.cluster.local.\nName:\texternalsvc.services-4269.svc.cluster.local\nAddress: 10.103.116.111\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-4269, will wait for the garbage collector to delete the pods
Aug  4 11:17:45.479: INFO: Deleting ReplicationController externalsvc took: 5.738752ms
Aug  4 11:17:45.779: INFO: Terminating ReplicationController externalsvc pods took: 300.278891ms
Aug  4 11:17:53.551: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:17:53.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4269" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:18.944 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":275,"completed":173,"skipped":2956,"failed":0}
SSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:17:53.613: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-9788
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug  4 11:17:53.703: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Aug  4 11:17:53.782: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug  4 11:17:55.786: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug  4 11:17:57.786: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug  4 11:17:59.786: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug  4 11:18:01.786: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug  4 11:18:03.786: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug  4 11:18:05.794: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug  4 11:18:07.786: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug  4 11:18:09.786: INFO: The status of Pod netserver-0 is Running (Ready = true)
Aug  4 11:18:09.793: INFO: The status of Pod netserver-1 is Running (Ready = false)
Aug  4 11:18:11.797: INFO: The status of Pod netserver-1 is Running (Ready = false)
Aug  4 11:18:13.797: INFO: The status of Pod netserver-1 is Running (Ready = false)
Aug  4 11:18:15.797: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Aug  4 11:18:19.886: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.30 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9788 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug  4 11:18:19.886: INFO: >>> kubeConfig: /root/.kube/config
I0804 11:18:19.920944       7 log.go:172] (0xc0029e46e0) (0xc000d785a0) Create stream
I0804 11:18:19.920971       7 log.go:172] (0xc0029e46e0) (0xc000d785a0) Stream added, broadcasting: 1
I0804 11:18:19.922962       7 log.go:172] (0xc0029e46e0) Reply frame received for 1
I0804 11:18:19.923012       7 log.go:172] (0xc0029e46e0) (0xc000d788c0) Create stream
I0804 11:18:19.923025       7 log.go:172] (0xc0029e46e0) (0xc000d788c0) Stream added, broadcasting: 3
I0804 11:18:19.924131       7 log.go:172] (0xc0029e46e0) Reply frame received for 3
I0804 11:18:19.924180       7 log.go:172] (0xc0029e46e0) (0xc000431360) Create stream
I0804 11:18:19.924197       7 log.go:172] (0xc0029e46e0) (0xc000431360) Stream added, broadcasting: 5
I0804 11:18:19.925458       7 log.go:172] (0xc0029e46e0) Reply frame received for 5
I0804 11:18:20.982544       7 log.go:172] (0xc0029e46e0) Data frame received for 3
I0804 11:18:20.982584       7 log.go:172] (0xc000d788c0) (3) Data frame handling
I0804 11:18:20.982608       7 log.go:172] (0xc000d788c0) (3) Data frame sent
I0804 11:18:20.982895       7 log.go:172] (0xc0029e46e0) Data frame received for 3
I0804 11:18:20.982923       7 log.go:172] (0xc000d788c0) (3) Data frame handling
I0804 11:18:20.983562       7 log.go:172] (0xc0029e46e0) Data frame received for 5
I0804 11:18:20.983632       7 log.go:172] (0xc000431360) (5) Data frame handling
I0804 11:18:20.986265       7 log.go:172] (0xc0029e46e0) Data frame received for 1
I0804 11:18:20.986286       7 log.go:172] (0xc000d785a0) (1) Data frame handling
I0804 11:18:20.986295       7 log.go:172] (0xc000d785a0) (1) Data frame sent
I0804 11:18:20.986306       7 log.go:172] (0xc0029e46e0) (0xc000d785a0) Stream removed, broadcasting: 1
I0804 11:18:20.986334       7 log.go:172] (0xc0029e46e0) Go away received
I0804 11:18:20.986426       7 log.go:172] (0xc0029e46e0) (0xc000d785a0) Stream removed, broadcasting: 1
I0804 11:18:20.986465       7 log.go:172] (0xc0029e46e0) (0xc000d788c0) Stream removed, broadcasting: 3
I0804 11:18:20.986502       7 log.go:172] (0xc0029e46e0) (0xc000431360) Stream removed, broadcasting: 5
Aug  4 11:18:20.986: INFO: Found all expected endpoints: [netserver-0]
Aug  4 11:18:20.989: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.209 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9788 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug  4 11:18:20.990: INFO: >>> kubeConfig: /root/.kube/config
I0804 11:18:21.021736       7 log.go:172] (0xc002d06160) (0xc0014c4e60) Create stream
I0804 11:18:21.021821       7 log.go:172] (0xc002d06160) (0xc0014c4e60) Stream added, broadcasting: 1
I0804 11:18:21.023846       7 log.go:172] (0xc002d06160) Reply frame received for 1
I0804 11:18:21.023890       7 log.go:172] (0xc002d06160) (0xc000431680) Create stream
I0804 11:18:21.023905       7 log.go:172] (0xc002d06160) (0xc000431680) Stream added, broadcasting: 3
I0804 11:18:21.024925       7 log.go:172] (0xc002d06160) Reply frame received for 3
I0804 11:18:21.024958       7 log.go:172] (0xc002d06160) (0xc000d78d20) Create stream
I0804 11:18:21.024971       7 log.go:172] (0xc002d06160) (0xc000d78d20) Stream added, broadcasting: 5
I0804 11:18:21.025948       7 log.go:172] (0xc002d06160) Reply frame received for 5
I0804 11:18:22.101101       7 log.go:172] (0xc002d06160) Data frame received for 3
I0804 11:18:22.101140       7 log.go:172] (0xc000431680) (3) Data frame handling
I0804 11:18:22.101151       7 log.go:172] (0xc000431680) (3) Data frame sent
I0804 11:18:22.101158       7 log.go:172] (0xc002d06160) Data frame received for 3
I0804 11:18:22.101163       7 log.go:172] (0xc000431680) (3) Data frame handling
I0804 11:18:22.101223       7 log.go:172] (0xc002d06160) Data frame received for 5
I0804 11:18:22.101233       7 log.go:172] (0xc000d78d20) (5) Data frame handling
I0804 11:18:22.102522       7 log.go:172] (0xc002d06160) Data frame received for 1
I0804 11:18:22.102557       7 log.go:172] (0xc0014c4e60) (1) Data frame handling
I0804 11:18:22.102598       7 log.go:172] (0xc0014c4e60) (1) Data frame sent
I0804 11:18:22.102632       7 log.go:172] (0xc002d06160) (0xc0014c4e60) Stream removed, broadcasting: 1
I0804 11:18:22.102662       7 log.go:172] (0xc002d06160) Go away received
I0804 11:18:22.102740       7 log.go:172] (0xc002d06160) (0xc0014c4e60) Stream removed, broadcasting: 1
I0804 11:18:22.102758       7 log.go:172] (0xc002d06160) (0xc000431680) Stream removed, broadcasting: 3
I0804 11:18:22.102765       7 log.go:172] (0xc002d06160) (0xc000d78d20) Stream removed, broadcasting: 5
Aug  4 11:18:22.102: INFO: Found all expected endpoints: [netserver-1]
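
Both probes above use the same technique: from host-test-container-pod (which stands in for the node side of this node-pod check) the framework pipes the literal string hostName into nc in UDP mode, aimed at each netserver pod IP on port 8081, and each netserver is expected to answer with its own hostname, which is how the endpoint lists [netserver-0] and [netserver-1] get matched. A condensed sketch of one probe, using only values that appear in the log above (the framework additionally filters blank lines from the reply with grep):

    kubectl exec -n pod-network-test-9788 host-test-container-pod -c agnhost -- \
      /bin/sh -c 'echo hostName | nc -w 1 -u 10.244.2.30 8081'
    # a healthy netserver replies with its hostname, e.g. netserver-0
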
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:18:22.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9788" for this suite.

• [SLOW TEST:28.498 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":174,"skipped":2959,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:18:22.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug  4 11:18:22.255: INFO: Waiting up to 5m0s for pod "pod-a7086079-68b1-43c3-9bc8-e78be1998bd0" in namespace "emptydir-5287" to be "Succeeded or Failed"
Aug  4 11:18:22.412: INFO: Pod "pod-a7086079-68b1-43c3-9bc8-e78be1998bd0": Phase="Pending", Reason="", readiness=false. Elapsed: 156.606823ms
Aug  4 11:18:24.439: INFO: Pod "pod-a7086079-68b1-43c3-9bc8-e78be1998bd0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.183816082s
Aug  4 11:18:26.442: INFO: Pod "pod-a7086079-68b1-43c3-9bc8-e78be1998bd0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.186980515s
STEP: Saw pod success
Aug  4 11:18:26.442: INFO: Pod "pod-a7086079-68b1-43c3-9bc8-e78be1998bd0" satisfied condition "Succeeded or Failed"
Aug  4 11:18:26.445: INFO: Trying to get logs from node kali-worker pod pod-a7086079-68b1-43c3-9bc8-e78be1998bd0 container test-container: 
STEP: delete the pod
Aug  4 11:18:26.514: INFO: Waiting for pod pod-a7086079-68b1-43c3-9bc8-e78be1998bd0 to disappear
Aug  4 11:18:26.522: INFO: Pod pod-a7086079-68b1-43c3-9bc8-e78be1998bd0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:18:26.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5287" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":175,"skipped":2999,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:18:26.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug  4 11:18:26.645: INFO: Waiting up to 5m0s for pod "pod-10650a88-b51d-4ec8-aca6-a80e8e4e93a2" in namespace "emptydir-2090" to be "Succeeded or Failed"
Aug  4 11:18:26.685: INFO: Pod "pod-10650a88-b51d-4ec8-aca6-a80e8e4e93a2": Phase="Pending", Reason="", readiness=false. Elapsed: 39.964757ms
Aug  4 11:18:28.717: INFO: Pod "pod-10650a88-b51d-4ec8-aca6-a80e8e4e93a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071899214s
Aug  4 11:18:30.728: INFO: Pod "pod-10650a88-b51d-4ec8-aca6-a80e8e4e93a2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083059046s
Aug  4 11:18:32.733: INFO: Pod "pod-10650a88-b51d-4ec8-aca6-a80e8e4e93a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.087677785s
STEP: Saw pod success
Aug  4 11:18:32.733: INFO: Pod "pod-10650a88-b51d-4ec8-aca6-a80e8e4e93a2" satisfied condition "Succeeded or Failed"
Aug  4 11:18:32.736: INFO: Trying to get logs from node kali-worker pod pod-10650a88-b51d-4ec8-aca6-a80e8e4e93a2 container test-container: 
STEP: delete the pod
Aug  4 11:18:32.814: INFO: Waiting for pod pod-10650a88-b51d-4ec8-aca6-a80e8e4e93a2 to disappear
Aug  4 11:18:32.824: INFO: Pod pod-10650a88-b51d-4ec8-aca6-a80e8e4e93a2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:18:32.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2090" for this suite.

• [SLOW TEST:6.306 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":176,"skipped":3036,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:18:32.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug  4 11:18:40.994: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug  4 11:18:41.019: INFO: Pod pod-with-prestop-http-hook still exists
Aug  4 11:18:43.019: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug  4 11:18:43.022: INFO: Pod pod-with-prestop-http-hook still exists
Aug  4 11:18:45.019: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug  4 11:18:45.024: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:18:45.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2766" for this suite.

• [SLOW TEST:12.215 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":275,"completed":177,"skipped":3048,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:18:45.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating secret secrets-9228/secret-test-fe14a4db-a72d-4f7c-958a-b0b3bf61fb50
STEP: Creating a pod to test consume secrets
Aug  4 11:18:45.147: INFO: Waiting up to 5m0s for pod "pod-configmaps-d19211e8-7a35-47d1-92c4-2148e983b485" in namespace "secrets-9228" to be "Succeeded or Failed"
Aug  4 11:18:45.151: INFO: Pod "pod-configmaps-d19211e8-7a35-47d1-92c4-2148e983b485": Phase="Pending", Reason="", readiness=false. Elapsed: 3.778298ms
Aug  4 11:18:47.156: INFO: Pod "pod-configmaps-d19211e8-7a35-47d1-92c4-2148e983b485": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008080081s
Aug  4 11:18:49.160: INFO: Pod "pod-configmaps-d19211e8-7a35-47d1-92c4-2148e983b485": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012414809s
STEP: Saw pod success
Aug  4 11:18:49.160: INFO: Pod "pod-configmaps-d19211e8-7a35-47d1-92c4-2148e983b485" satisfied condition "Succeeded or Failed"
Aug  4 11:18:49.163: INFO: Trying to get logs from node kali-worker pod pod-configmaps-d19211e8-7a35-47d1-92c4-2148e983b485 container env-test: 
STEP: delete the pod
Aug  4 11:18:49.198: INFO: Waiting for pod pod-configmaps-d19211e8-7a35-47d1-92c4-2148e983b485 to disappear
Aug  4 11:18:49.231: INFO: Pod pod-configmaps-d19211e8-7a35-47d1-92c4-2148e983b485 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:18:49.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9228" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":178,"skipped":3066,"failed":0}
SSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:18:49.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-695b44f2-0151-455c-a6e3-de38ecbe5f69
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-695b44f2-0151-455c-a6e3-de38ecbe5f69
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:18:55.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1648" for this suite.

• [SLOW TEST:6.130 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":179,"skipped":3069,"failed":0}
SSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:18:55.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service externalname-service with the type=ExternalName in namespace services-7581
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-7581
I0804 11:18:55.565377       7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-7581, replica count: 2
I0804 11:18:58.616115       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0804 11:19:01.616479       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug  4 11:19:01.616: INFO: Creating new exec pod
Aug  4 11:19:06.642: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-7581 execpodprnvx -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Aug  4 11:19:06.886: INFO: stderr: "I0804 11:19:06.777639    2294 log.go:172] (0xc00003abb0) (0xc000a34140) Create stream\nI0804 11:19:06.777704    2294 log.go:172] (0xc00003abb0) (0xc000a34140) Stream added, broadcasting: 1\nI0804 11:19:06.780998    2294 log.go:172] (0xc00003abb0) Reply frame received for 1\nI0804 11:19:06.781103    2294 log.go:172] (0xc00003abb0) (0xc0009dc000) Create stream\nI0804 11:19:06.781119    2294 log.go:172] (0xc00003abb0) (0xc0009dc000) Stream added, broadcasting: 3\nI0804 11:19:06.782378    2294 log.go:172] (0xc00003abb0) Reply frame received for 3\nI0804 11:19:06.782416    2294 log.go:172] (0xc00003abb0) (0xc0006c92c0) Create stream\nI0804 11:19:06.782432    2294 log.go:172] (0xc00003abb0) (0xc0006c92c0) Stream added, broadcasting: 5\nI0804 11:19:06.783457    2294 log.go:172] (0xc00003abb0) Reply frame received for 5\nI0804 11:19:06.878350    2294 log.go:172] (0xc00003abb0) Data frame received for 5\nI0804 11:19:06.878384    2294 log.go:172] (0xc0006c92c0) (5) Data frame handling\nI0804 11:19:06.878414    2294 log.go:172] (0xc0006c92c0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0804 11:19:06.878767    2294 log.go:172] (0xc00003abb0) Data frame received for 5\nI0804 11:19:06.878788    2294 log.go:172] (0xc0006c92c0) (5) Data frame handling\nI0804 11:19:06.878805    2294 log.go:172] (0xc0006c92c0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0804 11:19:06.878989    2294 log.go:172] (0xc00003abb0) Data frame received for 5\nI0804 11:19:06.879011    2294 log.go:172] (0xc0006c92c0) (5) Data frame handling\nI0804 11:19:06.879198    2294 log.go:172] (0xc00003abb0) Data frame received for 3\nI0804 11:19:06.879219    2294 log.go:172] (0xc0009dc000) (3) Data frame handling\nI0804 11:19:06.881118    2294 log.go:172] (0xc00003abb0) Data frame received for 1\nI0804 11:19:06.881139    2294 log.go:172] (0xc000a34140) (1) Data frame handling\nI0804 11:19:06.881152    2294 log.go:172] (0xc000a34140) (1) Data frame sent\nI0804 11:19:06.881169    2294 log.go:172] (0xc00003abb0) (0xc000a34140) Stream removed, broadcasting: 1\nI0804 11:19:06.881325    2294 log.go:172] (0xc00003abb0) Go away received\nI0804 11:19:06.881492    2294 log.go:172] (0xc00003abb0) (0xc000a34140) Stream removed, broadcasting: 1\nI0804 11:19:06.881507    2294 log.go:172] (0xc00003abb0) (0xc0009dc000) Stream removed, broadcasting: 3\nI0804 11:19:06.881513    2294 log.go:172] (0xc00003abb0) (0xc0006c92c0) Stream removed, broadcasting: 5\n"
Aug  4 11:19:06.886: INFO: stdout: ""
Aug  4 11:19:06.887: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-7581 execpodprnvx -- /bin/sh -x -c nc -zv -t -w 2 10.110.250.232 80'
Aug  4 11:19:07.089: INFO: stderr: "I0804 11:19:07.011936    2315 log.go:172] (0xc0004ceb00) (0xc0007da1e0) Create stream\nI0804 11:19:07.011986    2315 log.go:172] (0xc0004ceb00) (0xc0007da1e0) Stream added, broadcasting: 1\nI0804 11:19:07.017648    2315 log.go:172] (0xc0004ceb00) Reply frame received for 1\nI0804 11:19:07.017687    2315 log.go:172] (0xc0004ceb00) (0xc0007da280) Create stream\nI0804 11:19:07.017695    2315 log.go:172] (0xc0004ceb00) (0xc0007da280) Stream added, broadcasting: 3\nI0804 11:19:07.018861    2315 log.go:172] (0xc0004ceb00) Reply frame received for 3\nI0804 11:19:07.018894    2315 log.go:172] (0xc0004ceb00) (0xc000b06000) Create stream\nI0804 11:19:07.018904    2315 log.go:172] (0xc0004ceb00) (0xc000b06000) Stream added, broadcasting: 5\nI0804 11:19:07.019812    2315 log.go:172] (0xc0004ceb00) Reply frame received for 5\nI0804 11:19:07.083115    2315 log.go:172] (0xc0004ceb00) Data frame received for 3\nI0804 11:19:07.083148    2315 log.go:172] (0xc0007da280) (3) Data frame handling\nI0804 11:19:07.083170    2315 log.go:172] (0xc0004ceb00) Data frame received for 5\nI0804 11:19:07.083180    2315 log.go:172] (0xc000b06000) (5) Data frame handling\nI0804 11:19:07.083193    2315 log.go:172] (0xc000b06000) (5) Data frame sent\nI0804 11:19:07.083201    2315 log.go:172] (0xc0004ceb00) Data frame received for 5\nI0804 11:19:07.083208    2315 log.go:172] (0xc000b06000) (5) Data frame handling\n+ nc -zv -t -w 2 10.110.250.232 80\nConnection to 10.110.250.232 80 port [tcp/http] succeeded!\nI0804 11:19:07.084643    2315 log.go:172] (0xc0004ceb00) Data frame received for 1\nI0804 11:19:07.084666    2315 log.go:172] (0xc0007da1e0) (1) Data frame handling\nI0804 11:19:07.084678    2315 log.go:172] (0xc0007da1e0) (1) Data frame sent\nI0804 11:19:07.084691    2315 log.go:172] (0xc0004ceb00) (0xc0007da1e0) Stream removed, broadcasting: 1\nI0804 11:19:07.084840    2315 log.go:172] (0xc0004ceb00) Go away received\nI0804 11:19:07.085089    2315 log.go:172] (0xc0004ceb00) (0xc0007da1e0) Stream removed, broadcasting: 1\nI0804 11:19:07.085103    2315 log.go:172] (0xc0004ceb00) (0xc0007da280) Stream removed, broadcasting: 3\nI0804 11:19:07.085108    2315 log.go:172] (0xc0004ceb00) (0xc000b06000) Stream removed, broadcasting: 5\n"
Aug  4 11:19:07.089: INFO: stdout: ""
Aug  4 11:19:07.089: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-7581 execpodprnvx -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 30415'
Aug  4 11:19:07.292: INFO: stderr: "I0804 11:19:07.207289    2335 log.go:172] (0xc000a64000) (0xc00098a000) Create stream\nI0804 11:19:07.207335    2335 log.go:172] (0xc000a64000) (0xc00098a000) Stream added, broadcasting: 1\nI0804 11:19:07.209538    2335 log.go:172] (0xc000a64000) Reply frame received for 1\nI0804 11:19:07.209569    2335 log.go:172] (0xc000a64000) (0xc000565e00) Create stream\nI0804 11:19:07.209578    2335 log.go:172] (0xc000a64000) (0xc000565e00) Stream added, broadcasting: 3\nI0804 11:19:07.210284    2335 log.go:172] (0xc000a64000) Reply frame received for 3\nI0804 11:19:07.210309    2335 log.go:172] (0xc000a64000) (0xc000900000) Create stream\nI0804 11:19:07.210318    2335 log.go:172] (0xc000a64000) (0xc000900000) Stream added, broadcasting: 5\nI0804 11:19:07.211026    2335 log.go:172] (0xc000a64000) Reply frame received for 5\nI0804 11:19:07.285770    2335 log.go:172] (0xc000a64000) Data frame received for 3\nI0804 11:19:07.285797    2335 log.go:172] (0xc000565e00) (3) Data frame handling\nI0804 11:19:07.285847    2335 log.go:172] (0xc000a64000) Data frame received for 5\nI0804 11:19:07.285881    2335 log.go:172] (0xc000900000) (5) Data frame handling\nI0804 11:19:07.285902    2335 log.go:172] (0xc000900000) (5) Data frame sent\nI0804 11:19:07.285915    2335 log.go:172] (0xc000a64000) Data frame received for 5\nI0804 11:19:07.285926    2335 log.go:172] (0xc000900000) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.13 30415\nConnection to 172.18.0.13 30415 port [tcp/30415] succeeded!\nI0804 11:19:07.287352    2335 log.go:172] (0xc000a64000) Data frame received for 1\nI0804 11:19:07.287381    2335 log.go:172] (0xc00098a000) (1) Data frame handling\nI0804 11:19:07.287394    2335 log.go:172] (0xc00098a000) (1) Data frame sent\nI0804 11:19:07.287408    2335 log.go:172] (0xc000a64000) (0xc00098a000) Stream removed, broadcasting: 1\nI0804 11:19:07.287513    2335 log.go:172] (0xc000a64000) Go away received\nI0804 11:19:07.287812    2335 log.go:172] (0xc000a64000) (0xc00098a000) Stream removed, broadcasting: 1\nI0804 11:19:07.287826    2335 log.go:172] (0xc000a64000) (0xc000565e00) Stream removed, broadcasting: 3\nI0804 11:19:07.287833    2335 log.go:172] (0xc000a64000) (0xc000900000) Stream removed, broadcasting: 5\n"
Aug  4 11:19:07.293: INFO: stdout: ""
Aug  4 11:19:07.293: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-7581 execpodprnvx -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.15 30415'
Aug  4 11:19:07.486: INFO: stderr: "I0804 11:19:07.409710    2354 log.go:172] (0xc00091ab00) (0xc0006b5540) Create stream\nI0804 11:19:07.409779    2354 log.go:172] (0xc00091ab00) (0xc0006b5540) Stream added, broadcasting: 1\nI0804 11:19:07.412425    2354 log.go:172] (0xc00091ab00) Reply frame received for 1\nI0804 11:19:07.412460    2354 log.go:172] (0xc00091ab00) (0xc000a98000) Create stream\nI0804 11:19:07.412468    2354 log.go:172] (0xc00091ab00) (0xc000a98000) Stream added, broadcasting: 3\nI0804 11:19:07.413448    2354 log.go:172] (0xc00091ab00) Reply frame received for 3\nI0804 11:19:07.413492    2354 log.go:172] (0xc00091ab00) (0xc0006b55e0) Create stream\nI0804 11:19:07.413506    2354 log.go:172] (0xc00091ab00) (0xc0006b55e0) Stream added, broadcasting: 5\nI0804 11:19:07.414167    2354 log.go:172] (0xc00091ab00) Reply frame received for 5\nI0804 11:19:07.479181    2354 log.go:172] (0xc00091ab00) Data frame received for 3\nI0804 11:19:07.479225    2354 log.go:172] (0xc000a98000) (3) Data frame handling\nI0804 11:19:07.479260    2354 log.go:172] (0xc00091ab00) Data frame received for 5\nI0804 11:19:07.479284    2354 log.go:172] (0xc0006b55e0) (5) Data frame handling\nI0804 11:19:07.479301    2354 log.go:172] (0xc0006b55e0) (5) Data frame sent\nI0804 11:19:07.479318    2354 log.go:172] (0xc00091ab00) Data frame received for 5\nI0804 11:19:07.479327    2354 log.go:172] (0xc0006b55e0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.15 30415\nConnection to 172.18.0.15 30415 port [tcp/30415] succeeded!\nI0804 11:19:07.480401    2354 log.go:172] (0xc00091ab00) Data frame received for 1\nI0804 11:19:07.480419    2354 log.go:172] (0xc0006b5540) (1) Data frame handling\nI0804 11:19:07.480428    2354 log.go:172] (0xc0006b5540) (1) Data frame sent\nI0804 11:19:07.480442    2354 log.go:172] (0xc00091ab00) (0xc0006b5540) Stream removed, broadcasting: 1\nI0804 11:19:07.480453    2354 log.go:172] (0xc00091ab00) Go away received\nI0804 11:19:07.481075    2354 log.go:172] (0xc00091ab00) (0xc0006b5540) Stream removed, broadcasting: 1\nI0804 11:19:07.481100    2354 log.go:172] (0xc00091ab00) (0xc000a98000) Stream removed, broadcasting: 3\nI0804 11:19:07.481112    2354 log.go:172] (0xc00091ab00) (0xc0006b55e0) Stream removed, broadcasting: 5\n"
Aug  4 11:19:07.486: INFO: stdout: ""
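
Taken together, the four nc checks above cover the reachability matrix for the converted service: by DNS name (externalname-service on port 80), by cluster IP (10.110.250.232:80), and on the allocated NodePort 30415 of each node (172.18.0.13 and 172.18.0.15), all from the execpodprnvx helper pod. A condensed sketch of the same checks run by hand:

    kubectl exec -n services-7581 execpodprnvx -- nc -zv -t -w 2 externalname-service 80
    kubectl exec -n services-7581 execpodprnvx -- nc -zv -t -w 2 10.110.250.232 80        # cluster IP
    kubectl exec -n services-7581 execpodprnvx -- nc -zv -t -w 2 172.18.0.13 30415        # node IP : NodePort
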
Aug  4 11:19:07.486: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:19:07.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7581" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:12.161 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":275,"completed":180,"skipped":3075,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:19:07.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8583.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8583.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8583.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8583.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8583.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8583.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8583.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8583.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8583.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8583.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8583.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 39.37.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.37.39_udp@PTR;check="$$(dig +tcp +noall +answer +search 39.37.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.37.39_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8583.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8583.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8583.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8583.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8583.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8583.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8583.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8583.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8583.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8583.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8583.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 39.37.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.37.39_udp@PTR;check="$$(dig +tcp +noall +answer +search 39.37.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.37.39_tcp@PTR;sleep 1; done

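Both generated scripts follow the same pattern: for each expected record they run dig twice (UDP via +notcp and TCP via +tcp) and write an OK marker under /results only when the answer section is non-empty; the framework then polls those marker files back through the API, so the "Unable to read ..." lines further down are just retries while the files have not been written yet. Individual lookups, lifted from the script above, look like:

    dig +notcp +noall +answer +search dns-test-service.dns-8583.svc.cluster.local A
    dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8583.svc.cluster.local SRV
    dig +notcp +noall +answer +search 39.37.103.10.in-addr.arpa. PTR    # reverse lookup of the service IP 10.103.37.39
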
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug  4 11:19:15.978: INFO: Unable to read wheezy_udp@dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:15.981: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:15.984: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:15.988: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:16.063: INFO: Unable to read jessie_udp@dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:16.066: INFO: Unable to read jessie_tcp@dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:16.069: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:16.073: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:16.090: INFO: Lookups using dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6 failed for: [wheezy_udp@dns-test-service.dns-8583.svc.cluster.local wheezy_tcp@dns-test-service.dns-8583.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local jessie_udp@dns-test-service.dns-8583.svc.cluster.local jessie_tcp@dns-test-service.dns-8583.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local]

Aug  4 11:19:21.095: INFO: Unable to read wheezy_udp@dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:21.099: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:21.103: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:21.107: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:21.125: INFO: Unable to read jessie_udp@dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:21.127: INFO: Unable to read jessie_tcp@dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:21.129: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:21.132: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:21.147: INFO: Lookups using dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6 failed for: [wheezy_udp@dns-test-service.dns-8583.svc.cluster.local wheezy_tcp@dns-test-service.dns-8583.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local jessie_udp@dns-test-service.dns-8583.svc.cluster.local jessie_tcp@dns-test-service.dns-8583.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local]

Aug  4 11:19:26.197: INFO: Unable to read wheezy_udp@dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:26.201: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:26.213: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:26.216: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:26.239: INFO: Unable to read jessie_udp@dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:26.242: INFO: Unable to read jessie_tcp@dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:26.245: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:26.248: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:26.265: INFO: Lookups using dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6 failed for: [wheezy_udp@dns-test-service.dns-8583.svc.cluster.local wheezy_tcp@dns-test-service.dns-8583.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local jessie_udp@dns-test-service.dns-8583.svc.cluster.local jessie_tcp@dns-test-service.dns-8583.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local]

Aug  4 11:19:31.095: INFO: Unable to read wheezy_udp@dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:31.099: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:31.102: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:31.106: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:31.143: INFO: Unable to read jessie_udp@dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:31.146: INFO: Unable to read jessie_tcp@dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:31.151: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:31.154: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:31.169: INFO: Lookups using dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6 failed for: [wheezy_udp@dns-test-service.dns-8583.svc.cluster.local wheezy_tcp@dns-test-service.dns-8583.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local jessie_udp@dns-test-service.dns-8583.svc.cluster.local jessie_tcp@dns-test-service.dns-8583.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local]

Aug  4 11:19:36.095: INFO: Unable to read wheezy_udp@dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:36.098: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:36.101: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:36.104: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:36.123: INFO: Unable to read jessie_udp@dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:36.126: INFO: Unable to read jessie_tcp@dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:36.129: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:36.131: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:36.164: INFO: Lookups using dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6 failed for: [wheezy_udp@dns-test-service.dns-8583.svc.cluster.local wheezy_tcp@dns-test-service.dns-8583.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local jessie_udp@dns-test-service.dns-8583.svc.cluster.local jessie_tcp@dns-test-service.dns-8583.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local]

Aug  4 11:19:41.095: INFO: Unable to read wheezy_udp@dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:41.099: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:41.102: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:41.105: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:41.123: INFO: Unable to read jessie_udp@dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:41.125: INFO: Unable to read jessie_tcp@dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:41.128: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:41.131: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local from pod dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6: the server could not find the requested resource (get pods dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6)
Aug  4 11:19:41.145: INFO: Lookups using dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6 failed for: [wheezy_udp@dns-test-service.dns-8583.svc.cluster.local wheezy_tcp@dns-test-service.dns-8583.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local jessie_udp@dns-test-service.dns-8583.svc.cluster.local jessie_tcp@dns-test-service.dns-8583.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8583.svc.cluster.local]

Aug  4 11:19:46.155: INFO: DNS probes using dns-8583/dns-test-d16ca89b-db0c-47f4-a175-e1b2ccc849e6 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:19:46.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8583" for this suite.

• [SLOW TEST:39.413 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":275,"completed":181,"skipped":3109,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:19:46.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with configMap that has name projected-configmap-test-upd-0597c742-ea94-4e2a-a048-853ba9629736
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-0597c742-ea94-4e2a-a048-853ba9629736
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:19:55.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2454" for this suite.

• [SLOW TEST:8.357 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":182,"skipped":3116,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:19:55.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-downwardapi-chth
STEP: Creating a pod to test atomic-volume-subpath
Aug  4 11:19:55.461: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-chth" in namespace "subpath-4021" to be "Succeeded or Failed"
Aug  4 11:19:55.519: INFO: Pod "pod-subpath-test-downwardapi-chth": Phase="Pending", Reason="", readiness=false. Elapsed: 57.269239ms
Aug  4 11:19:57.522: INFO: Pod "pod-subpath-test-downwardapi-chth": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061077708s
Aug  4 11:19:59.526: INFO: Pod "pod-subpath-test-downwardapi-chth": Phase="Running", Reason="", readiness=true. Elapsed: 4.064847446s
Aug  4 11:20:01.530: INFO: Pod "pod-subpath-test-downwardapi-chth": Phase="Running", Reason="", readiness=true. Elapsed: 6.068592818s
Aug  4 11:20:03.534: INFO: Pod "pod-subpath-test-downwardapi-chth": Phase="Running", Reason="", readiness=true. Elapsed: 8.072861447s
Aug  4 11:20:05.538: INFO: Pod "pod-subpath-test-downwardapi-chth": Phase="Running", Reason="", readiness=true. Elapsed: 10.076602383s
Aug  4 11:20:07.542: INFO: Pod "pod-subpath-test-downwardapi-chth": Phase="Running", Reason="", readiness=true. Elapsed: 12.080312591s
Aug  4 11:20:09.545: INFO: Pod "pod-subpath-test-downwardapi-chth": Phase="Running", Reason="", readiness=true. Elapsed: 14.083820627s
Aug  4 11:20:11.549: INFO: Pod "pod-subpath-test-downwardapi-chth": Phase="Running", Reason="", readiness=true. Elapsed: 16.087790802s
Aug  4 11:20:13.553: INFO: Pod "pod-subpath-test-downwardapi-chth": Phase="Running", Reason="", readiness=true. Elapsed: 18.091798849s
Aug  4 11:20:15.557: INFO: Pod "pod-subpath-test-downwardapi-chth": Phase="Running", Reason="", readiness=true. Elapsed: 20.095384381s
Aug  4 11:20:17.561: INFO: Pod "pod-subpath-test-downwardapi-chth": Phase="Running", Reason="", readiness=true. Elapsed: 22.099476995s
Aug  4 11:20:19.565: INFO: Pod "pod-subpath-test-downwardapi-chth": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.103422045s
STEP: Saw pod success
Aug  4 11:20:19.565: INFO: Pod "pod-subpath-test-downwardapi-chth" satisfied condition "Succeeded or Failed"
Aug  4 11:20:19.568: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-downwardapi-chth container test-container-subpath-downwardapi-chth: 
STEP: delete the pod
Aug  4 11:20:19.650: INFO: Waiting for pod pod-subpath-test-downwardapi-chth to disappear
Aug  4 11:20:19.657: INFO: Pod pod-subpath-test-downwardapi-chth no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-chth
Aug  4 11:20:19.657: INFO: Deleting pod "pod-subpath-test-downwardapi-chth" in namespace "subpath-4021"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:20:19.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4021" for this suite.

• [SLOW TEST:24.396 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":275,"completed":183,"skipped":3140,"failed":0}
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:20:19.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug  4 11:20:19.834: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:20:19.858: INFO: Number of nodes with available pods: 0
Aug  4 11:20:19.858: INFO: Node kali-worker is running more than one daemon pod
Aug  4 11:20:20.864: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:20:20.868: INFO: Number of nodes with available pods: 0
Aug  4 11:20:20.868: INFO: Node kali-worker is running more than one daemon pod
Aug  4 11:20:21.863: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:20:21.951: INFO: Number of nodes with available pods: 0
Aug  4 11:20:21.951: INFO: Node kali-worker is running more than one daemon pod
Aug  4 11:20:23.008: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:20:23.011: INFO: Number of nodes with available pods: 0
Aug  4 11:20:23.011: INFO: Node kali-worker is running more than one daemon pod
Aug  4 11:20:23.864: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:20:23.868: INFO: Number of nodes with available pods: 1
Aug  4 11:20:23.868: INFO: Node kali-worker is running more than one daemon pod
Aug  4 11:20:24.910: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:20:24.913: INFO: Number of nodes with available pods: 2
Aug  4 11:20:24.914: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Aug  4 11:20:24.955: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:20:24.966: INFO: Number of nodes with available pods: 1
Aug  4 11:20:24.966: INFO: Node kali-worker2 is running more than one daemon pod
Aug  4 11:20:25.971: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:20:25.974: INFO: Number of nodes with available pods: 1
Aug  4 11:20:25.974: INFO: Node kali-worker2 is running more than one daemon pod
Aug  4 11:20:27.209: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:20:27.404: INFO: Number of nodes with available pods: 1
Aug  4 11:20:27.404: INFO: Node kali-worker2 is running more than one daemon pod
Aug  4 11:20:27.972: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:20:27.976: INFO: Number of nodes with available pods: 1
Aug  4 11:20:27.976: INFO: Node kali-worker2 is running more than one daemon pod
Aug  4 11:20:28.971: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:20:28.974: INFO: Number of nodes with available pods: 1
Aug  4 11:20:28.974: INFO: Node kali-worker2 is running more than one daemon pod
Aug  4 11:20:29.973: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:20:29.977: INFO: Number of nodes with available pods: 2
Aug  4 11:20:29.977: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7490, will wait for the garbage collector to delete the pods
Aug  4 11:20:30.041: INFO: Deleting DaemonSet.extensions daemon-set took: 5.268822ms
Aug  4 11:20:30.441: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.268311ms
Aug  4 11:20:43.545: INFO: Number of nodes with available pods: 0
Aug  4 11:20:43.545: INFO: Number of running nodes: 0, number of available pods: 0
Aug  4 11:20:43.547: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7490/daemonsets","resourceVersion":"6680354"},"items":null}

Aug  4 11:20:43.549: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7490/pods","resourceVersion":"6680354"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:20:43.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7490" for this suite.

• [SLOW TEST:23.866 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":275,"completed":184,"skipped":3147,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:20:43.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-5800
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-5800
STEP: creating replication controller externalsvc in namespace services-5800
I0804 11:20:43.784807       7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-5800, replica count: 2
I0804 11:20:46.835199       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0804 11:20:49.835464       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
Aug  4 11:20:49.872: INFO: Creating new exec pod
Aug  4 11:20:53.945: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-5800 execpod9pfq4 -- /bin/sh -x -c nslookup clusterip-service'
Aug  4 11:20:54.170: INFO: stderr: "I0804 11:20:54.079759    2377 log.go:172] (0xc0000e9340) (0xc0003c3720) Create stream\nI0804 11:20:54.079836    2377 log.go:172] (0xc0000e9340) (0xc0003c3720) Stream added, broadcasting: 1\nI0804 11:20:54.083271    2377 log.go:172] (0xc0000e9340) Reply frame received for 1\nI0804 11:20:54.083317    2377 log.go:172] (0xc0000e9340) (0xc000674e60) Create stream\nI0804 11:20:54.083330    2377 log.go:172] (0xc0000e9340) (0xc000674e60) Stream added, broadcasting: 3\nI0804 11:20:54.084321    2377 log.go:172] (0xc0000e9340) Reply frame received for 3\nI0804 11:20:54.084350    2377 log.go:172] (0xc0000e9340) (0xc0003c37c0) Create stream\nI0804 11:20:54.084364    2377 log.go:172] (0xc0000e9340) (0xc0003c37c0) Stream added, broadcasting: 5\nI0804 11:20:54.085422    2377 log.go:172] (0xc0000e9340) Reply frame received for 5\nI0804 11:20:54.157699    2377 log.go:172] (0xc0000e9340) Data frame received for 5\nI0804 11:20:54.157829    2377 log.go:172] (0xc0003c37c0) (5) Data frame handling\nI0804 11:20:54.157875    2377 log.go:172] (0xc0003c37c0) (5) Data frame sent\n+ nslookup clusterip-service\nI0804 11:20:54.163563    2377 log.go:172] (0xc0000e9340) Data frame received for 3\nI0804 11:20:54.163576    2377 log.go:172] (0xc000674e60) (3) Data frame handling\nI0804 11:20:54.163586    2377 log.go:172] (0xc000674e60) (3) Data frame sent\nI0804 11:20:54.164712    2377 log.go:172] (0xc0000e9340) Data frame received for 3\nI0804 11:20:54.164811    2377 log.go:172] (0xc000674e60) (3) Data frame handling\nI0804 11:20:54.164855    2377 log.go:172] (0xc000674e60) (3) Data frame sent\nI0804 11:20:54.165190    2377 log.go:172] (0xc0000e9340) Data frame received for 5\nI0804 11:20:54.165201    2377 log.go:172] (0xc0003c37c0) (5) Data frame handling\nI0804 11:20:54.165425    2377 log.go:172] (0xc0000e9340) Data frame received for 3\nI0804 11:20:54.165442    2377 log.go:172] (0xc000674e60) (3) Data frame handling\nI0804 11:20:54.167115    2377 log.go:172] (0xc0000e9340) Data frame received for 1\nI0804 11:20:54.167136    2377 log.go:172] (0xc0003c3720) (1) Data frame handling\nI0804 11:20:54.167148    2377 log.go:172] (0xc0003c3720) (1) Data frame sent\nI0804 11:20:54.167165    2377 log.go:172] (0xc0000e9340) (0xc0003c3720) Stream removed, broadcasting: 1\nI0804 11:20:54.167187    2377 log.go:172] (0xc0000e9340) Go away received\nI0804 11:20:54.167448    2377 log.go:172] (0xc0000e9340) (0xc0003c3720) Stream removed, broadcasting: 1\nI0804 11:20:54.167470    2377 log.go:172] (0xc0000e9340) (0xc000674e60) Stream removed, broadcasting: 3\nI0804 11:20:54.167483    2377 log.go:172] (0xc0000e9340) (0xc0003c37c0) Stream removed, broadcasting: 5\n"
Aug  4 11:20:54.170: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-5800.svc.cluster.local\tcanonical name = externalsvc.services-5800.svc.cluster.local.\nName:\texternalsvc.services-5800.svc.cluster.local\nAddress: 10.98.250.225\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-5800, will wait for the garbage collector to delete the pods
Aug  4 11:20:54.229: INFO: Deleting ReplicationController externalsvc took: 5.584472ms
Aug  4 11:20:54.529: INFO: Terminating ReplicationController externalsvc pods took: 300.225055ms
Aug  4 11:21:03.491: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:21:03.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5800" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:20.104 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":275,"completed":185,"skipped":3160,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:21:03.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Aug  4 11:21:08.442: INFO: Successfully updated pod "labelsupdatea3ff4c00-ea9e-40f5-95bf-1429f90ec438"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:21:10.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-410" for this suite.

• [SLOW TEST:6.844 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":186,"skipped":3173,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:21:10.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-map-439633ad-230e-4465-931e-8e6fd529bf2f
STEP: Creating a pod to test consume secrets
Aug  4 11:21:10.598: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-67387513-4715-4cc6-a1e6-226de56d274f" in namespace "projected-626" to be "Succeeded or Failed"
Aug  4 11:21:10.621: INFO: Pod "pod-projected-secrets-67387513-4715-4cc6-a1e6-226de56d274f": Phase="Pending", Reason="", readiness=false. Elapsed: 23.219574ms
Aug  4 11:21:12.666: INFO: Pod "pod-projected-secrets-67387513-4715-4cc6-a1e6-226de56d274f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067609925s
Aug  4 11:21:14.671: INFO: Pod "pod-projected-secrets-67387513-4715-4cc6-a1e6-226de56d274f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.073280114s
STEP: Saw pod success
Aug  4 11:21:14.672: INFO: Pod "pod-projected-secrets-67387513-4715-4cc6-a1e6-226de56d274f" satisfied condition "Succeeded or Failed"
Aug  4 11:21:14.675: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-67387513-4715-4cc6-a1e6-226de56d274f container projected-secret-volume-test: 
STEP: delete the pod
Aug  4 11:21:14.750: INFO: Waiting for pod pod-projected-secrets-67387513-4715-4cc6-a1e6-226de56d274f to disappear
Aug  4 11:21:14.755: INFO: Pod pod-projected-secrets-67387513-4715-4cc6-a1e6-226de56d274f no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:21:14.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-626" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":187,"skipped":3178,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:21:14.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-6505
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
Aug  4 11:21:14.934: INFO: Found 0 stateful pods, waiting for 3
Aug  4 11:21:24.939: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug  4 11:21:24.939: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug  4 11:21:24.939: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug  4 11:21:34.939: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug  4 11:21:34.939: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug  4 11:21:34.939: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Aug  4 11:21:34.965: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Aug  4 11:21:45.025: INFO: Updating stateful set ss2
Aug  4 11:21:45.082: INFO: Waiting for Pod statefulset-6505/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Aug  4 11:21:55.995: INFO: Found 2 stateful pods, waiting for 3
Aug  4 11:22:06.004: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug  4 11:22:06.004: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug  4 11:22:06.004: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Aug  4 11:22:06.027: INFO: Updating stateful set ss2
Aug  4 11:22:06.131: INFO: Waiting for Pod statefulset-6505/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug  4 11:22:16.158: INFO: Updating stateful set ss2
Aug  4 11:22:16.192: INFO: Waiting for StatefulSet statefulset-6505/ss2 to complete update
Aug  4 11:22:16.192: INFO: Waiting for Pod statefulset-6505/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Aug  4 11:22:26.201: INFO: Deleting all statefulset in ns statefulset-6505
Aug  4 11:22:26.204: INFO: Scaling statefulset ss2 to 0
Aug  4 11:22:56.243: INFO: Waiting for statefulset status.replicas updated to 0
Aug  4 11:22:56.246: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:22:56.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6505" for this suite.

• [SLOW TEST:101.532 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":275,"completed":188,"skipped":3190,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:22:56.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-e393556a-9065-4624-aa03-98faec899d15 in namespace container-probe-9430
Aug  4 11:23:00.384: INFO: Started pod liveness-e393556a-9065-4624-aa03-98faec899d15 in namespace container-probe-9430
STEP: checking the pod's current state and verifying that restartCount is present
Aug  4 11:23:00.387: INFO: Initial restart count of pod liveness-e393556a-9065-4624-aa03-98faec899d15 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:27:01.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9430" for this suite.

• [SLOW TEST:245.752 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":275,"completed":189,"skipped":3243,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:27:02.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Aug  4 11:27:08.728: INFO: 10 pods remaining
Aug  4 11:27:08.728: INFO: 10 pods have nil DeletionTimestamp
Aug  4 11:27:08.728: INFO: 
Aug  4 11:27:10.577: INFO: 3 pods remaining
Aug  4 11:27:10.577: INFO: 0 pods have nil DeletionTimestamp
Aug  4 11:27:10.577: INFO: 
Aug  4 11:27:11.659: INFO: 0 pods remaining
Aug  4 11:27:11.659: INFO: 0 pods have nil DeletionTimestamp
Aug  4 11:27:11.659: INFO: 
STEP: Gathering metrics
W0804 11:27:13.088936       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug  4 11:27:13.088: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:27:13.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6571" for this suite.

• [SLOW TEST:11.049 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":275,"completed":190,"skipped":3247,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:27:13.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:27:14.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8667" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":275,"completed":191,"skipped":3318,"failed":0}

------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:27:14.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug  4 11:27:15.273: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5e76ce0f-cb62-435d-b142-67055a49887a" in namespace "projected-967" to be "Succeeded or Failed"
Aug  4 11:27:15.330: INFO: Pod "downwardapi-volume-5e76ce0f-cb62-435d-b142-67055a49887a": Phase="Pending", Reason="", readiness=false. Elapsed: 56.686795ms
Aug  4 11:27:17.400: INFO: Pod "downwardapi-volume-5e76ce0f-cb62-435d-b142-67055a49887a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127204827s
Aug  4 11:27:19.469: INFO: Pod "downwardapi-volume-5e76ce0f-cb62-435d-b142-67055a49887a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.195581021s
Aug  4 11:27:21.473: INFO: Pod "downwardapi-volume-5e76ce0f-cb62-435d-b142-67055a49887a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.199684424s
STEP: Saw pod success
Aug  4 11:27:21.473: INFO: Pod "downwardapi-volume-5e76ce0f-cb62-435d-b142-67055a49887a" satisfied condition "Succeeded or Failed"
Aug  4 11:27:21.476: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-5e76ce0f-cb62-435d-b142-67055a49887a container client-container: 
STEP: delete the pod
Aug  4 11:27:21.529: INFO: Waiting for pod downwardapi-volume-5e76ce0f-cb62-435d-b142-67055a49887a to disappear
Aug  4 11:27:21.539: INFO: Pod downwardapi-volume-5e76ce0f-cb62-435d-b142-67055a49887a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:27:21.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-967" for this suite.

• [SLOW TEST:6.877 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":192,"skipped":3318,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:27:21.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's command
Aug  4 11:27:21.644: INFO: Waiting up to 5m0s for pod "var-expansion-1c271222-0c93-4a4d-b7dd-d2f047d8c876" in namespace "var-expansion-5254" to be "Succeeded or Failed"
Aug  4 11:27:21.647: INFO: Pod "var-expansion-1c271222-0c93-4a4d-b7dd-d2f047d8c876": Phase="Pending", Reason="", readiness=false. Elapsed: 3.372928ms
Aug  4 11:27:23.651: INFO: Pod "var-expansion-1c271222-0c93-4a4d-b7dd-d2f047d8c876": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007547027s
Aug  4 11:27:25.656: INFO: Pod "var-expansion-1c271222-0c93-4a4d-b7dd-d2f047d8c876": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011885647s
Aug  4 11:27:27.660: INFO: Pod "var-expansion-1c271222-0c93-4a4d-b7dd-d2f047d8c876": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016348449s
STEP: Saw pod success
Aug  4 11:27:27.660: INFO: Pod "var-expansion-1c271222-0c93-4a4d-b7dd-d2f047d8c876" satisfied condition "Succeeded or Failed"
Aug  4 11:27:27.663: INFO: Trying to get logs from node kali-worker2 pod var-expansion-1c271222-0c93-4a4d-b7dd-d2f047d8c876 container dapi-container: 
STEP: delete the pod
Aug  4 11:27:27.777: INFO: Waiting for pod var-expansion-1c271222-0c93-4a4d-b7dd-d2f047d8c876 to disappear
Aug  4 11:27:27.816: INFO: Pod var-expansion-1c271222-0c93-4a4d-b7dd-d2f047d8c876 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:27:27.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5254" for this suite.

• [SLOW TEST:6.276 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":275,"completed":193,"skipped":3331,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:27:27.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
Aug  4 11:27:27.908: INFO: namespace kubectl-5662
Aug  4 11:27:27.908: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5662'
Aug  4 11:27:31.988: INFO: stderr: ""
Aug  4 11:27:31.989: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Aug  4 11:27:32.992: INFO: Selector matched 1 pods for map[app:agnhost]
Aug  4 11:27:32.992: INFO: Found 0 / 1
Aug  4 11:27:34.122: INFO: Selector matched 1 pods for map[app:agnhost]
Aug  4 11:27:34.122: INFO: Found 0 / 1
Aug  4 11:27:35.098: INFO: Selector matched 1 pods for map[app:agnhost]
Aug  4 11:27:35.098: INFO: Found 0 / 1
Aug  4 11:27:35.993: INFO: Selector matched 1 pods for map[app:agnhost]
Aug  4 11:27:35.993: INFO: Found 0 / 1
Aug  4 11:27:37.038: INFO: Selector matched 1 pods for map[app:agnhost]
Aug  4 11:27:37.038: INFO: Found 1 / 1
Aug  4 11:27:37.038: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug  4 11:27:37.042: INFO: Selector matched 1 pods for map[app:agnhost]
Aug  4 11:27:37.042: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug  4 11:27:37.042: INFO: wait on agnhost-master startup in kubectl-5662 
Aug  4 11:27:37.042: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs agnhost-master-jqggg agnhost-master --namespace=kubectl-5662'
Aug  4 11:27:37.152: INFO: stderr: ""
Aug  4 11:27:37.152: INFO: stdout: "Paused\n"
STEP: exposing RC
Aug  4 11:27:37.152: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-5662'
Aug  4 11:27:37.304: INFO: stderr: ""
Aug  4 11:27:37.304: INFO: stdout: "service/rm2 exposed\n"
Aug  4 11:27:37.312: INFO: Service rm2 in namespace kubectl-5662 found.
STEP: exposing service
Aug  4 11:27:39.322: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-5662'
Aug  4 11:27:39.488: INFO: stderr: ""
Aug  4 11:27:39.488: INFO: stdout: "service/rm3 exposed\n"
Aug  4 11:27:39.547: INFO: Service rm3 in namespace kubectl-5662 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:27:41.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5662" for this suite.

• [SLOW TEST:13.737 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1119
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":275,"completed":194,"skipped":3379,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:27:41.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a Namespace
STEP: patching the Namespace
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:27:41.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-5748" for this suite.
STEP: Destroying namespace "nspatchtest-f2de7482-5a32-47b7-8806-0bb7250422e2-3983" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":275,"completed":195,"skipped":3400,"failed":0}

------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:27:41.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug  4 11:27:41.981: INFO: Waiting up to 5m0s for pod "pod-82200988-741b-46e7-8ef5-179e13cf963e" in namespace "emptydir-1088" to be "Succeeded or Failed"
Aug  4 11:27:42.003: INFO: Pod "pod-82200988-741b-46e7-8ef5-179e13cf963e": Phase="Pending", Reason="", readiness=false. Elapsed: 22.281157ms
Aug  4 11:27:44.213: INFO: Pod "pod-82200988-741b-46e7-8ef5-179e13cf963e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.2320708s
Aug  4 11:27:46.217: INFO: Pod "pod-82200988-741b-46e7-8ef5-179e13cf963e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.236831377s
STEP: Saw pod success
Aug  4 11:27:46.217: INFO: Pod "pod-82200988-741b-46e7-8ef5-179e13cf963e" satisfied condition "Succeeded or Failed"
Aug  4 11:27:46.221: INFO: Trying to get logs from node kali-worker2 pod pod-82200988-741b-46e7-8ef5-179e13cf963e container test-container: 
STEP: delete the pod
Aug  4 11:27:46.260: INFO: Waiting for pod pod-82200988-741b-46e7-8ef5-179e13cf963e to disappear
Aug  4 11:27:46.277: INFO: Pod pod-82200988-741b-46e7-8ef5-179e13cf963e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:27:46.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1088" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":196,"skipped":3400,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:27:46.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0804 11:27:47.799348       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug  4 11:27:47.799: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:27:47.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9101" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":275,"completed":197,"skipped":3406,"failed":0}
SSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:27:47.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug  4 11:27:48.593: INFO: (0) /api/v1/nodes/kali-worker:10250/proxy/logs/: <pre>alternatives.log  containers/</pre>
[proxied kubelet log-directory listing ("alternatives.log", "containers/") repeated for each proxy request; the remainder of this test's output and the header of the following [sig-storage] ConfigMap "optional updates should be reflected in volume" test were truncated in the captured log]
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-08073ec8-485e-41b6-ab41-275197c3037c
STEP: Creating configMap with name cm-test-opt-upd-cc372155-2225-4dd8-8e57-b1a9b8b523fc
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-08073ec8-485e-41b6-ab41-275197c3037c
STEP: Updating configmap cm-test-opt-upd-cc372155-2225-4dd8-8e57-b1a9b8b523fc
STEP: Creating configMap with name cm-test-opt-create-86b3136e-c098-49bb-8b80-57538dfb1bab
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:27:57.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7537" for this suite.

• [SLOW TEST:8.640 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":199,"skipped":3423,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:27:57.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-d3e3f01e-ccbd-4b18-90da-45cc40176f34
STEP: Creating a pod to test consume configMaps
Aug  4 11:27:58.016: INFO: Waiting up to 5m0s for pod "pod-configmaps-5040d389-2baa-41ca-becf-200665e1bb29" in namespace "configmap-2388" to be "Succeeded or Failed"
Aug  4 11:27:58.020: INFO: Pod "pod-configmaps-5040d389-2baa-41ca-becf-200665e1bb29": Phase="Pending", Reason="", readiness=false. Elapsed: 4.6446ms
Aug  4 11:28:00.202: INFO: Pod "pod-configmaps-5040d389-2baa-41ca-becf-200665e1bb29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.185971167s
Aug  4 11:28:02.206: INFO: Pod "pod-configmaps-5040d389-2baa-41ca-becf-200665e1bb29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.189954756s
STEP: Saw pod success
Aug  4 11:28:02.206: INFO: Pod "pod-configmaps-5040d389-2baa-41ca-becf-200665e1bb29" satisfied condition "Succeeded or Failed"
Aug  4 11:28:02.208: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-5040d389-2baa-41ca-becf-200665e1bb29 container configmap-volume-test: 
STEP: delete the pod
Aug  4 11:28:02.297: INFO: Waiting for pod pod-configmaps-5040d389-2baa-41ca-becf-200665e1bb29 to disappear
Aug  4 11:28:02.308: INFO: Pod pod-configmaps-5040d389-2baa-41ca-becf-200665e1bb29 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:28:02.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2388" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":200,"skipped":3437,"failed":0}
SSSSSSSSS
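
The "mappings and Item mode set" case maps individual ConfigMap keys to chosen paths and per-file modes inside the volume. A rough sketch of just the volume source, with illustrative key, path, and mode values:

package main

import (
	corev1 "k8s.io/api/core/v1"
)

func configMapVolumeWithItemMode() corev1.Volume {
	mode := int32(0400) // illustrative per-item mode; overrides defaultMode for this file
	return corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"}, // illustrative
				Items: []corev1.KeyToPath{{
					Key:  "data-2",         // key inside the ConfigMap
					Path: "path/to/data-2", // file created under the mount point
					Mode: &mode,
				}},
			},
		},
	}
}
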
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:28:02.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-8549
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-8549
STEP: Creating statefulset with conflicting port in namespace statefulset-8549
STEP: Waiting until pod test-pod will start running in namespace statefulset-8549
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-8549
Aug  4 11:28:08.724: INFO: Observed stateful pod in namespace: statefulset-8549, name: ss-0, uid: fdb97616-ff6b-4b71-a852-8258be09ceab, status phase: Failed. Waiting for statefulset controller to delete.
Aug  4 11:28:08.746: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-8549
STEP: Removing pod with conflicting port in namespace statefulset-8549
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-8549 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Aug  4 11:28:13.641: INFO: Deleting all statefulset in ns statefulset-8549
Aug  4 11:28:13.644: INFO: Scaling statefulset ss to 0
Aug  4 11:28:23.659: INFO: Waiting for statefulset status.replicas updated to 0
Aug  4 11:28:23.661: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:28:23.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8549" for this suite.

• [SLOW TEST:21.374 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":275,"completed":201,"skipped":3446,"failed":0}
SSSSSSSSSS
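
What the test relies on is the StatefulSet controller's identity guarantee: when ss-0 ends up Failed (here because of the deliberate host-port conflict), the controller deletes it and recreates a pod with the same ordinal. A minimal StatefulSet of the kind involved, sketched with client-go; the clientset c, labels, and image are assumptions, not values from the run:

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func createStatefulSet(ctx context.Context, c kubernetes.Interface, ns string) error {
	replicas := int32(1)
	labels := map[string]string{"app": "ss-demo"} // illustrative selector labels
	ss := &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss"},
		Spec: appsv1.StatefulSetSpec{
			ServiceName: "test", // headless governing service created beforehand
			Replicas:    &replicas,
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "nginx", Image: "nginx"}},
				},
			},
		},
	}
	_, err := c.AppsV1().StatefulSets(ns).Create(ctx, ss, metav1.CreateOptions{})
	return err
}
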
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:28:23.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug  4 11:28:28.366: INFO: Successfully updated pod "pod-update-cdd7223b-ec36-48aa-915b-a37720b3c39d"
STEP: verifying the updated pod is in kubernetes
Aug  4 11:28:28.411: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:28:28.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1433" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":275,"completed":202,"skipped":3456,"failed":0}
SSSS
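
Updating a running pod, as this test does, is the usual get-modify-update cycle; only a few fields (metadata such as labels and annotations, container images, and little else) can change after creation. A sketch assuming a clientset c; the label being changed is illustrative:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func relabelPod(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if pod.Labels == nil {
		pod.Labels = map[string]string{}
	}
	pod.Labels["time"] = "updated" // illustrative mutable change
	_, err = c.CoreV1().Pods(ns).Update(ctx, pod, metav1.UpdateOptions{})
	return err
}
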
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:28:28.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug  4 11:28:30.440: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug  4 11:28:32.531: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732137310, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732137310, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732137310, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732137310, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug  4 11:28:34.625: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732137310, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732137310, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732137310, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732137310, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug  4 11:28:37.683: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug  4 11:28:37.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:28:39.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2612" for this suite.
STEP: Destroying namespace "webhook-2612-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.131 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":275,"completed":203,"skipped":3460,"failed":0}
SSSSSSSSS
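
Behind the scenes the test registers a ValidatingWebhookConfiguration whose rules cover the custom resource's group/version/resource, so the API server consults the webhook service on CREATE, UPDATE, and DELETE. A hedged sketch; the group, resource, service reference, and configuration name are illustrative, not the ones used by the suite:

package main

import (
	"context"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func registerCRWebhook(ctx context.Context, c kubernetes.Interface, caBundle []byte) error {
	path := "/custom-resource"
	failurePolicy := admissionregistrationv1.Fail
	sideEffects := admissionregistrationv1.SideEffectClassNone
	cfg := &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-crd-webhook"}, // illustrative name
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "deny-crd.example.com",
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-markers",  // namespace of the webhook Service (illustrative)
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
				CABundle: caBundle, // cert the API server uses to trust the webhook
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{
					admissionregistrationv1.Create,
					admissionregistrationv1.Update,
					admissionregistrationv1.Delete,
				},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{"stable.example.com"}, // illustrative CR group
					APIVersions: []string{"v1"},
					Resources:   []string{"testcrds"},
				},
			}},
			FailurePolicy:           &failurePolicy,
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
	_, err := c.AdmissionregistrationV1().ValidatingWebhookConfigurations().Create(ctx, cfg, metav1.CreateOptions{})
	return err
}
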
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:28:39.551: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug  4 11:28:39.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:28:43.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3248" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":275,"completed":204,"skipped":3469,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:28:43.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug  4 11:28:43.973: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Aug  4 11:28:44.038: INFO: Number of nodes with available pods: 0
Aug  4 11:28:44.038: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Aug  4 11:28:44.133: INFO: Number of nodes with available pods: 0
Aug  4 11:28:44.133: INFO: Node kali-worker is running more than one daemon pod
Aug  4 11:28:45.137: INFO: Number of nodes with available pods: 0
Aug  4 11:28:45.138: INFO: Node kali-worker is running more than one daemon pod
Aug  4 11:28:46.244: INFO: Number of nodes with available pods: 0
Aug  4 11:28:46.244: INFO: Node kali-worker is running more than one daemon pod
Aug  4 11:28:47.137: INFO: Number of nodes with available pods: 1
Aug  4 11:28:47.137: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Aug  4 11:28:47.180: INFO: Number of nodes with available pods: 1
Aug  4 11:28:47.180: INFO: Number of running nodes: 0, number of available pods: 1
Aug  4 11:28:48.184: INFO: Number of nodes with available pods: 0
Aug  4 11:28:48.184: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Aug  4 11:28:48.226: INFO: Number of nodes with available pods: 0
Aug  4 11:28:48.226: INFO: Node kali-worker is running more than one daemon pod
Aug  4 11:28:49.416: INFO: Number of nodes with available pods: 0
Aug  4 11:28:49.416: INFO: Node kali-worker is running more than one daemon pod
Aug  4 11:28:50.230: INFO: Number of nodes with available pods: 0
Aug  4 11:28:50.230: INFO: Node kali-worker is running more than one daemon pod
Aug  4 11:28:51.230: INFO: Number of nodes with available pods: 0
Aug  4 11:28:51.230: INFO: Node kali-worker is running more than one daemon pod
Aug  4 11:28:52.279: INFO: Number of nodes with available pods: 0
Aug  4 11:28:52.279: INFO: Node kali-worker is running more than one daemon pod
Aug  4 11:28:53.230: INFO: Number of nodes with available pods: 0
Aug  4 11:28:53.230: INFO: Node kali-worker is running more than one daemon pod
Aug  4 11:28:54.230: INFO: Number of nodes with available pods: 0
Aug  4 11:28:54.230: INFO: Node kali-worker is running more than one daemon pod
Aug  4 11:28:55.229: INFO: Number of nodes with available pods: 1
Aug  4 11:28:55.229: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-788, will wait for the garbage collector to delete the pods
Aug  4 11:28:55.296: INFO: Deleting DaemonSet.extensions daemon-set took: 8.187879ms
Aug  4 11:28:55.596: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.298265ms
Aug  4 11:29:03.500: INFO: Number of nodes with available pods: 0
Aug  4 11:29:03.500: INFO: Number of running nodes: 0, number of available pods: 0
Aug  4 11:29:03.503: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-788/daemonsets","resourceVersion":"6683058"},"items":null}

Aug  4 11:29:03.505: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-788/pods","resourceVersion":"6683058"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:29:03.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-788" for this suite.

• [SLOW TEST:19.719 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":275,"completed":205,"skipped":3483,"failed":0}
S
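
The "complex daemon" flow is a DaemonSet whose pod template carries a nodeSelector, so daemon pods only appear on nodes carrying the matching label (blue), disappear when the label changes (green), and roll out under a RollingUpdate strategy after the selector is updated. Roughly, with illustrative labels and image and a clientset c assumed:

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func createSelectiveDaemonSet(ctx context.Context, c kubernetes.Interface, ns string) error {
	labels := map[string]string{"daemon": "demo"} // illustrative pod labels
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// Only nodes labelled color=blue run a daemon pod; relabelling a
					// node to green unschedules it again.
					NodeSelector: map[string]string{"color": "blue"},
					Containers: []corev1.Container{{
						Name:    "app",
						Image:   "busybox",
						Command: []string{"sh", "-c", "sleep 3600"},
					}},
				},
			},
		},
	}
	_, err := c.AppsV1().DaemonSets(ns).Create(ctx, ds, metav1.CreateOptions{})
	return err
}
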
------------------------------
[sig-network] Services 
  should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:29:03.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching services
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:29:03.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-27" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":275,"completed":206,"skipped":3484,"failed":0}
SSSSSSSSSSSSSS
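
The "fetching services" step amounts to listing services across every namespace and looking for the one the test created. In client-go that is a single call (clientset c assumed):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func listAllServices(ctx context.Context, c kubernetes.Interface) error {
	// metav1.NamespaceAll ("") lists across all namespaces.
	svcs, err := c.CoreV1().Services(metav1.NamespaceAll).List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, s := range svcs.Items {
		fmt.Printf("%s/%s\n", s.Namespace, s.Name)
	}
	return nil
}
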
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:29:03.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-7e93b36b-2a1d-47a0-9bc1-66f3c93502f6
STEP: Creating a pod to test consume secrets
Aug  4 11:29:03.863: INFO: Waiting up to 5m0s for pod "pod-secrets-5502bde7-ef91-476c-932a-835aae4b4853" in namespace "secrets-2149" to be "Succeeded or Failed"
Aug  4 11:29:03.880: INFO: Pod "pod-secrets-5502bde7-ef91-476c-932a-835aae4b4853": Phase="Pending", Reason="", readiness=false. Elapsed: 16.461919ms
Aug  4 11:29:05.884: INFO: Pod "pod-secrets-5502bde7-ef91-476c-932a-835aae4b4853": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021033942s
Aug  4 11:29:07.889: INFO: Pod "pod-secrets-5502bde7-ef91-476c-932a-835aae4b4853": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025427583s
STEP: Saw pod success
Aug  4 11:29:07.889: INFO: Pod "pod-secrets-5502bde7-ef91-476c-932a-835aae4b4853" satisfied condition "Succeeded or Failed"
Aug  4 11:29:07.892: INFO: Trying to get logs from node kali-worker pod pod-secrets-5502bde7-ef91-476c-932a-835aae4b4853 container secret-volume-test: 
STEP: delete the pod
Aug  4 11:29:07.966: INFO: Waiting for pod pod-secrets-5502bde7-ef91-476c-932a-835aae4b4853 to disappear
Aug  4 11:29:07.970: INFO: Pod pod-secrets-5502bde7-ef91-476c-932a-835aae4b4853 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:29:07.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2149" for this suite.
STEP: Destroying namespace "secret-namespace-4038" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":275,"completed":207,"skipped":3498,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
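
The point of this test is that a secret volume reference is resolved only in the pod's own namespace, so the identically named secret the test creates in the second namespace never leaks into the mount. The volume itself is just the following fragment (secret name illustrative):

package main

import (
	corev1 "k8s.io/api/core/v1"
)

func secretVolume() corev1.Volume {
	return corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: "secret-test", // looked up in the pod's namespace only
			},
		},
	}
}
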
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:29:07.989: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Aug  4 11:29:08.042: INFO: >>> kubeConfig: /root/.kube/config
Aug  4 11:29:09.980: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:29:20.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7430" for this suite.

• [SLOW TEST:12.668 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":275,"completed":208,"skipped":3523,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:29:20.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-465.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-465.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-465.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-465.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-465.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-465.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug  4 11:29:28.916: INFO: DNS probes using dns-465/dns-test-df2aaa29-f2c5-44c2-966f-b8c780624b5c succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:29:29.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-465" for this suite.

• [SLOW TEST:8.718 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":275,"completed":209,"skipped":3606,"failed":0}
SSSSSSSSSSSSSSS
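
Hostname records like dns-querier-2.dns-test-service-2.<namespace>.svc.cluster.local come from the combination of a headless service and a pod that sets hostname and subdomain. A sketch of those two objects; the selector, port, and image are illustrative assumptions:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func headlessServiceAndPod(ns string) (*corev1.Service, *corev1.Pod) {
	selector := map[string]string{"dns-test": "true"} // illustrative selector
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-2", Namespace: ns},
		Spec: corev1.ServiceSpec{
			ClusterIP: corev1.ClusterIPNone, // headless: per-pod DNS records instead of a VIP
			Selector:  selector,
			Ports:     []corev1.ServicePort{{Name: "http", Port: 80}},
		},
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-querier-2", Namespace: ns, Labels: selector},
		Spec: corev1.PodSpec{
			Hostname:   "dns-querier-2",       // becomes the leftmost DNS label
			Subdomain:  "dns-test-service-2",  // must match the headless service name
			Containers: []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause"}},
		},
	}
	return svc, pod
}
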
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:29:29.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating api versions
Aug  4 11:29:29.763: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config api-versions'
Aug  4 11:29:30.030: INFO: stderr: ""
Aug  4 11:29:30.030: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:29:30.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4582" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":275,"completed":210,"skipped":3621,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
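
kubectl api-versions is served by the discovery API; the same "is v1 available" check can be done programmatically (clientset c assumed):

package main

import (
	"k8s.io/client-go/kubernetes"
)

func hasCoreV1(c kubernetes.Interface) (bool, error) {
	groups, err := c.Discovery().ServerGroups()
	if err != nil {
		return false, err
	}
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			if v.GroupVersion == "v1" { // the core group/version kubectl prints last
				return true, nil
			}
		}
	}
	return false, nil
}
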
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:29:30.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug  4 11:29:30.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Aug  4 11:29:32.221: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7399 create -f -'
Aug  4 11:29:37.764: INFO: stderr: ""
Aug  4 11:29:37.764: INFO: stdout: "e2e-test-crd-publish-openapi-6501-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Aug  4 11:29:37.764: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7399 delete e2e-test-crd-publish-openapi-6501-crds test-foo'
Aug  4 11:29:37.913: INFO: stderr: ""
Aug  4 11:29:37.913: INFO: stdout: "e2e-test-crd-publish-openapi-6501-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Aug  4 11:29:37.913: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7399 apply -f -'
Aug  4 11:29:38.180: INFO: stderr: ""
Aug  4 11:29:38.180: INFO: stdout: "e2e-test-crd-publish-openapi-6501-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Aug  4 11:29:38.180: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7399 delete e2e-test-crd-publish-openapi-6501-crds test-foo'
Aug  4 11:29:38.328: INFO: stderr: ""
Aug  4 11:29:38.328: INFO: stdout: "e2e-test-crd-publish-openapi-6501-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Aug  4 11:29:38.328: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7399 create -f -'
Aug  4 11:29:38.565: INFO: rc: 1
Aug  4 11:29:38.566: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7399 apply -f -'
Aug  4 11:29:38.814: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Aug  4 11:29:38.814: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7399 create -f -'
Aug  4 11:29:39.055: INFO: rc: 1
Aug  4 11:29:39.055: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7399 apply -f -'
Aug  4 11:29:39.291: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Aug  4 11:29:39.291: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6501-crds'
Aug  4 11:29:39.523: INFO: stderr: ""
Aug  4 11:29:39.523: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6501-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Foo\n\n   status\t\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Aug  4 11:29:39.523: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6501-crds.metadata'
Aug  4 11:29:39.759: INFO: stderr: ""
Aug  4 11:29:39.759: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6501-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. 
If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. 
More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Aug  4 11:29:39.759: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6501-crds.spec'
Aug  4 11:29:40.005: INFO: stderr: ""
Aug  4 11:29:40.005: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6501-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Aug  4 11:29:40.005: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6501-crds.spec.bars'
Aug  4 11:29:40.262: INFO: stderr: ""
Aug  4 11:29:40.262: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6501-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Aug  4 11:29:40.262: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6501-crds.spec.bars2'
Aug  4 11:29:40.500: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:29:43.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7399" for this suite.

• [SLOW TEST:13.397 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":275,"completed":211,"skipped":3651,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
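
The published OpenAPI that kubectl validates and explains against comes from the CRD's structural openAPIV3Schema. A sketch of a CRD carrying a schema shaped like the Foo/Bar one described above, using the apiextensions clientset; the group, kind, and field names are abbreviated and illustrative:

package main

import (
	"context"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func createValidatedCRD(ctx context.Context, c apiextensionsclient.Interface) error {
	crd := &apiextensionsv1.CustomResourceDefinition{
		// CRD name must be <plural>.<group>; both illustrative here.
		ObjectMeta: metav1.ObjectMeta{Name: "foos.crd-publish-openapi-test-foo.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "crd-publish-openapi-test-foo.example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
						Type:        "object",
						Description: "Foo CRD for Testing",
						Properties: map[string]apiextensionsv1.JSONSchemaProps{
							"spec": {
								Type:        "object",
								Description: "Specification of Foo",
								Properties: map[string]apiextensionsv1.JSONSchemaProps{
									"bars": {
										Type:        "array",
										Description: "List of Bars and their specs.",
										Items: &apiextensionsv1.JSONSchemaPropsOrArray{
											Schema: &apiextensionsv1.JSONSchemaProps{
												Type:     "object",
												Required: []string{"name"}, // missing "name" is rejected client-side
												Properties: map[string]apiextensionsv1.JSONSchemaProps{
													"name": {Type: "string", Description: "Name of Bar."},
													"age":  {Type: "string", Description: "Age of Bar."},
												},
											},
										},
									},
								},
							},
						},
					},
				},
			}},
		},
	}
	_, err := c.ApiextensionsV1().CustomResourceDefinitions().Create(ctx, crd, metav1.CreateOptions{})
	return err
}
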
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:29:43.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:29:56.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2973" for this suite.

• [SLOW TEST:12.551 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":275,"completed":212,"skipped":3687,"failed":0}
SSSSSSSSSSSSSSSS
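
The quota in this test simply counts Services: status.used for "services" rises when the Service is created and drops again when it is deleted. Creating such a quota is a one-object sketch (clientset c, quota name, and limit are illustrative):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func createServiceQuota(ctx context.Context, c kubernetes.Interface, ns string) error {
	rq := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "quota-services"}, // illustrative name
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourceServices: resource.MustParse("1"), // at most one Service in the namespace
			},
		},
	}
	_, err := c.CoreV1().ResourceQuotas(ns).Create(ctx, rq, metav1.CreateOptions{})
	return err
}
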
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:29:56.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:30:12.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7472" for this suite.

• [SLOW TEST:16.949 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":275,"completed":213,"skipped":3703,"failed":0}
S
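
The best-effort variant adds a scope, so only pods with no resource requests or limits are charged against it, while a pod that does set requests and limits is ignored by this quota and caught by the unscoped one instead. A sketch of the scoped quota object (name and limit illustrative):

package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func bestEffortQuota() *corev1.ResourceQuota {
	return &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "quota-besteffort"}, // illustrative name
		Spec: corev1.ResourceQuotaSpec{
			Hard:   corev1.ResourceList{corev1.ResourcePods: resource.MustParse("5")},
			Scopes: []corev1.ResourceQuotaScope{corev1.ResourceQuotaScopeBestEffort}, // count BestEffort pods only
		},
	}
}
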
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:30:12.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug  4 11:30:13.035: INFO: Waiting up to 5m0s for pod "downwardapi-volume-569b372f-1ee4-47a8-9279-43245caf0591" in namespace "downward-api-3363" to be "Succeeded or Failed"
Aug  4 11:30:13.046: INFO: Pod "downwardapi-volume-569b372f-1ee4-47a8-9279-43245caf0591": Phase="Pending", Reason="", readiness=false. Elapsed: 10.474597ms
Aug  4 11:30:15.051: INFO: Pod "downwardapi-volume-569b372f-1ee4-47a8-9279-43245caf0591": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015434937s
Aug  4 11:30:17.055: INFO: Pod "downwardapi-volume-569b372f-1ee4-47a8-9279-43245caf0591": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019886991s
STEP: Saw pod success
Aug  4 11:30:17.055: INFO: Pod "downwardapi-volume-569b372f-1ee4-47a8-9279-43245caf0591" satisfied condition "Succeeded or Failed"
Aug  4 11:30:17.058: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-569b372f-1ee4-47a8-9279-43245caf0591 container client-container: 
STEP: delete the pod
Aug  4 11:30:17.108: INFO: Waiting for pod downwardapi-volume-569b372f-1ee4-47a8-9279-43245caf0591 to disappear
Aug  4 11:30:17.127: INFO: Pod downwardapi-volume-569b372f-1ee4-47a8-9279-43245caf0591 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:30:17.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3363" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":214,"skipped":3704,"failed":0}
SSS
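
defaultMode on a downward API volume sets the file mode of every projected file unless an individual item overrides it; the test then reads the mode back from inside the container. A sketch of such a volume; 0400 is a plausible value, the exact mode asserted by this run is not visible in the log:

package main

import (
	corev1 "k8s.io/api/core/v1"
)

func downwardAPIVolume() corev1.Volume {
	defaultMode := int32(0400) // illustrative defaultMode applied to every projected file
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				DefaultMode: &defaultMode,
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "podname", // file exposing the pod's own name
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
				}},
			},
		},
	}
}
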
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:30:17.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-31b2eca9-7c24-46d1-be08-7f0d5220bba5
STEP: Creating a pod to test consume configMaps
Aug  4 11:30:17.284: INFO: Waiting up to 5m0s for pod "pod-configmaps-ec0d6a73-5652-40bd-a6a5-271bd3426717" in namespace "configmap-6258" to be "Succeeded or Failed"
Aug  4 11:30:17.288: INFO: Pod "pod-configmaps-ec0d6a73-5652-40bd-a6a5-271bd3426717": Phase="Pending", Reason="", readiness=false. Elapsed: 3.601335ms
Aug  4 11:30:19.294: INFO: Pod "pod-configmaps-ec0d6a73-5652-40bd-a6a5-271bd3426717": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009773019s
Aug  4 11:30:21.298: INFO: Pod "pod-configmaps-ec0d6a73-5652-40bd-a6a5-271bd3426717": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01383721s
STEP: Saw pod success
Aug  4 11:30:21.298: INFO: Pod "pod-configmaps-ec0d6a73-5652-40bd-a6a5-271bd3426717" satisfied condition "Succeeded or Failed"
Aug  4 11:30:21.301: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-ec0d6a73-5652-40bd-a6a5-271bd3426717 container configmap-volume-test: 
STEP: delete the pod
Aug  4 11:30:21.468: INFO: Waiting for pod pod-configmaps-ec0d6a73-5652-40bd-a6a5-271bd3426717 to disappear
Aug  4 11:30:21.504: INFO: Pod pod-configmaps-ec0d6a73-5652-40bd-a6a5-271bd3426717 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:30:21.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6258" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":215,"skipped":3707,"failed":0}
SSSSSS
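
"As non-root" means the pod carries a pod-level security context that forces a non-zero UID while the container reads the ConfigMap volume. The relevant fragment, with an illustrative UID:

package main

import (
	corev1 "k8s.io/api/core/v1"
)

func nonRootSecurityContext() *corev1.PodSecurityContext {
	uid := int64(1000) // any non-zero UID
	nonRoot := true
	return &corev1.PodSecurityContext{
		RunAsUser:    &uid,
		RunAsNonRoot: &nonRoot, // kubelet refuses to start the container as UID 0
	}
}
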
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:30:21.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug  4 11:30:27.026: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:30:27.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9071" for this suite.

• [SLOW TEST:5.634 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":216,"skipped":3713,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:30:27.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug  4 11:30:27.267: INFO: Waiting up to 5m0s for pod "pod-0a23086e-1627-4806-b302-5555dcfeb081" in namespace "emptydir-3371" to be "Succeeded or Failed"
Aug  4 11:30:27.318: INFO: Pod "pod-0a23086e-1627-4806-b302-5555dcfeb081": Phase="Pending", Reason="", readiness=false. Elapsed: 50.789511ms
Aug  4 11:30:29.322: INFO: Pod "pod-0a23086e-1627-4806-b302-5555dcfeb081": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054962998s
Aug  4 11:30:31.326: INFO: Pod "pod-0a23086e-1627-4806-b302-5555dcfeb081": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.059346903s
STEP: Saw pod success
Aug  4 11:30:31.326: INFO: Pod "pod-0a23086e-1627-4806-b302-5555dcfeb081" satisfied condition "Succeeded or Failed"
Aug  4 11:30:31.330: INFO: Trying to get logs from node kali-worker pod pod-0a23086e-1627-4806-b302-5555dcfeb081 container test-container: 
STEP: delete the pod
Aug  4 11:30:31.379: INFO: Waiting for pod pod-0a23086e-1627-4806-b302-5555dcfeb081 to disappear
Aug  4 11:30:31.397: INFO: Pod pod-0a23086e-1627-4806-b302-5555dcfeb081 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:30:31.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3371" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":217,"skipped":3718,"failed":0}
S
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:30:31.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug  4 11:30:35.620: INFO: Waiting up to 5m0s for pod "client-envvars-e7c9185a-2889-4995-994c-0f0893e004b2" in namespace "pods-4069" to be "Succeeded or Failed"
Aug  4 11:30:35.698: INFO: Pod "client-envvars-e7c9185a-2889-4995-994c-0f0893e004b2": Phase="Pending", Reason="", readiness=false. Elapsed: 78.024507ms
Aug  4 11:30:37.701: INFO: Pod "client-envvars-e7c9185a-2889-4995-994c-0f0893e004b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081086631s
Aug  4 11:30:39.705: INFO: Pod "client-envvars-e7c9185a-2889-4995-994c-0f0893e004b2": Phase="Running", Reason="", readiness=true. Elapsed: 4.085324218s
Aug  4 11:30:41.709: INFO: Pod "client-envvars-e7c9185a-2889-4995-994c-0f0893e004b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.088964186s
STEP: Saw pod success
Aug  4 11:30:41.709: INFO: Pod "client-envvars-e7c9185a-2889-4995-994c-0f0893e004b2" satisfied condition "Succeeded or Failed"
Aug  4 11:30:41.712: INFO: Trying to get logs from node kali-worker2 pod client-envvars-e7c9185a-2889-4995-994c-0f0893e004b2 container env3cont: 
STEP: delete the pod
Aug  4 11:30:41.727: INFO: Waiting for pod client-envvars-e7c9185a-2889-4995-994c-0f0893e004b2 to disappear
Aug  4 11:30:41.745: INFO: Pod client-envvars-e7c9185a-2889-4995-994c-0f0893e004b2 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:30:41.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4069" for this suite.

• [SLOW TEST:10.325 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":275,"completed":218,"skipped":3719,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:30:41.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug  4 11:30:41.962: INFO: The status of Pod test-webserver-ae24f58d-81d8-47b6-a00b-ce94d829dc18 is Pending, waiting for it to be Running (with Ready = true)
Aug  4 11:30:43.979: INFO: The status of Pod test-webserver-ae24f58d-81d8-47b6-a00b-ce94d829dc18 is Pending, waiting for it to be Running (with Ready = true)
Aug  4 11:30:45.965: INFO: The status of Pod test-webserver-ae24f58d-81d8-47b6-a00b-ce94d829dc18 is Running (Ready = false)
Aug  4 11:30:47.966: INFO: The status of Pod test-webserver-ae24f58d-81d8-47b6-a00b-ce94d829dc18 is Running (Ready = false)
Aug  4 11:30:49.966: INFO: The status of Pod test-webserver-ae24f58d-81d8-47b6-a00b-ce94d829dc18 is Running (Ready = false)
Aug  4 11:30:51.966: INFO: The status of Pod test-webserver-ae24f58d-81d8-47b6-a00b-ce94d829dc18 is Running (Ready = false)
Aug  4 11:30:54.064: INFO: The status of Pod test-webserver-ae24f58d-81d8-47b6-a00b-ce94d829dc18 is Running (Ready = false)
Aug  4 11:30:55.966: INFO: The status of Pod test-webserver-ae24f58d-81d8-47b6-a00b-ce94d829dc18 is Running (Ready = false)
Aug  4 11:30:57.966: INFO: The status of Pod test-webserver-ae24f58d-81d8-47b6-a00b-ce94d829dc18 is Running (Ready = false)
Aug  4 11:30:59.966: INFO: The status of Pod test-webserver-ae24f58d-81d8-47b6-a00b-ce94d829dc18 is Running (Ready = false)
Aug  4 11:31:01.966: INFO: The status of Pod test-webserver-ae24f58d-81d8-47b6-a00b-ce94d829dc18 is Running (Ready = false)
Aug  4 11:31:03.965: INFO: The status of Pod test-webserver-ae24f58d-81d8-47b6-a00b-ce94d829dc18 is Running (Ready = false)
Aug  4 11:31:05.966: INFO: The status of Pod test-webserver-ae24f58d-81d8-47b6-a00b-ce94d829dc18 is Running (Ready = false)
Aug  4 11:31:07.965: INFO: The status of Pod test-webserver-ae24f58d-81d8-47b6-a00b-ce94d829dc18 is Running (Ready = true)
Aug  4 11:31:07.968: INFO: Container started at 2020-08-04 11:30:44 +0000 UTC, pod became ready at 2020-08-04 11:31:07 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:31:07.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-303" for this suite.

• [SLOW TEST:26.224 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":275,"completed":219,"skipped":3726,"failed":0}
SSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:31:07.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
Aug  4 11:31:08.049: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3559'
Aug  4 11:31:08.403: INFO: stderr: ""
Aug  4 11:31:08.403: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Aug  4 11:31:09.408: INFO: Selector matched 1 pods for map[app:agnhost]
Aug  4 11:31:09.408: INFO: Found 0 / 1
Aug  4 11:31:10.411: INFO: Selector matched 1 pods for map[app:agnhost]
Aug  4 11:31:10.411: INFO: Found 0 / 1
Aug  4 11:31:11.407: INFO: Selector matched 1 pods for map[app:agnhost]
Aug  4 11:31:11.407: INFO: Found 0 / 1
Aug  4 11:31:12.408: INFO: Selector matched 1 pods for map[app:agnhost]
Aug  4 11:31:12.408: INFO: Found 1 / 1
Aug  4 11:31:12.408: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Aug  4 11:31:12.411: INFO: Selector matched 1 pods for map[app:agnhost]
Aug  4 11:31:12.411: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug  4 11:31:12.411: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config patch pod agnhost-master-kpwff --namespace=kubectl-3559 -p {"metadata":{"annotations":{"x":"y"}}}'
Aug  4 11:31:12.549: INFO: stderr: ""
Aug  4 11:31:12.549: INFO: stdout: "pod/agnhost-master-kpwff patched\n"
STEP: checking annotations
Aug  4 11:31:12.557: INFO: Selector matched 1 pods for map[app:agnhost]
Aug  4 11:31:12.557: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:31:12.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3559" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":275,"completed":220,"skipped":3732,"failed":0}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:31:12.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating all guestbook components
Aug  4 11:31:12.625: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Aug  4 11:31:12.625: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9136'
Aug  4 11:31:12.904: INFO: stderr: ""
Aug  4 11:31:12.904: INFO: stdout: "service/agnhost-slave created\n"
Aug  4 11:31:12.905: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Aug  4 11:31:12.905: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9136'
Aug  4 11:31:13.250: INFO: stderr: ""
Aug  4 11:31:13.250: INFO: stdout: "service/agnhost-master created\n"
Aug  4 11:31:13.250: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Aug  4 11:31:13.250: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9136'
Aug  4 11:31:13.694: INFO: stderr: ""
Aug  4 11:31:13.694: INFO: stdout: "service/frontend created\n"
Aug  4 11:31:13.695: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Aug  4 11:31:13.695: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9136'
Aug  4 11:31:14.197: INFO: stderr: ""
Aug  4 11:31:14.197: INFO: stdout: "deployment.apps/frontend created\n"
Aug  4 11:31:14.198: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Aug  4 11:31:14.198: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9136'
Aug  4 11:31:14.476: INFO: stderr: ""
Aug  4 11:31:14.476: INFO: stdout: "deployment.apps/agnhost-master created\n"
Aug  4 11:31:14.476: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Aug  4 11:31:14.476: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9136'
Aug  4 11:31:14.790: INFO: stderr: ""
Aug  4 11:31:14.790: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Aug  4 11:31:14.790: INFO: Waiting for all frontend pods to be Running.
Aug  4 11:31:24.841: INFO: Waiting for frontend to serve content.
Aug  4 11:31:24.850: INFO: Trying to add a new entry to the guestbook.
Aug  4 11:31:24.860: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Aug  4 11:31:24.878: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9136'
Aug  4 11:31:25.064: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug  4 11:31:25.064: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Aug  4 11:31:25.065: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9136'
Aug  4 11:31:25.313: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug  4 11:31:25.313: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Aug  4 11:31:25.313: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9136'
Aug  4 11:31:25.476: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug  4 11:31:25.476: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug  4 11:31:25.476: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9136'
Aug  4 11:31:25.597: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug  4 11:31:25.597: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug  4 11:31:25.597: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9136'
Aug  4 11:31:25.708: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug  4 11:31:25.709: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Aug  4 11:31:25.709: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9136'
Aug  4 11:31:26.126: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug  4 11:31:26.126: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:31:26.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9136" for this suite.

• [SLOW TEST:14.025 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:310
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":275,"completed":221,"skipped":3741,"failed":0}
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:31:26.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-60a420b5-88b4-4d5c-91ed-08f613675822
STEP: Creating a pod to test consume configMaps
Aug  4 11:31:27.193: INFO: Waiting up to 5m0s for pod "pod-configmaps-406438e3-e98b-4ca9-bee3-aad458ccbd4a" in namespace "configmap-7445" to be "Succeeded or Failed"
Aug  4 11:31:27.220: INFO: Pod "pod-configmaps-406438e3-e98b-4ca9-bee3-aad458ccbd4a": Phase="Pending", Reason="", readiness=false. Elapsed: 26.347718ms
Aug  4 11:31:29.224: INFO: Pod "pod-configmaps-406438e3-e98b-4ca9-bee3-aad458ccbd4a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030147686s
Aug  4 11:31:31.370: INFO: Pod "pod-configmaps-406438e3-e98b-4ca9-bee3-aad458ccbd4a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.176191824s
Aug  4 11:31:33.376: INFO: Pod "pod-configmaps-406438e3-e98b-4ca9-bee3-aad458ccbd4a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.182587456s
STEP: Saw pod success
Aug  4 11:31:33.376: INFO: Pod "pod-configmaps-406438e3-e98b-4ca9-bee3-aad458ccbd4a" satisfied condition "Succeeded or Failed"
Aug  4 11:31:33.378: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-406438e3-e98b-4ca9-bee3-aad458ccbd4a container configmap-volume-test: 
STEP: delete the pod
Aug  4 11:31:33.633: INFO: Waiting for pod pod-configmaps-406438e3-e98b-4ca9-bee3-aad458ccbd4a to disappear
Aug  4 11:31:33.650: INFO: Pod pod-configmaps-406438e3-e98b-4ca9-bee3-aad458ccbd4a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:31:33.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7445" for this suite.

• [SLOW TEST:7.127 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":222,"skipped":3744,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:31:33.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:31:49.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5631" for this suite.

• [SLOW TEST:16.287 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":275,"completed":223,"skipped":3749,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:31:50.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-projected-all-test-volume-c0efa55e-adb6-46c5-9e35-ae1498a1844a
STEP: Creating secret with name secret-projected-all-test-volume-f8c231de-ed57-49ba-a683-1b7713f63937
STEP: Creating a pod to test Check all projections for projected volume plugin
Aug  4 11:31:50.259: INFO: Waiting up to 5m0s for pod "projected-volume-8aded990-c6a5-4167-99ef-41ce206236c0" in namespace "projected-6006" to be "Succeeded or Failed"
Aug  4 11:31:50.332: INFO: Pod "projected-volume-8aded990-c6a5-4167-99ef-41ce206236c0": Phase="Pending", Reason="", readiness=false. Elapsed: 73.191941ms
Aug  4 11:31:52.725: INFO: Pod "projected-volume-8aded990-c6a5-4167-99ef-41ce206236c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.466076345s
Aug  4 11:31:55.047: INFO: Pod "projected-volume-8aded990-c6a5-4167-99ef-41ce206236c0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.787685026s
Aug  4 11:31:57.050: INFO: Pod "projected-volume-8aded990-c6a5-4167-99ef-41ce206236c0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.791337161s
Aug  4 11:31:59.596: INFO: Pod "projected-volume-8aded990-c6a5-4167-99ef-41ce206236c0": Phase="Pending", Reason="", readiness=false. Elapsed: 9.336840026s
Aug  4 11:32:01.600: INFO: Pod "projected-volume-8aded990-c6a5-4167-99ef-41ce206236c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.340969297s
STEP: Saw pod success
Aug  4 11:32:01.600: INFO: Pod "projected-volume-8aded990-c6a5-4167-99ef-41ce206236c0" satisfied condition "Succeeded or Failed"
Aug  4 11:32:01.603: INFO: Trying to get logs from node kali-worker pod projected-volume-8aded990-c6a5-4167-99ef-41ce206236c0 container projected-all-volume-test: 
STEP: delete the pod
Aug  4 11:32:01.654: INFO: Waiting for pod projected-volume-8aded990-c6a5-4167-99ef-41ce206236c0 to disappear
Aug  4 11:32:01.663: INFO: Pod projected-volume-8aded990-c6a5-4167-99ef-41ce206236c0 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:32:01.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6006" for this suite.

• [SLOW TEST:11.670 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:32
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":275,"completed":224,"skipped":3762,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:32:01.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-f43b3796-18cf-4f7f-8cad-be7043dd0f83
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:32:07.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1866" for this suite.

• [SLOW TEST:6.131 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":225,"skipped":3789,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:32:07.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug  4 11:32:08.298: INFO: Waiting up to 5m0s for pod "downwardapi-volume-df3e415b-ca4e-4879-8553-6f07b930009a" in namespace "downward-api-6212" to be "Succeeded or Failed"
Aug  4 11:32:08.381: INFO: Pod "downwardapi-volume-df3e415b-ca4e-4879-8553-6f07b930009a": Phase="Pending", Reason="", readiness=false. Elapsed: 83.214833ms
Aug  4 11:32:10.456: INFO: Pod "downwardapi-volume-df3e415b-ca4e-4879-8553-6f07b930009a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.157804741s
Aug  4 11:32:12.460: INFO: Pod "downwardapi-volume-df3e415b-ca4e-4879-8553-6f07b930009a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.161599511s
Aug  4 11:32:14.504: INFO: Pod "downwardapi-volume-df3e415b-ca4e-4879-8553-6f07b930009a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.205442941s
STEP: Saw pod success
Aug  4 11:32:14.504: INFO: Pod "downwardapi-volume-df3e415b-ca4e-4879-8553-6f07b930009a" satisfied condition "Succeeded or Failed"
Aug  4 11:32:14.507: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-df3e415b-ca4e-4879-8553-6f07b930009a container client-container: 
STEP: delete the pod
Aug  4 11:32:14.653: INFO: Waiting for pod downwardapi-volume-df3e415b-ca4e-4879-8553-6f07b930009a to disappear
Aug  4 11:32:14.695: INFO: Pod downwardapi-volume-df3e415b-ca4e-4879-8553-6f07b930009a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:32:14.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6212" for this suite.

• [SLOW TEST:6.895 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":275,"completed":226,"skipped":3819,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:32:14.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Aug  4 11:32:14.891: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the sample API server.
Aug  4 11:32:15.699: INFO: new replicaset for deployment "sample-apiserver-deployment" is yet to be created
Aug  4 11:32:18.460: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732137536, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732137536, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732137536, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732137535, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug  4 11:32:20.465: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732137536, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732137536, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732137536, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732137535, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug  4 11:32:23.097: INFO: Waited 626.608518ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:32:23.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-3119" for this suite.

• [SLOW TEST:8.987 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":275,"completed":227,"skipped":3873,"failed":0}
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:32:23.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Aug  4 11:32:24.546: INFO: Waiting up to 5m0s for pod "downward-api-72f4c1f6-0f4f-4d4b-8a1a-ac71efaf8dde" in namespace "downward-api-7590" to be "Succeeded or Failed"
Aug  4 11:32:24.582: INFO: Pod "downward-api-72f4c1f6-0f4f-4d4b-8a1a-ac71efaf8dde": Phase="Pending", Reason="", readiness=false. Elapsed: 36.600927ms
Aug  4 11:32:26.586: INFO: Pod "downward-api-72f4c1f6-0f4f-4d4b-8a1a-ac71efaf8dde": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040595474s
Aug  4 11:32:28.591: INFO: Pod "downward-api-72f4c1f6-0f4f-4d4b-8a1a-ac71efaf8dde": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045629892s
STEP: Saw pod success
Aug  4 11:32:28.591: INFO: Pod "downward-api-72f4c1f6-0f4f-4d4b-8a1a-ac71efaf8dde" satisfied condition "Succeeded or Failed"
Aug  4 11:32:28.595: INFO: Trying to get logs from node kali-worker2 pod downward-api-72f4c1f6-0f4f-4d4b-8a1a-ac71efaf8dde container dapi-container: 
STEP: delete the pod
Aug  4 11:32:28.823: INFO: Waiting for pod downward-api-72f4c1f6-0f4f-4d4b-8a1a-ac71efaf8dde to disappear
Aug  4 11:32:28.832: INFO: Pod downward-api-72f4c1f6-0f4f-4d4b-8a1a-ac71efaf8dde no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:32:28.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7590" for this suite.

• [SLOW TEST:5.194 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":275,"completed":228,"skipped":3873,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:32:28.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug  4 11:32:29.184: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"c166ce40-d2ca-4cd1-a512-79883fe669e4", Controller:(*bool)(0xc004a759f2), BlockOwnerDeletion:(*bool)(0xc004a759f3)}}
Aug  4 11:32:29.202: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"1f8a8ed5-40c1-4ed5-bcef-7f2faad02480", Controller:(*bool)(0xc004aa4e0a), BlockOwnerDeletion:(*bool)(0xc004aa4e0b)}}
Aug  4 11:32:29.245: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"b999e0d6-47ac-475c-a0f1-e71821031c09", Controller:(*bool)(0xc004aa5002), BlockOwnerDeletion:(*bool)(0xc004aa5003)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:32:34.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7574" for this suite.

• [SLOW TEST:5.556 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":275,"completed":229,"skipped":3898,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:32:34.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Aug  4 11:32:34.512: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:32:42.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5120" for this suite.

• [SLOW TEST:8.039 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":275,"completed":230,"skipped":3918,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:32:42.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug  4 11:32:42.559: INFO: Waiting up to 5m0s for pod "downwardapi-volume-de619e05-b307-4eba-ba92-4b89bb27b7d2" in namespace "projected-7507" to be "Succeeded or Failed"
Aug  4 11:32:42.569: INFO: Pod "downwardapi-volume-de619e05-b307-4eba-ba92-4b89bb27b7d2": Phase="Pending", Reason="", readiness=false. Elapsed: 9.432298ms
Aug  4 11:32:44.573: INFO: Pod "downwardapi-volume-de619e05-b307-4eba-ba92-4b89bb27b7d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01376766s
Aug  4 11:32:46.577: INFO: Pod "downwardapi-volume-de619e05-b307-4eba-ba92-4b89bb27b7d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018163182s
STEP: Saw pod success
Aug  4 11:32:46.577: INFO: Pod "downwardapi-volume-de619e05-b307-4eba-ba92-4b89bb27b7d2" satisfied condition "Succeeded or Failed"
Aug  4 11:32:46.581: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-de619e05-b307-4eba-ba92-4b89bb27b7d2 container client-container: 
STEP: delete the pod
Aug  4 11:32:46.628: INFO: Waiting for pod downwardapi-volume-de619e05-b307-4eba-ba92-4b89bb27b7d2 to disappear
Aug  4 11:32:46.641: INFO: Pod downwardapi-volume-de619e05-b307-4eba-ba92-4b89bb27b7d2 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:32:46.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7507" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":231,"skipped":3927,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:32:46.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on tmpfs
Aug  4 11:32:46.709: INFO: Waiting up to 5m0s for pod "pod-584b49a4-4856-4156-b35b-b26e877b3497" in namespace "emptydir-4317" to be "Succeeded or Failed"
Aug  4 11:32:46.713: INFO: Pod "pod-584b49a4-4856-4156-b35b-b26e877b3497": Phase="Pending", Reason="", readiness=false. Elapsed: 3.547349ms
Aug  4 11:32:48.930: INFO: Pod "pod-584b49a4-4856-4156-b35b-b26e877b3497": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221079499s
Aug  4 11:32:50.934: INFO: Pod "pod-584b49a4-4856-4156-b35b-b26e877b3497": Phase="Running", Reason="", readiness=true. Elapsed: 4.225391479s
Aug  4 11:32:52.937: INFO: Pod "pod-584b49a4-4856-4156-b35b-b26e877b3497": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.228419991s
STEP: Saw pod success
Aug  4 11:32:52.938: INFO: Pod "pod-584b49a4-4856-4156-b35b-b26e877b3497" satisfied condition "Succeeded or Failed"
Aug  4 11:32:52.941: INFO: Trying to get logs from node kali-worker2 pod pod-584b49a4-4856-4156-b35b-b26e877b3497 container test-container: 
STEP: delete the pod
Aug  4 11:32:53.003: INFO: Waiting for pod pod-584b49a4-4856-4156-b35b-b26e877b3497 to disappear
Aug  4 11:32:53.026: INFO: Pod pod-584b49a4-4856-4156-b35b-b26e877b3497 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:32:53.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4317" for this suite.

• [SLOW TEST:6.424 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":232,"skipped":3929,"failed":0}
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:32:53.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6209.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-6209.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6209.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6209.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-6209.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6209.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug  4 11:32:59.257: INFO: DNS probes using dns-6209/dns-test-79ac8413-32ed-4a3a-91d9-3844e571678a succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:32:59.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6209" for this suite.

• [SLOW TEST:6.308 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":275,"completed":233,"skipped":3929,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:32:59.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug  4 11:32:59.572: INFO: Waiting up to 5m0s for pod "pod-d817a038-6bd2-41cf-9578-692a74a42f06" in namespace "emptydir-4586" to be "Succeeded or Failed"
Aug  4 11:32:59.636: INFO: Pod "pod-d817a038-6bd2-41cf-9578-692a74a42f06": Phase="Pending", Reason="", readiness=false. Elapsed: 64.029919ms
Aug  4 11:33:01.641: INFO: Pod "pod-d817a038-6bd2-41cf-9578-692a74a42f06": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068494776s
Aug  4 11:33:03.645: INFO: Pod "pod-d817a038-6bd2-41cf-9578-692a74a42f06": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07256315s
Aug  4 11:33:05.648: INFO: Pod "pod-d817a038-6bd2-41cf-9578-692a74a42f06": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.07637354s
STEP: Saw pod success
Aug  4 11:33:05.648: INFO: Pod "pod-d817a038-6bd2-41cf-9578-692a74a42f06" satisfied condition "Succeeded or Failed"
Aug  4 11:33:05.651: INFO: Trying to get logs from node kali-worker pod pod-d817a038-6bd2-41cf-9578-692a74a42f06 container test-container: 
STEP: delete the pod
Aug  4 11:33:05.728: INFO: Waiting for pod pod-d817a038-6bd2-41cf-9578-692a74a42f06 to disappear
Aug  4 11:33:05.731: INFO: Pod pod-d817a038-6bd2-41cf-9578-692a74a42f06 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:33:05.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4586" for this suite.

• [SLOW TEST:6.351 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":234,"skipped":3941,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:33:05.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
STEP: reading a file in the container
Aug  4 11:33:10.324: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4283 pod-service-account-8ebb847e-04f6-4c2f-aada-f027da047cec -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Aug  4 11:33:10.554: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4283 pod-service-account-8ebb847e-04f6-4c2f-aada-f027da047cec -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Aug  4 11:33:10.765: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4283 pod-service-account-8ebb847e-04f6-4c2f-aada-f027da047cec -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:33:11.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4283" for this suite.

• [SLOW TEST:5.297 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":275,"completed":235,"skipped":3977,"failed":0}
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:33:11.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-6401
[It] should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating statefulset ss in namespace statefulset-6401
Aug  4 11:33:11.302: INFO: Found 0 stateful pods, waiting for 1
Aug  4 11:33:21.306: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Aug  4 11:33:21.332: INFO: Deleting all statefulset in ns statefulset-6401
Aug  4 11:33:21.356: INFO: Scaling statefulset ss to 0
Aug  4 11:33:41.528: INFO: Waiting for statefulset status.replicas updated to 0
Aug  4 11:33:41.531: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:33:41.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6401" for this suite.

• [SLOW TEST:30.521 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should have a working scale subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":275,"completed":236,"skipped":3978,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:33:41.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:33:54.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3503" for this suite.

• [SLOW TEST:13.288 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":275,"completed":237,"skipped":4006,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:33:54.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating cluster-info
Aug  4 11:33:54.915: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config cluster-info'
Aug  4 11:33:55.013: INFO: stderr: ""
Aug  4 11:33:55.013: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:35995\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:35995/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:33:55.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9753" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":275,"completed":238,"skipped":4041,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:33:55.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:34:00.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4479" for this suite.

• [SLOW TEST:5.272 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":275,"completed":239,"skipped":4066,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:34:00.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-4c84f55c-4ae8-4ce5-ab25-55ac361eac78
STEP: Creating secret with name s-test-opt-upd-9f714c79-4e7e-4bf7-8728-74abdb3bf21c
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-4c84f55c-4ae8-4ce5-ab25-55ac361eac78
STEP: Updating secret s-test-opt-upd-9f714c79-4e7e-4bf7-8728-74abdb3bf21c
STEP: Creating secret with name s-test-opt-create-e9352a83-97db-4a8d-bddb-70051bcabca3
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:34:08.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1574" for this suite.

• [SLOW TEST:8.346 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":240,"skipped":4099,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:34:08.644: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-8050
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug  4 11:34:08.772: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Aug  4 11:34:08.928: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug  4 11:34:10.932: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug  4 11:34:12.933: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug  4 11:34:15.366: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug  4 11:34:17.011: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug  4 11:34:18.931: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug  4 11:34:20.932: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug  4 11:34:22.931: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug  4 11:34:24.932: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug  4 11:34:26.939: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug  4 11:34:28.932: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug  4 11:34:30.932: INFO: The status of Pod netserver-0 is Running (Ready = true)
Aug  4 11:34:30.939: INFO: The status of Pod netserver-1 is Running (Ready = false)
Aug  4 11:34:32.943: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Aug  4 11:34:36.974: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.9:8080/dial?request=hostname&protocol=http&host=10.244.2.72&port=8080&tries=1'] Namespace:pod-network-test-8050 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug  4 11:34:36.974: INFO: >>> kubeConfig: /root/.kube/config
I0804 11:34:36.999448       7 log.go:172] (0xc0029e4370) (0xc001923e00) Create stream
I0804 11:34:36.999481       7 log.go:172] (0xc0029e4370) (0xc001923e00) Stream added, broadcasting: 1
I0804 11:34:37.001200       7 log.go:172] (0xc0029e4370) Reply frame received for 1
I0804 11:34:37.001230       7 log.go:172] (0xc0029e4370) (0xc000bc21e0) Create stream
I0804 11:34:37.001241       7 log.go:172] (0xc0029e4370) (0xc000bc21e0) Stream added, broadcasting: 3
I0804 11:34:37.002072       7 log.go:172] (0xc0029e4370) Reply frame received for 3
I0804 11:34:37.002106       7 log.go:172] (0xc0029e4370) (0xc001747180) Create stream
I0804 11:34:37.002126       7 log.go:172] (0xc0029e4370) (0xc001747180) Stream added, broadcasting: 5
I0804 11:34:37.003052       7 log.go:172] (0xc0029e4370) Reply frame received for 5
I0804 11:34:37.089915       7 log.go:172] (0xc0029e4370) Data frame received for 3
I0804 11:34:37.089944       7 log.go:172] (0xc000bc21e0) (3) Data frame handling
I0804 11:34:37.089957       7 log.go:172] (0xc000bc21e0) (3) Data frame sent
I0804 11:34:37.090777       7 log.go:172] (0xc0029e4370) Data frame received for 3
I0804 11:34:37.090808       7 log.go:172] (0xc000bc21e0) (3) Data frame handling
I0804 11:34:37.091019       7 log.go:172] (0xc0029e4370) Data frame received for 5
I0804 11:34:37.091087       7 log.go:172] (0xc001747180) (5) Data frame handling
I0804 11:34:37.093508       7 log.go:172] (0xc0029e4370) Data frame received for 1
I0804 11:34:37.093561       7 log.go:172] (0xc001923e00) (1) Data frame handling
I0804 11:34:37.093608       7 log.go:172] (0xc001923e00) (1) Data frame sent
I0804 11:34:37.094258       7 log.go:172] (0xc0029e4370) (0xc001923e00) Stream removed, broadcasting: 1
I0804 11:34:37.094358       7 log.go:172] (0xc0029e4370) (0xc001923e00) Stream removed, broadcasting: 1
I0804 11:34:37.094406       7 log.go:172] (0xc0029e4370) (0xc000bc21e0) Stream removed, broadcasting: 3
I0804 11:34:37.094449       7 log.go:172] (0xc0029e4370) (0xc001747180) Stream removed, broadcasting: 5
Aug  4 11:34:37.094: INFO: Waiting for responses: map[]
I0804 11:34:37.099065       7 log.go:172] (0xc0029e4370) Go away received
Aug  4 11:34:37.099: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.9:8080/dial?request=hostname&protocol=http&host=10.244.1.8&port=8080&tries=1'] Namespace:pod-network-test-8050 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug  4 11:34:37.099: INFO: >>> kubeConfig: /root/.kube/config
I0804 11:34:37.121248       7 log.go:172] (0xc0029e4bb0) (0xc000bc2a00) Create stream
I0804 11:34:37.121283       7 log.go:172] (0xc0029e4bb0) (0xc000bc2a00) Stream added, broadcasting: 1
I0804 11:34:37.123229       7 log.go:172] (0xc0029e4bb0) Reply frame received for 1
I0804 11:34:37.123258       7 log.go:172] (0xc0029e4bb0) (0xc001747220) Create stream
I0804 11:34:37.123269       7 log.go:172] (0xc0029e4bb0) (0xc001747220) Stream added, broadcasting: 3
I0804 11:34:37.124214       7 log.go:172] (0xc0029e4bb0) Reply frame received for 3
I0804 11:34:37.124258       7 log.go:172] (0xc0029e4bb0) (0xc00226c320) Create stream
I0804 11:34:37.124272       7 log.go:172] (0xc0029e4bb0) (0xc00226c320) Stream added, broadcasting: 5
I0804 11:34:37.125214       7 log.go:172] (0xc0029e4bb0) Reply frame received for 5
I0804 11:34:37.202583       7 log.go:172] (0xc0029e4bb0) Data frame received for 3
I0804 11:34:37.202606       7 log.go:172] (0xc001747220) (3) Data frame handling
I0804 11:34:37.202619       7 log.go:172] (0xc001747220) (3) Data frame sent
I0804 11:34:37.203408       7 log.go:172] (0xc0029e4bb0) Data frame received for 5
I0804 11:34:37.203440       7 log.go:172] (0xc00226c320) (5) Data frame handling
I0804 11:34:37.203492       7 log.go:172] (0xc0029e4bb0) Data frame received for 3
I0804 11:34:37.203531       7 log.go:172] (0xc001747220) (3) Data frame handling
I0804 11:34:37.204710       7 log.go:172] (0xc0029e4bb0) Data frame received for 1
I0804 11:34:37.204800       7 log.go:172] (0xc000bc2a00) (1) Data frame handling
I0804 11:34:37.204821       7 log.go:172] (0xc000bc2a00) (1) Data frame sent
I0804 11:34:37.204834       7 log.go:172] (0xc0029e4bb0) (0xc000bc2a00) Stream removed, broadcasting: 1
I0804 11:34:37.204924       7 log.go:172] (0xc0029e4bb0) (0xc000bc2a00) Stream removed, broadcasting: 1
I0804 11:34:37.204955       7 log.go:172] (0xc0029e4bb0) (0xc001747220) Stream removed, broadcasting: 3
I0804 11:34:37.204974       7 log.go:172] (0xc0029e4bb0) (0xc00226c320) Stream removed, broadcasting: 5
I0804 11:34:37.205014       7 log.go:172] (0xc0029e4bb0) Go away received
Aug  4 11:34:37.205: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:34:37.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8050" for this suite.

• [SLOW TEST:28.594 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":275,"completed":241,"skipped":4147,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:34:37.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:34:37.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8005" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":275,"completed":242,"skipped":4170,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:34:37.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug  4 11:34:37.442: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cc2f23a2-5f5d-46fd-b6c3-d89c14c82562" in namespace "downward-api-9599" to be "Succeeded or Failed"
Aug  4 11:34:37.446: INFO: Pod "downwardapi-volume-cc2f23a2-5f5d-46fd-b6c3-d89c14c82562": Phase="Pending", Reason="", readiness=false. Elapsed: 3.362677ms
Aug  4 11:34:39.500: INFO: Pod "downwardapi-volume-cc2f23a2-5f5d-46fd-b6c3-d89c14c82562": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058079465s
Aug  4 11:34:41.504: INFO: Pod "downwardapi-volume-cc2f23a2-5f5d-46fd-b6c3-d89c14c82562": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061285643s
STEP: Saw pod success
Aug  4 11:34:41.504: INFO: Pod "downwardapi-volume-cc2f23a2-5f5d-46fd-b6c3-d89c14c82562" satisfied condition "Succeeded or Failed"
Aug  4 11:34:41.506: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-cc2f23a2-5f5d-46fd-b6c3-d89c14c82562 container client-container: 
STEP: delete the pod
Aug  4 11:34:41.731: INFO: Waiting for pod downwardapi-volume-cc2f23a2-5f5d-46fd-b6c3-d89c14c82562 to disappear
Aug  4 11:34:41.745: INFO: Pod downwardapi-volume-cc2f23a2-5f5d-46fd-b6c3-d89c14c82562 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:34:41.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9599" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":243,"skipped":4219,"failed":0}
SSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:34:41.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:34:47.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-1031" for this suite.

• [SLOW TEST:6.281 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":275,"completed":244,"skipped":4225,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:34:48.037: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug  4 11:34:48.115: INFO: Waiting up to 5m0s for pod "busybox-user-65534-33657c52-e8b6-4845-9f9c-a556a6da52af" in namespace "security-context-test-9824" to be "Succeeded or Failed"
Aug  4 11:34:48.479: INFO: Pod "busybox-user-65534-33657c52-e8b6-4845-9f9c-a556a6da52af": Phase="Pending", Reason="", readiness=false. Elapsed: 364.25896ms
Aug  4 11:34:50.484: INFO: Pod "busybox-user-65534-33657c52-e8b6-4845-9f9c-a556a6da52af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.368687308s
Aug  4 11:34:52.609: INFO: Pod "busybox-user-65534-33657c52-e8b6-4845-9f9c-a556a6da52af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.494459665s
Aug  4 11:34:52.609: INFO: Pod "busybox-user-65534-33657c52-e8b6-4845-9f9c-a556a6da52af" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:34:52.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9824" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":245,"skipped":4235,"failed":0}
SSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:34:52.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Aug  4 11:35:03.991: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-864 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug  4 11:35:03.991: INFO: >>> kubeConfig: /root/.kube/config
I0804 11:35:04.027655       7 log.go:172] (0xc00180a6e0) (0xc00236d680) Create stream
I0804 11:35:04.027688       7 log.go:172] (0xc00180a6e0) (0xc00236d680) Stream added, broadcasting: 1
I0804 11:35:04.029894       7 log.go:172] (0xc00180a6e0) Reply frame received for 1
I0804 11:35:04.029951       7 log.go:172] (0xc00180a6e0) (0xc00236d720) Create stream
I0804 11:35:04.029974       7 log.go:172] (0xc00180a6e0) (0xc00236d720) Stream added, broadcasting: 3
I0804 11:35:04.031125       7 log.go:172] (0xc00180a6e0) Reply frame received for 3
I0804 11:35:04.031183       7 log.go:172] (0xc00180a6e0) (0xc00159ea00) Create stream
I0804 11:35:04.031203       7 log.go:172] (0xc00180a6e0) (0xc00159ea00) Stream added, broadcasting: 5
I0804 11:35:04.032338       7 log.go:172] (0xc00180a6e0) Reply frame received for 5
I0804 11:35:04.109195       7 log.go:172] (0xc00180a6e0) Data frame received for 5
I0804 11:35:04.109225       7 log.go:172] (0xc00159ea00) (5) Data frame handling
I0804 11:35:04.109280       7 log.go:172] (0xc00180a6e0) Data frame received for 3
I0804 11:35:04.109316       7 log.go:172] (0xc00236d720) (3) Data frame handling
I0804 11:35:04.109345       7 log.go:172] (0xc00236d720) (3) Data frame sent
I0804 11:35:04.109361       7 log.go:172] (0xc00180a6e0) Data frame received for 3
I0804 11:35:04.109372       7 log.go:172] (0xc00236d720) (3) Data frame handling
I0804 11:35:04.110764       7 log.go:172] (0xc00180a6e0) Data frame received for 1
I0804 11:35:04.110785       7 log.go:172] (0xc00236d680) (1) Data frame handling
I0804 11:35:04.110798       7 log.go:172] (0xc00236d680) (1) Data frame sent
I0804 11:35:04.110811       7 log.go:172] (0xc00180a6e0) (0xc00236d680) Stream removed, broadcasting: 1
I0804 11:35:04.110837       7 log.go:172] (0xc00180a6e0) Go away received
I0804 11:35:04.110966       7 log.go:172] (0xc00180a6e0) (0xc00236d680) Stream removed, broadcasting: 1
I0804 11:35:04.110998       7 log.go:172] (0xc00180a6e0) (0xc00236d720) Stream removed, broadcasting: 3
I0804 11:35:04.111013       7 log.go:172] (0xc00180a6e0) (0xc00159ea00) Stream removed, broadcasting: 5
Aug  4 11:35:04.111: INFO: Exec stderr: ""
Aug  4 11:35:04.111: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-864 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug  4 11:35:04.111: INFO: >>> kubeConfig: /root/.kube/config
I0804 11:35:04.137561       7 log.go:172] (0xc002d484d0) (0xc001190780) Create stream
I0804 11:35:04.137595       7 log.go:172] (0xc002d484d0) (0xc001190780) Stream added, broadcasting: 1
I0804 11:35:04.139537       7 log.go:172] (0xc002d484d0) Reply frame received for 1
I0804 11:35:04.139581       7 log.go:172] (0xc002d484d0) (0xc00236d7c0) Create stream
I0804 11:35:04.139599       7 log.go:172] (0xc002d484d0) (0xc00236d7c0) Stream added, broadcasting: 3
I0804 11:35:04.140510       7 log.go:172] (0xc002d484d0) Reply frame received for 3
I0804 11:35:04.140574       7 log.go:172] (0xc002d484d0) (0xc00236d860) Create stream
I0804 11:35:04.140592       7 log.go:172] (0xc002d484d0) (0xc00236d860) Stream added, broadcasting: 5
I0804 11:35:04.141486       7 log.go:172] (0xc002d484d0) Reply frame received for 5
I0804 11:35:04.200705       7 log.go:172] (0xc002d484d0) Data frame received for 3
I0804 11:35:04.200808       7 log.go:172] (0xc00236d7c0) (3) Data frame handling
I0804 11:35:04.200824       7 log.go:172] (0xc00236d7c0) (3) Data frame sent
I0804 11:35:04.200832       7 log.go:172] (0xc002d484d0) Data frame received for 3
I0804 11:35:04.200845       7 log.go:172] (0xc00236d7c0) (3) Data frame handling
I0804 11:35:04.200860       7 log.go:172] (0xc002d484d0) Data frame received for 5
I0804 11:35:04.200874       7 log.go:172] (0xc00236d860) (5) Data frame handling
I0804 11:35:04.203178       7 log.go:172] (0xc002d484d0) Data frame received for 1
I0804 11:35:04.203193       7 log.go:172] (0xc001190780) (1) Data frame handling
I0804 11:35:04.203201       7 log.go:172] (0xc001190780) (1) Data frame sent
I0804 11:35:04.203223       7 log.go:172] (0xc002d484d0) (0xc001190780) Stream removed, broadcasting: 1
I0804 11:35:04.203240       7 log.go:172] (0xc002d484d0) Go away received
I0804 11:35:04.203385       7 log.go:172] (0xc002d484d0) (0xc001190780) Stream removed, broadcasting: 1
I0804 11:35:04.203405       7 log.go:172] (0xc002d484d0) (0xc00236d7c0) Stream removed, broadcasting: 3
I0804 11:35:04.203417       7 log.go:172] (0xc002d484d0) (0xc00236d860) Stream removed, broadcasting: 5
Aug  4 11:35:04.203: INFO: Exec stderr: ""
Aug  4 11:35:04.203: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-864 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug  4 11:35:04.203: INFO: >>> kubeConfig: /root/.kube/config
I0804 11:35:04.236220       7 log.go:172] (0xc00180adc0) (0xc00236da40) Create stream
I0804 11:35:04.236265       7 log.go:172] (0xc00180adc0) (0xc00236da40) Stream added, broadcasting: 1
I0804 11:35:04.238779       7 log.go:172] (0xc00180adc0) Reply frame received for 1
I0804 11:35:04.238846       7 log.go:172] (0xc00180adc0) (0xc00159ebe0) Create stream
I0804 11:35:04.238869       7 log.go:172] (0xc00180adc0) (0xc00159ebe0) Stream added, broadcasting: 3
I0804 11:35:04.239776       7 log.go:172] (0xc00180adc0) Reply frame received for 3
I0804 11:35:04.239805       7 log.go:172] (0xc00180adc0) (0xc00236dae0) Create stream
I0804 11:35:04.239811       7 log.go:172] (0xc00180adc0) (0xc00236dae0) Stream added, broadcasting: 5
I0804 11:35:04.240842       7 log.go:172] (0xc00180adc0) Reply frame received for 5
I0804 11:35:04.298916       7 log.go:172] (0xc00180adc0) Data frame received for 5
I0804 11:35:04.298961       7 log.go:172] (0xc00236dae0) (5) Data frame handling
I0804 11:35:04.299002       7 log.go:172] (0xc00180adc0) Data frame received for 3
I0804 11:35:04.299061       7 log.go:172] (0xc00159ebe0) (3) Data frame handling
I0804 11:35:04.299095       7 log.go:172] (0xc00159ebe0) (3) Data frame sent
I0804 11:35:04.299115       7 log.go:172] (0xc00180adc0) Data frame received for 3
I0804 11:35:04.299133       7 log.go:172] (0xc00159ebe0) (3) Data frame handling
I0804 11:35:04.300991       7 log.go:172] (0xc00180adc0) Data frame received for 1
I0804 11:35:04.301017       7 log.go:172] (0xc00236da40) (1) Data frame handling
I0804 11:35:04.301030       7 log.go:172] (0xc00236da40) (1) Data frame sent
I0804 11:35:04.301051       7 log.go:172] (0xc00180adc0) (0xc00236da40) Stream removed, broadcasting: 1
I0804 11:35:04.301108       7 log.go:172] (0xc00180adc0) Go away received
I0804 11:35:04.301198       7 log.go:172] (0xc00180adc0) (0xc00236da40) Stream removed, broadcasting: 1
I0804 11:35:04.301242       7 log.go:172] (0xc00180adc0) (0xc00159ebe0) Stream removed, broadcasting: 3
I0804 11:35:04.301261       7 log.go:172] (0xc00180adc0) (0xc00236dae0) Stream removed, broadcasting: 5
Aug  4 11:35:04.301: INFO: Exec stderr: ""
Aug  4 11:35:04.301: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-864 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug  4 11:35:04.301: INFO: >>> kubeConfig: /root/.kube/config
I0804 11:35:04.329619       7 log.go:172] (0xc002d06a50) (0xc002926820) Create stream
I0804 11:35:04.329645       7 log.go:172] (0xc002d06a50) (0xc002926820) Stream added, broadcasting: 1
I0804 11:35:04.331511       7 log.go:172] (0xc002d06a50) Reply frame received for 1
I0804 11:35:04.331546       7 log.go:172] (0xc002d06a50) (0xc00236db80) Create stream
I0804 11:35:04.331558       7 log.go:172] (0xc002d06a50) (0xc00236db80) Stream added, broadcasting: 3
I0804 11:35:04.332655       7 log.go:172] (0xc002d06a50) Reply frame received for 3
I0804 11:35:04.332679       7 log.go:172] (0xc002d06a50) (0xc001190960) Create stream
I0804 11:35:04.332689       7 log.go:172] (0xc002d06a50) (0xc001190960) Stream added, broadcasting: 5
I0804 11:35:04.333928       7 log.go:172] (0xc002d06a50) Reply frame received for 5
I0804 11:35:04.397513       7 log.go:172] (0xc002d06a50) Data frame received for 5
I0804 11:35:04.397560       7 log.go:172] (0xc001190960) (5) Data frame handling
I0804 11:35:04.397919       7 log.go:172] (0xc002d06a50) Data frame received for 3
I0804 11:35:04.397947       7 log.go:172] (0xc00236db80) (3) Data frame handling
I0804 11:35:04.397975       7 log.go:172] (0xc00236db80) (3) Data frame sent
I0804 11:35:04.397989       7 log.go:172] (0xc002d06a50) Data frame received for 3
I0804 11:35:04.397999       7 log.go:172] (0xc00236db80) (3) Data frame handling
I0804 11:35:04.401230       7 log.go:172] (0xc002d06a50) Data frame received for 1
I0804 11:35:04.401283       7 log.go:172] (0xc002926820) (1) Data frame handling
I0804 11:35:04.401308       7 log.go:172] (0xc002926820) (1) Data frame sent
I0804 11:35:04.401325       7 log.go:172] (0xc002d06a50) (0xc002926820) Stream removed, broadcasting: 1
I0804 11:35:04.401344       7 log.go:172] (0xc002d06a50) Go away received
I0804 11:35:04.401495       7 log.go:172] (0xc002d06a50) (0xc002926820) Stream removed, broadcasting: 1
I0804 11:35:04.401532       7 log.go:172] (0xc002d06a50) (0xc00236db80) Stream removed, broadcasting: 3
I0804 11:35:04.401560       7 log.go:172] (0xc002d06a50) (0xc001190960) Stream removed, broadcasting: 5
Aug  4 11:35:04.401: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Aug  4 11:35:04.401: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-864 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug  4 11:35:04.401: INFO: >>> kubeConfig: /root/.kube/config
I0804 11:35:04.440597       7 log.go:172] (0xc0029e5760) (0xc00159f400) Create stream
I0804 11:35:04.440630       7 log.go:172] (0xc0029e5760) (0xc00159f400) Stream added, broadcasting: 1
I0804 11:35:04.442513       7 log.go:172] (0xc0029e5760) Reply frame received for 1
I0804 11:35:04.442562       7 log.go:172] (0xc0029e5760) (0xc0023b61e0) Create stream
I0804 11:35:04.442576       7 log.go:172] (0xc0029e5760) (0xc0023b61e0) Stream added, broadcasting: 3
I0804 11:35:04.443638       7 log.go:172] (0xc0029e5760) Reply frame received for 3
I0804 11:35:04.443696       7 log.go:172] (0xc0029e5760) (0xc0029268c0) Create stream
I0804 11:35:04.443717       7 log.go:172] (0xc0029e5760) (0xc0029268c0) Stream added, broadcasting: 5
I0804 11:35:04.444599       7 log.go:172] (0xc0029e5760) Reply frame received for 5
I0804 11:35:04.509935       7 log.go:172] (0xc0029e5760) Data frame received for 5
I0804 11:35:04.509968       7 log.go:172] (0xc0029268c0) (5) Data frame handling
I0804 11:35:04.510010       7 log.go:172] (0xc0029e5760) Data frame received for 3
I0804 11:35:04.510066       7 log.go:172] (0xc0023b61e0) (3) Data frame handling
I0804 11:35:04.510107       7 log.go:172] (0xc0023b61e0) (3) Data frame sent
I0804 11:35:04.510173       7 log.go:172] (0xc0029e5760) Data frame received for 3
I0804 11:35:04.510194       7 log.go:172] (0xc0023b61e0) (3) Data frame handling
I0804 11:35:04.511386       7 log.go:172] (0xc0029e5760) Data frame received for 1
I0804 11:35:04.511464       7 log.go:172] (0xc00159f400) (1) Data frame handling
I0804 11:35:04.511502       7 log.go:172] (0xc00159f400) (1) Data frame sent
I0804 11:35:04.511529       7 log.go:172] (0xc0029e5760) (0xc00159f400) Stream removed, broadcasting: 1
I0804 11:35:04.511556       7 log.go:172] (0xc0029e5760) Go away received
I0804 11:35:04.511738       7 log.go:172] (0xc0029e5760) (0xc00159f400) Stream removed, broadcasting: 1
I0804 11:35:04.511764       7 log.go:172] (0xc0029e5760) (0xc0023b61e0) Stream removed, broadcasting: 3
I0804 11:35:04.511783       7 log.go:172] (0xc0029e5760) (0xc0029268c0) Stream removed, broadcasting: 5
Aug  4 11:35:04.511: INFO: Exec stderr: ""
Aug  4 11:35:04.511: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-864 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug  4 11:35:04.511: INFO: >>> kubeConfig: /root/.kube/config
I0804 11:35:04.590424       7 log.go:172] (0xc00285b810) (0xc0023b6280) Create stream
I0804 11:35:04.590457       7 log.go:172] (0xc00285b810) (0xc0023b6280) Stream added, broadcasting: 1
I0804 11:35:04.592366       7 log.go:172] (0xc00285b810) Reply frame received for 1
I0804 11:35:04.592407       7 log.go:172] (0xc00285b810) (0xc001190e60) Create stream
I0804 11:35:04.592418       7 log.go:172] (0xc00285b810) (0xc001190e60) Stream added, broadcasting: 3
I0804 11:35:04.593492       7 log.go:172] (0xc00285b810) Reply frame received for 3
I0804 11:35:04.593528       7 log.go:172] (0xc00285b810) (0xc00159f9a0) Create stream
I0804 11:35:04.593555       7 log.go:172] (0xc00285b810) (0xc00159f9a0) Stream added, broadcasting: 5
I0804 11:35:04.594569       7 log.go:172] (0xc00285b810) Reply frame received for 5
I0804 11:35:04.665936       7 log.go:172] (0xc00285b810) Data frame received for 5
I0804 11:35:04.665984       7 log.go:172] (0xc00159f9a0) (5) Data frame handling
I0804 11:35:04.666006       7 log.go:172] (0xc00285b810) Data frame received for 3
I0804 11:35:04.666017       7 log.go:172] (0xc001190e60) (3) Data frame handling
I0804 11:35:04.666032       7 log.go:172] (0xc001190e60) (3) Data frame sent
I0804 11:35:04.666051       7 log.go:172] (0xc00285b810) Data frame received for 3
I0804 11:35:04.666074       7 log.go:172] (0xc001190e60) (3) Data frame handling
I0804 11:35:04.667422       7 log.go:172] (0xc00285b810) Data frame received for 1
I0804 11:35:04.667440       7 log.go:172] (0xc0023b6280) (1) Data frame handling
I0804 11:35:04.667461       7 log.go:172] (0xc0023b6280) (1) Data frame sent
I0804 11:35:04.667490       7 log.go:172] (0xc00285b810) (0xc0023b6280) Stream removed, broadcasting: 1
I0804 11:35:04.667504       7 log.go:172] (0xc00285b810) Go away received
I0804 11:35:04.667664       7 log.go:172] (0xc00285b810) (0xc0023b6280) Stream removed, broadcasting: 1
I0804 11:35:04.667689       7 log.go:172] (0xc00285b810) (0xc001190e60) Stream removed, broadcasting: 3
I0804 11:35:04.667730       7 log.go:172] (0xc00285b810) (0xc00159f9a0) Stream removed, broadcasting: 5
Aug  4 11:35:04.667: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Aug  4 11:35:04.667: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-864 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug  4 11:35:04.667: INFO: >>> kubeConfig: /root/.kube/config
I0804 11:35:04.730822       7 log.go:172] (0xc002d48e70) (0xc0011912c0) Create stream
I0804 11:35:04.730864       7 log.go:172] (0xc002d48e70) (0xc0011912c0) Stream added, broadcasting: 1
I0804 11:35:04.733611       7 log.go:172] (0xc002d48e70) Reply frame received for 1
I0804 11:35:04.733668       7 log.go:172] (0xc002d48e70) (0xc001191360) Create stream
I0804 11:35:04.733685       7 log.go:172] (0xc002d48e70) (0xc001191360) Stream added, broadcasting: 3
I0804 11:35:04.734865       7 log.go:172] (0xc002d48e70) Reply frame received for 3
I0804 11:35:04.734912       7 log.go:172] (0xc002d48e70) (0xc0023b6320) Create stream
I0804 11:35:04.734939       7 log.go:172] (0xc002d48e70) (0xc0023b6320) Stream added, broadcasting: 5
I0804 11:35:04.736130       7 log.go:172] (0xc002d48e70) Reply frame received for 5
I0804 11:35:04.814318       7 log.go:172] (0xc002d48e70) Data frame received for 3
I0804 11:35:04.814348       7 log.go:172] (0xc001191360) (3) Data frame handling
I0804 11:35:04.814367       7 log.go:172] (0xc001191360) (3) Data frame sent
I0804 11:35:04.814387       7 log.go:172] (0xc002d48e70) Data frame received for 3
I0804 11:35:04.814401       7 log.go:172] (0xc001191360) (3) Data frame handling
I0804 11:35:04.814418       7 log.go:172] (0xc002d48e70) Data frame received for 5
I0804 11:35:04.814430       7 log.go:172] (0xc0023b6320) (5) Data frame handling
I0804 11:35:04.815672       7 log.go:172] (0xc002d48e70) Data frame received for 1
I0804 11:35:04.815707       7 log.go:172] (0xc0011912c0) (1) Data frame handling
I0804 11:35:04.815739       7 log.go:172] (0xc0011912c0) (1) Data frame sent
I0804 11:35:04.815767       7 log.go:172] (0xc002d48e70) (0xc0011912c0) Stream removed, broadcasting: 1
I0804 11:35:04.815803       7 log.go:172] (0xc002d48e70) Go away received
I0804 11:35:04.815963       7 log.go:172] (0xc002d48e70) (0xc0011912c0) Stream removed, broadcasting: 1
I0804 11:35:04.816000       7 log.go:172] (0xc002d48e70) (0xc001191360) Stream removed, broadcasting: 3
I0804 11:35:04.816031       7 log.go:172] (0xc002d48e70) (0xc0023b6320) Stream removed, broadcasting: 5
Aug  4 11:35:04.816: INFO: Exec stderr: ""
Aug  4 11:35:04.816: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-864 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug  4 11:35:04.816: INFO: >>> kubeConfig: /root/.kube/config
I0804 11:35:04.846523       7 log.go:172] (0xc0029e5d90) (0xc0028b40a0) Create stream
I0804 11:35:04.846549       7 log.go:172] (0xc0029e5d90) (0xc0028b40a0) Stream added, broadcasting: 1
I0804 11:35:04.855981       7 log.go:172] (0xc0029e5d90) Reply frame received for 1
I0804 11:35:04.856034       7 log.go:172] (0xc0029e5d90) (0xc00236c000) Create stream
I0804 11:35:04.856051       7 log.go:172] (0xc0029e5d90) (0xc00236c000) Stream added, broadcasting: 3
I0804 11:35:04.857064       7 log.go:172] (0xc0029e5d90) Reply frame received for 3
I0804 11:35:04.857106       7 log.go:172] (0xc0029e5d90) (0xc00236c0a0) Create stream
I0804 11:35:04.857116       7 log.go:172] (0xc0029e5d90) (0xc00236c0a0) Stream added, broadcasting: 5
I0804 11:35:04.858108       7 log.go:172] (0xc0029e5d90) Reply frame received for 5
I0804 11:35:04.921435       7 log.go:172] (0xc0029e5d90) Data frame received for 3
I0804 11:35:04.921479       7 log.go:172] (0xc00236c000) (3) Data frame handling
I0804 11:35:04.921495       7 log.go:172] (0xc00236c000) (3) Data frame sent
I0804 11:35:04.921510       7 log.go:172] (0xc0029e5d90) Data frame received for 3
I0804 11:35:04.921525       7 log.go:172] (0xc00236c000) (3) Data frame handling
I0804 11:35:04.921566       7 log.go:172] (0xc0029e5d90) Data frame received for 5
I0804 11:35:04.921601       7 log.go:172] (0xc00236c0a0) (5) Data frame handling
I0804 11:35:04.923377       7 log.go:172] (0xc0029e5d90) Data frame received for 1
I0804 11:35:04.923421       7 log.go:172] (0xc0028b40a0) (1) Data frame handling
I0804 11:35:04.923470       7 log.go:172] (0xc0028b40a0) (1) Data frame sent
I0804 11:35:04.923495       7 log.go:172] (0xc0029e5d90) (0xc0028b40a0) Stream removed, broadcasting: 1
I0804 11:35:04.923518       7 log.go:172] (0xc0029e5d90) Go away received
I0804 11:35:04.923639       7 log.go:172] (0xc0029e5d90) (0xc0028b40a0) Stream removed, broadcasting: 1
I0804 11:35:04.923670       7 log.go:172] (0xc0029e5d90) (0xc00236c000) Stream removed, broadcasting: 3
I0804 11:35:04.923701       7 log.go:172] (0xc0029e5d90) (0xc00236c0a0) Stream removed, broadcasting: 5
Aug  4 11:35:04.923: INFO: Exec stderr: ""
Aug  4 11:35:04.923: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-864 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug  4 11:35:04.923: INFO: >>> kubeConfig: /root/.kube/config
I0804 11:35:04.959624       7 log.go:172] (0xc0029e4370) (0xc001314280) Create stream
I0804 11:35:04.959654       7 log.go:172] (0xc0029e4370) (0xc001314280) Stream added, broadcasting: 1
I0804 11:35:04.966066       7 log.go:172] (0xc0029e4370) Reply frame received for 1
I0804 11:35:04.966106       7 log.go:172] (0xc0029e4370) (0xc00236c1e0) Create stream
I0804 11:35:04.966120       7 log.go:172] (0xc0029e4370) (0xc00236c1e0) Stream added, broadcasting: 3
I0804 11:35:04.967066       7 log.go:172] (0xc0029e4370) Reply frame received for 3
I0804 11:35:04.967117       7 log.go:172] (0xc0029e4370) (0xc00236c280) Create stream
I0804 11:35:04.967135       7 log.go:172] (0xc0029e4370) (0xc00236c280) Stream added, broadcasting: 5
I0804 11:35:04.968589       7 log.go:172] (0xc0029e4370) Reply frame received for 5
I0804 11:35:05.041561       7 log.go:172] (0xc0029e4370) Data frame received for 5
I0804 11:35:05.041647       7 log.go:172] (0xc00236c280) (5) Data frame handling
I0804 11:35:05.041682       7 log.go:172] (0xc0029e4370) Data frame received for 3
I0804 11:35:05.041693       7 log.go:172] (0xc00236c1e0) (3) Data frame handling
I0804 11:35:05.041705       7 log.go:172] (0xc00236c1e0) (3) Data frame sent
I0804 11:35:05.041719       7 log.go:172] (0xc0029e4370) Data frame received for 3
I0804 11:35:05.041736       7 log.go:172] (0xc00236c1e0) (3) Data frame handling
I0804 11:35:05.045210       7 log.go:172] (0xc0029e4370) Data frame received for 1
I0804 11:35:05.045231       7 log.go:172] (0xc001314280) (1) Data frame handling
I0804 11:35:05.045242       7 log.go:172] (0xc001314280) (1) Data frame sent
I0804 11:35:05.045251       7 log.go:172] (0xc0029e4370) (0xc001314280) Stream removed, broadcasting: 1
I0804 11:35:05.045263       7 log.go:172] (0xc0029e4370) Go away received
I0804 11:35:05.045396       7 log.go:172] (0xc0029e4370) (0xc001314280) Stream removed, broadcasting: 1
I0804 11:35:05.045419       7 log.go:172] (0xc0029e4370) (0xc00236c1e0) Stream removed, broadcasting: 3
I0804 11:35:05.045436       7 log.go:172] (0xc0029e4370) (0xc00236c280) Stream removed, broadcasting: 5
Aug  4 11:35:05.045: INFO: Exec stderr: ""
Aug  4 11:35:05.045: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-864 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug  4 11:35:05.045: INFO: >>> kubeConfig: /root/.kube/config
I0804 11:35:05.073714       7 log.go:172] (0xc00180a160) (0xc00159ebe0) Create stream
I0804 11:35:05.073741       7 log.go:172] (0xc00180a160) (0xc00159ebe0) Stream added, broadcasting: 1
I0804 11:35:05.075360       7 log.go:172] (0xc00180a160) Reply frame received for 1
I0804 11:35:05.075402       7 log.go:172] (0xc00180a160) (0xc00236c3c0) Create stream
I0804 11:35:05.075422       7 log.go:172] (0xc00180a160) (0xc00236c3c0) Stream added, broadcasting: 3
I0804 11:35:05.076376       7 log.go:172] (0xc00180a160) Reply frame received for 3
I0804 11:35:05.076421       7 log.go:172] (0xc00180a160) (0xc00159ec80) Create stream
I0804 11:35:05.076436       7 log.go:172] (0xc00180a160) (0xc00159ec80) Stream added, broadcasting: 5
I0804 11:35:05.077631       7 log.go:172] (0xc00180a160) Reply frame received for 5
I0804 11:35:05.137235       7 log.go:172] (0xc00180a160) Data frame received for 3
I0804 11:35:05.137274       7 log.go:172] (0xc00236c3c0) (3) Data frame handling
I0804 11:35:05.137292       7 log.go:172] (0xc00236c3c0) (3) Data frame sent
I0804 11:35:05.137308       7 log.go:172] (0xc00180a160) Data frame received for 3
I0804 11:35:05.137321       7 log.go:172] (0xc00236c3c0) (3) Data frame handling
I0804 11:35:05.137336       7 log.go:172] (0xc00180a160) Data frame received for 5
I0804 11:35:05.137346       7 log.go:172] (0xc00159ec80) (5) Data frame handling
I0804 11:35:05.138874       7 log.go:172] (0xc00180a160) Data frame received for 1
I0804 11:35:05.138894       7 log.go:172] (0xc00159ebe0) (1) Data frame handling
I0804 11:35:05.138908       7 log.go:172] (0xc00159ebe0) (1) Data frame sent
I0804 11:35:05.138925       7 log.go:172] (0xc00180a160) (0xc00159ebe0) Stream removed, broadcasting: 1
I0804 11:35:05.138951       7 log.go:172] (0xc00180a160) Go away received
I0804 11:35:05.139083       7 log.go:172] (0xc00180a160) (0xc00159ebe0) Stream removed, broadcasting: 1
I0804 11:35:05.139114       7 log.go:172] (0xc00180a160) (0xc00236c3c0) Stream removed, broadcasting: 3
I0804 11:35:05.139145       7 log.go:172] (0xc00180a160) (0xc00159ec80) Stream removed, broadcasting: 5
Aug  4 11:35:05.139: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:35:05.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-864" for this suite.

• [SLOW TEST:12.531 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":246,"skipped":4242,"failed":0}
S
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:35:05.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-45c5ab10-1705-4264-ae4d-5c211ec59fc5 in namespace container-probe-2633
Aug  4 11:35:09.267: INFO: Started pod liveness-45c5ab10-1705-4264-ae4d-5c211ec59fc5 in namespace container-probe-2633
STEP: checking the pod's current state and verifying that restartCount is present
Aug  4 11:35:09.270: INFO: Initial restart count of pod liveness-45c5ab10-1705-4264-ae4d-5c211ec59fc5 is 0
Aug  4 11:35:27.563: INFO: Restart count of pod container-probe-2633/liveness-45c5ab10-1705-4264-ae4d-5c211ec59fc5 is now 1 (18.29306062s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:35:27.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2633" for this suite.

• [SLOW TEST:22.467 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":247,"skipped":4243,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:35:27.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug  4 11:35:28.634: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
Aug  4 11:35:30.696: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732137728, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732137728, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732137730, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732137728, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug  4 11:35:32.705: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732137728, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732137728, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732137730, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732137728, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug  4 11:35:35.809: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug  4 11:35:35.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7727-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:35:37.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9361" for this suite.
STEP: Destroying namespace "webhook-9361-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.507 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":275,"completed":248,"skipped":4245,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:35:37.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-219ce0e7-2ecd-42a0-a3ec-719287235c83
STEP: Creating a pod to test consume secrets
Aug  4 11:35:37.236: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-81b2b97f-8200-4b1d-b48a-84b924cd7aab" in namespace "projected-3448" to be "Succeeded or Failed"
Aug  4 11:35:37.250: INFO: Pod "pod-projected-secrets-81b2b97f-8200-4b1d-b48a-84b924cd7aab": Phase="Pending", Reason="", readiness=false. Elapsed: 13.684607ms
Aug  4 11:35:39.253: INFO: Pod "pod-projected-secrets-81b2b97f-8200-4b1d-b48a-84b924cd7aab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017516025s
Aug  4 11:35:41.484: INFO: Pod "pod-projected-secrets-81b2b97f-8200-4b1d-b48a-84b924cd7aab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.248200179s
STEP: Saw pod success
Aug  4 11:35:41.484: INFO: Pod "pod-projected-secrets-81b2b97f-8200-4b1d-b48a-84b924cd7aab" satisfied condition "Succeeded or Failed"
Aug  4 11:35:41.661: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-81b2b97f-8200-4b1d-b48a-84b924cd7aab container projected-secret-volume-test: 
STEP: delete the pod
Aug  4 11:35:41.961: INFO: Waiting for pod pod-projected-secrets-81b2b97f-8200-4b1d-b48a-84b924cd7aab to disappear
Aug  4 11:35:41.995: INFO: Pod pod-projected-secrets-81b2b97f-8200-4b1d-b48a-84b924cd7aab no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:35:41.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3448" for this suite.

• [SLOW TEST:5.227 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":249,"skipped":4259,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:35:42.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:35:48.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3186" for this suite.

• [SLOW TEST:6.209 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:41
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":275,"completed":250,"skipped":4266,"failed":0}
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:35:48.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-secret-gnst
STEP: Creating a pod to test atomic-volume-subpath
Aug  4 11:35:48.677: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-gnst" in namespace "subpath-3657" to be "Succeeded or Failed"
Aug  4 11:35:48.699: INFO: Pod "pod-subpath-test-secret-gnst": Phase="Pending", Reason="", readiness=false. Elapsed: 22.362172ms
Aug  4 11:35:50.702: INFO: Pod "pod-subpath-test-secret-gnst": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025601022s
Aug  4 11:35:52.707: INFO: Pod "pod-subpath-test-secret-gnst": Phase="Running", Reason="", readiness=true. Elapsed: 4.030185475s
Aug  4 11:35:54.712: INFO: Pod "pod-subpath-test-secret-gnst": Phase="Running", Reason="", readiness=true. Elapsed: 6.035519371s
Aug  4 11:35:56.723: INFO: Pod "pod-subpath-test-secret-gnst": Phase="Running", Reason="", readiness=true. Elapsed: 8.04638354s
Aug  4 11:35:58.728: INFO: Pod "pod-subpath-test-secret-gnst": Phase="Running", Reason="", readiness=true. Elapsed: 10.051079207s
Aug  4 11:36:00.754: INFO: Pod "pod-subpath-test-secret-gnst": Phase="Running", Reason="", readiness=true. Elapsed: 12.07728854s
Aug  4 11:36:02.759: INFO: Pod "pod-subpath-test-secret-gnst": Phase="Running", Reason="", readiness=true. Elapsed: 14.08199906s
Aug  4 11:36:04.778: INFO: Pod "pod-subpath-test-secret-gnst": Phase="Running", Reason="", readiness=true. Elapsed: 16.10166827s
Aug  4 11:36:06.783: INFO: Pod "pod-subpath-test-secret-gnst": Phase="Running", Reason="", readiness=true. Elapsed: 18.106590871s
Aug  4 11:36:08.840: INFO: Pod "pod-subpath-test-secret-gnst": Phase="Running", Reason="", readiness=true. Elapsed: 20.163587001s
Aug  4 11:36:10.845: INFO: Pod "pod-subpath-test-secret-gnst": Phase="Running", Reason="", readiness=true. Elapsed: 22.167895756s
Aug  4 11:36:12.849: INFO: Pod "pod-subpath-test-secret-gnst": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.172110965s
STEP: Saw pod success
Aug  4 11:36:12.849: INFO: Pod "pod-subpath-test-secret-gnst" satisfied condition "Succeeded or Failed"
Aug  4 11:36:12.852: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-secret-gnst container test-container-subpath-secret-gnst: 
STEP: delete the pod
Aug  4 11:36:12.879: INFO: Waiting for pod pod-subpath-test-secret-gnst to disappear
Aug  4 11:36:12.891: INFO: Pod pod-subpath-test-secret-gnst no longer exists
STEP: Deleting pod pod-subpath-test-secret-gnst
Aug  4 11:36:12.892: INFO: Deleting pod "pod-subpath-test-secret-gnst" in namespace "subpath-3657"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:36:12.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3657" for this suite.

• [SLOW TEST:24.397 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":275,"completed":251,"skipped":4266,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:36:12.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug  4 11:36:13.006: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:36:13.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3992" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":275,"completed":252,"skipped":4337,"failed":0}
SSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:36:13.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Aug  4 11:36:13.753: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:36:22.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1905" for this suite.

• [SLOW TEST:9.098 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":275,"completed":253,"skipped":4340,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:36:22.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug  4 11:36:22.867: INFO: Waiting up to 5m0s for pod "pod-92ab7916-610c-493e-9b0f-72a947369df2" in namespace "emptydir-2896" to be "Succeeded or Failed"
Aug  4 11:36:22.934: INFO: Pod "pod-92ab7916-610c-493e-9b0f-72a947369df2": Phase="Pending", Reason="", readiness=false. Elapsed: 66.962994ms
Aug  4 11:36:24.942: INFO: Pod "pod-92ab7916-610c-493e-9b0f-72a947369df2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075222614s
Aug  4 11:36:26.958: INFO: Pod "pod-92ab7916-610c-493e-9b0f-72a947369df2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.091086378s
STEP: Saw pod success
Aug  4 11:36:26.958: INFO: Pod "pod-92ab7916-610c-493e-9b0f-72a947369df2" satisfied condition "Succeeded or Failed"
Aug  4 11:36:26.960: INFO: Trying to get logs from node kali-worker pod pod-92ab7916-610c-493e-9b0f-72a947369df2 container test-container: 
STEP: delete the pod
Aug  4 11:36:26.997: INFO: Waiting for pod pod-92ab7916-610c-493e-9b0f-72a947369df2 to disappear
Aug  4 11:36:27.032: INFO: Pod pod-92ab7916-610c-493e-9b0f-72a947369df2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:36:27.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2896" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":254,"skipped":4349,"failed":0}
SSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:36:27.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Aug  4 11:36:27.357: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug  4 11:36:27.395: INFO: Waiting for terminating namespaces to be deleted...
Aug  4 11:36:27.400: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
Aug  4 11:36:27.410: INFO: kindnet-njbgt from kube-system started at 2020-07-10 10:28:30 +0000 UTC (1 container statuses recorded)
Aug  4 11:36:27.410: INFO: 	Container kindnet-cni ready: true, restart count 1
Aug  4 11:36:27.410: INFO: kube-proxy-qwsfx from kube-system started at 2020-07-10 10:28:31 +0000 UTC (1 container statuses recorded)
Aug  4 11:36:27.410: INFO: 	Container kube-proxy ready: true, restart count 0
Aug  4 11:36:27.410: INFO: busybox-scheduling-4f45051f-2b7c-4100-910f-964cd01dbeb5 from kubelet-test-3186 started at 2020-08-04 11:35:42 +0000 UTC (1 container statuses recorded)
Aug  4 11:36:27.410: INFO: 	Container busybox-scheduling-4f45051f-2b7c-4100-910f-964cd01dbeb5 ready: false, restart count 0
Aug  4 11:36:27.410: INFO: rally-824618b1-6cukkjuh-lb7rq from c-rally-824618b1-4lzsfcdd started at 2020-08-01 10:57:26 +0000 UTC (1 container statuses recorded)
Aug  4 11:36:27.410: INFO: 	Container rally-824618b1-6cukkjuh ready: true, restart count 3
Aug  4 11:36:27.410: INFO: rally-19e4df10-30wkw9yu-glqpf from c-rally-19e4df10-7fs771wk started at 2020-08-01 11:12:55 +0000 UTC (1 container statuses recorded)
Aug  4 11:36:27.410: INFO: 	Container rally-19e4df10-30wkw9yu ready: true, restart count 0
Aug  4 11:36:27.410: INFO: pod-init-00c169d8-fbdf-467e-8542-7ef5e49821f7 from init-container-1905 started at 2020-08-04 11:36:13 +0000 UTC (1 container statuses recorded)
Aug  4 11:36:27.410: INFO: 	Container run1 ready: false, restart count 0
Aug  4 11:36:27.410: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
Aug  4 11:36:27.422: INFO: kindnet-pk4xb from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded)
Aug  4 11:36:27.422: INFO: 	Container kindnet-cni ready: true, restart count 1
Aug  4 11:36:27.422: INFO: rally-19e4df10-30wkw9yu-qbmr7 from c-rally-19e4df10-7fs771wk started at 2020-08-01 11:12:55 +0000 UTC (1 container statuses recorded)
Aug  4 11:36:27.422: INFO: 	Container rally-19e4df10-30wkw9yu ready: true, restart count 0
Aug  4 11:36:27.422: INFO: kube-proxy-vk6jr from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded)
Aug  4 11:36:27.422: INFO: 	Container kube-proxy ready: true, restart count 0
Aug  4 11:36:27.422: INFO: rally-824618b1-6cukkjuh-m84l4 from c-rally-824618b1-4lzsfcdd started at 2020-08-01 10:57:24 +0000 UTC (1 container statuses recorded)
Aug  4 11:36:27.422: INFO: 	Container rally-824618b1-6cukkjuh ready: true, restart count 3
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: verifying the node has the label node kali-worker
STEP: verifying the node has the label node kali-worker2
Aug  4 11:36:27.564: INFO: Pod rally-19e4df10-30wkw9yu-glqpf requesting resource cpu=0m on Node kali-worker
Aug  4 11:36:27.564: INFO: Pod rally-19e4df10-30wkw9yu-qbmr7 requesting resource cpu=0m on Node kali-worker2
Aug  4 11:36:27.564: INFO: Pod rally-824618b1-6cukkjuh-lb7rq requesting resource cpu=0m on Node kali-worker
Aug  4 11:36:27.564: INFO: Pod rally-824618b1-6cukkjuh-m84l4 requesting resource cpu=0m on Node kali-worker2
Aug  4 11:36:27.564: INFO: Pod kindnet-njbgt requesting resource cpu=100m on Node kali-worker
Aug  4 11:36:27.564: INFO: Pod kindnet-pk4xb requesting resource cpu=100m on Node kali-worker2
Aug  4 11:36:27.564: INFO: Pod kube-proxy-qwsfx requesting resource cpu=0m on Node kali-worker
Aug  4 11:36:27.564: INFO: Pod kube-proxy-vk6jr requesting resource cpu=0m on Node kali-worker2
Aug  4 11:36:27.564: INFO: Pod busybox-scheduling-4f45051f-2b7c-4100-910f-964cd01dbeb5 requesting resource cpu=0m on Node kali-worker
STEP: Starting Pods to consume most of the cluster CPU.
Aug  4 11:36:27.564: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker2
Aug  4 11:36:27.572: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-7a5c1c8b-72db-4e58-923d-adfc43aef916.16280d91c8cd2ad8], Reason = [Scheduled], Message = [Successfully assigned sched-pred-635/filler-pod-7a5c1c8b-72db-4e58-923d-adfc43aef916 to kali-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-7a5c1c8b-72db-4e58-923d-adfc43aef916.16280d9269817752], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-7a5c1c8b-72db-4e58-923d-adfc43aef916.16280d92ca2dc297], Reason = [Created], Message = [Created container filler-pod-7a5c1c8b-72db-4e58-923d-adfc43aef916]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-7a5c1c8b-72db-4e58-923d-adfc43aef916.16280d92dc6fd35e], Reason = [Started], Message = [Started container filler-pod-7a5c1c8b-72db-4e58-923d-adfc43aef916]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-cdc2ab4a-152f-4494-b2e3-69aa2bac512b.16280d91c7d9a52e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-635/filler-pod-cdc2ab4a-152f-4494-b2e3-69aa2bac512b to kali-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-cdc2ab4a-152f-4494-b2e3-69aa2bac512b.16280d92170c184b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-cdc2ab4a-152f-4494-b2e3-69aa2bac512b.16280d92650084db], Reason = [Created], Message = [Created container filler-pod-cdc2ab4a-152f-4494-b2e3-69aa2bac512b]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-cdc2ab4a-152f-4494-b2e3-69aa2bac512b.16280d929cd67b26], Reason = [Started], Message = [Started container filler-pod-cdc2ab4a-152f-4494-b2e3-69aa2bac512b]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.16280d932ffb1b2f], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.16280d93317b5b17], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node kali-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node kali-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:36:34.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-635" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:7.785 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":275,"completed":255,"skipped":4355,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:36:34.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Aug  4 11:36:39.422: INFO: Successfully updated pod "annotationupdateb0b6afef-017a-471c-a57c-a3f9be56d30d"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:36:43.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9109" for this suite.

• [SLOW TEST:8.650 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":256,"skipped":4385,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:36:43.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod busybox-4bac4979-edff-4d95-97c6-b2d080617ea2 in namespace container-probe-1773
Aug  4 11:36:49.729: INFO: Started pod busybox-4bac4979-edff-4d95-97c6-b2d080617ea2 in namespace container-probe-1773
STEP: checking the pod's current state and verifying that restartCount is present
Aug  4 11:36:49.732: INFO: Initial restart count of pod busybox-4bac4979-edff-4d95-97c6-b2d080617ea2 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:40:51.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1773" for this suite.

• [SLOW TEST:247.939 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":257,"skipped":4397,"failed":0}
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:40:51.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-2d393957-4d5c-4a91-b7db-2fba344759a5
STEP: Creating a pod to test consume secrets
Aug  4 11:40:51.584: INFO: Waiting up to 5m0s for pod "pod-secrets-7c6969b2-a590-449c-937e-10c7018601d0" in namespace "secrets-4745" to be "Succeeded or Failed"
Aug  4 11:40:51.597: INFO: Pod "pod-secrets-7c6969b2-a590-449c-937e-10c7018601d0": Phase="Pending", Reason="", readiness=false. Elapsed: 12.375213ms
Aug  4 11:40:53.738: INFO: Pod "pod-secrets-7c6969b2-a590-449c-937e-10c7018601d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153873643s
Aug  4 11:40:55.750: INFO: Pod "pod-secrets-7c6969b2-a590-449c-937e-10c7018601d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.16538963s
STEP: Saw pod success
Aug  4 11:40:55.750: INFO: Pod "pod-secrets-7c6969b2-a590-449c-937e-10c7018601d0" satisfied condition "Succeeded or Failed"
Aug  4 11:40:55.752: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-7c6969b2-a590-449c-937e-10c7018601d0 container secret-env-test: 
STEP: delete the pod
Aug  4 11:40:55.833: INFO: Waiting for pod pod-secrets-7c6969b2-a590-449c-937e-10c7018601d0 to disappear
Aug  4 11:40:55.849: INFO: Pod pod-secrets-7c6969b2-a590-449c-937e-10c7018601d0 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:40:55.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4745" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":275,"completed":258,"skipped":4397,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:40:55.874: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug  4 11:40:56.168: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d04d2404-f8d9-4da8-a709-d413614cd3d8" in namespace "downward-api-8669" to be "Succeeded or Failed"
Aug  4 11:40:56.178: INFO: Pod "downwardapi-volume-d04d2404-f8d9-4da8-a709-d413614cd3d8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.186499ms
Aug  4 11:40:58.253: INFO: Pod "downwardapi-volume-d04d2404-f8d9-4da8-a709-d413614cd3d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085889993s
Aug  4 11:41:00.258: INFO: Pod "downwardapi-volume-d04d2404-f8d9-4da8-a709-d413614cd3d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.090228765s
STEP: Saw pod success
Aug  4 11:41:00.258: INFO: Pod "downwardapi-volume-d04d2404-f8d9-4da8-a709-d413614cd3d8" satisfied condition "Succeeded or Failed"
Aug  4 11:41:00.261: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-d04d2404-f8d9-4da8-a709-d413614cd3d8 container client-container: 
STEP: delete the pod
Aug  4 11:41:00.317: INFO: Waiting for pod downwardapi-volume-d04d2404-f8d9-4da8-a709-d413614cd3d8 to disappear
Aug  4 11:41:00.325: INFO: Pod downwardapi-volume-d04d2404-f8d9-4da8-a709-d413614cd3d8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:41:00.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8669" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":259,"skipped":4408,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:41:00.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:41:06.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-1530" for this suite.
STEP: Destroying namespace "nsdeletetest-3170" for this suite.
Aug  4 11:41:06.662: INFO: Namespace nsdeletetest-3170 was already deleted
STEP: Destroying namespace "nsdeletetest-7726" for this suite.

• [SLOW TEST:6.333 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":275,"completed":260,"skipped":4423,"failed":0}
S
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:41:06.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test hostPath mode
Aug  4 11:41:06.749: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-4348" to be "Succeeded or Failed"
Aug  4 11:41:06.753: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 3.497061ms
Aug  4 11:41:08.777: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028277293s
Aug  4 11:41:10.852: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103357954s
Aug  4 11:41:12.856: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.10710891s
STEP: Saw pod success
Aug  4 11:41:12.856: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Aug  4 11:41:12.860: INFO: Trying to get logs from node kali-worker pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Aug  4 11:41:12.962: INFO: Waiting for pod pod-host-path-test to disappear
Aug  4 11:41:12.975: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:41:12.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-4348" for this suite.

• [SLOW TEST:6.317 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":261,"skipped":4424,"failed":0}
SSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:41:12.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Aug  4 11:41:18.182: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:41:18.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-8423" for this suite.

• [SLOW TEST:5.491 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":275,"completed":262,"skipped":4431,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:41:18.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:41:30.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4993" for this suite.

• [SLOW TEST:12.010 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":275,"completed":263,"skipped":4473,"failed":0}
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:41:30.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-configmap-747f
STEP: Creating a pod to test atomic-volume-subpath
Aug  4 11:41:30.624: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-747f" in namespace "subpath-5882" to be "Succeeded or Failed"
Aug  4 11:41:30.647: INFO: Pod "pod-subpath-test-configmap-747f": Phase="Pending", Reason="", readiness=false. Elapsed: 23.109942ms
Aug  4 11:41:32.651: INFO: Pod "pod-subpath-test-configmap-747f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027114605s
Aug  4 11:41:34.655: INFO: Pod "pod-subpath-test-configmap-747f": Phase="Running", Reason="", readiness=true. Elapsed: 4.03147034s
Aug  4 11:41:36.659: INFO: Pod "pod-subpath-test-configmap-747f": Phase="Running", Reason="", readiness=true. Elapsed: 6.035418501s
Aug  4 11:41:38.664: INFO: Pod "pod-subpath-test-configmap-747f": Phase="Running", Reason="", readiness=true. Elapsed: 8.039767677s
Aug  4 11:41:40.668: INFO: Pod "pod-subpath-test-configmap-747f": Phase="Running", Reason="", readiness=true. Elapsed: 10.044323897s
Aug  4 11:41:42.673: INFO: Pod "pod-subpath-test-configmap-747f": Phase="Running", Reason="", readiness=true. Elapsed: 12.049012735s
Aug  4 11:41:44.690: INFO: Pod "pod-subpath-test-configmap-747f": Phase="Running", Reason="", readiness=true. Elapsed: 14.066669154s
Aug  4 11:41:46.695: INFO: Pod "pod-subpath-test-configmap-747f": Phase="Running", Reason="", readiness=true. Elapsed: 16.071060347s
Aug  4 11:41:48.703: INFO: Pod "pod-subpath-test-configmap-747f": Phase="Running", Reason="", readiness=true. Elapsed: 18.078901454s
Aug  4 11:41:50.715: INFO: Pod "pod-subpath-test-configmap-747f": Phase="Running", Reason="", readiness=true. Elapsed: 20.090785877s
Aug  4 11:41:52.718: INFO: Pod "pod-subpath-test-configmap-747f": Phase="Running", Reason="", readiness=true. Elapsed: 22.094711772s
Aug  4 11:41:54.829: INFO: Pod "pod-subpath-test-configmap-747f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.205502252s
STEP: Saw pod success
Aug  4 11:41:54.829: INFO: Pod "pod-subpath-test-configmap-747f" satisfied condition "Succeeded or Failed"
Aug  4 11:41:54.832: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-configmap-747f container test-container-subpath-configmap-747f: 
STEP: delete the pod
Aug  4 11:41:54.991: INFO: Waiting for pod pod-subpath-test-configmap-747f to disappear
Aug  4 11:41:54.995: INFO: Pod pod-subpath-test-configmap-747f no longer exists
STEP: Deleting pod pod-subpath-test-configmap-747f
Aug  4 11:41:54.995: INFO: Deleting pod "pod-subpath-test-configmap-747f" in namespace "subpath-5882"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:41:55.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5882" for this suite.

• [SLOW TEST:24.536 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":275,"completed":264,"skipped":4479,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:41:55.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug  4 11:41:56.188: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug  4 11:41:58.198: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732138116, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732138116, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732138116, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732138116, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug  4 11:42:01.302: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:42:01.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9859" for this suite.
STEP: Destroying namespace "webhook-9859-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.699 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":275,"completed":265,"skipped":4485,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:42:01.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug  4 11:42:03.381: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug  4 11:42:05.394: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732138123, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732138123, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732138123, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732138122, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug  4 11:42:07.427: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732138123, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732138123, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732138123, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732138122, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug  4 11:42:10.482: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:42:10.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7341" for this suite.
STEP: Destroying namespace "webhook-7341-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.138 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":275,"completed":266,"skipped":4513,"failed":0}
SSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:42:10.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug  4 11:42:11.044: INFO: Creating deployment "test-recreate-deployment"
Aug  4 11:42:11.048: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Aug  4 11:42:11.301: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Aug  4 11:42:13.308: INFO: Waiting deployment "test-recreate-deployment" to complete
Aug  4 11:42:13.311: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732138131, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732138131, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732138131, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732138131, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-74d98b5f7c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug  4 11:42:15.315: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Aug  4 11:42:15.324: INFO: Updating deployment test-recreate-deployment
Aug  4 11:42:15.324: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Aug  4 11:42:16.147: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-7347 /apis/apps/v1/namespaces/deployment-7347/deployments/test-recreate-deployment 4a95fea8-594e-4c7e-beb7-0add07f62c86 6687667 2 2020-08-04 11:42:11 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-08-04 11:42:15 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-08-04 11:42:16 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 
112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 110 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00453bb38  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-04 11:42:15 +0000 UTC,LastTransitionTime:2020-08-04 11:42:15 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2020-08-04 11:42:16 +0000 UTC,LastTransitionTime:2020-08-04 11:42:11 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Aug  4 11:42:16.436: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7  deployment-7347 /apis/apps/v1/namespaces/deployment-7347/replicasets/test-recreate-deployment-d5667d9c7 4bcfd166-bc2b-47c4-9d03-0db52d0a426d 6687662 1 2020-08-04 11:42:15 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 4a95fea8-594e-4c7e-beb7-0add07f62c86 0xc0043bc0a0 0xc0043bc0a1}] []  [{kube-controller-manager Update apps/v1 2020-08-04 11:42:15 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 97 57 53 102 101 97 56 45 53 57 52 101 45 52 99 55 101 45 98 101 98 55 45 48 97 100 100 48 55 102 54 50 99 56 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 
34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0043bc2d8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug  4 11:42:16.436: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Aug  4 11:42:16.436: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-74d98b5f7c  deployment-7347 /apis/apps/v1/namespaces/deployment-7347/replicasets/test-recreate-deployment-74d98b5f7c cd4de5ba-50dc-438a-818d-1ec4e9dd12e8 6687654 2 2020-08-04 11:42:11 +0000 UTC   map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 4a95fea8-594e-4c7e-beb7-0add07f62c86 0xc00453bf57 0xc00453bf58}] []  [{kube-controller-manager Update apps/v1 2020-08-04 11:42:15 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 97 57 53 102 101 97 56 45 53 57 52 101 45 52 99 55 101 45 98 101 98 55 45 48 97 100 100 48 55 102 54 50 99 56 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 
115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 74d98b5f7c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00453bfe8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug  4 11:42:16.639: INFO: Pod "test-recreate-deployment-d5667d9c7-8qt29" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-8qt29 test-recreate-deployment-d5667d9c7- deployment-7347 /api/v1/namespaces/deployment-7347/pods/test-recreate-deployment-d5667d9c7-8qt29 02613986-c7a6-442a-9084-d1b083c4288c 6687666 0 2020-08-04 11:42:15 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 4bcfd166-bc2b-47c4-9d03-0db52d0a426d 0xc0043bd2e0 0xc0043bd2e1}] []  [{kube-controller-manager Update v1 2020-08-04 11:42:15 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 98 99 102 100 49 54 54 45 98 99 50 98 45 52 55 99 52 45 57 100 48 51 45 48 100 98 53 50 100 48 97 52 50 54 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-04 11:42:16 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 
116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vx5sj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vx5sj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vx5sj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:
nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:42:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:42:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:42:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-04 11:42:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-04 11:42:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:42:16.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7347" for this suite.

• [SLOW TEST:6.065 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":267,"skipped":4520,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:42:16.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug  4 11:42:17.591: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-4e4ed45f-605a-499c-a8cd-78327f933d53" in namespace "security-context-test-7539" to be "Succeeded or Failed"
Aug  4 11:42:17.663: INFO: Pod "busybox-readonly-false-4e4ed45f-605a-499c-a8cd-78327f933d53": Phase="Pending", Reason="", readiness=false. Elapsed: 72.223969ms
Aug  4 11:42:19.666: INFO: Pod "busybox-readonly-false-4e4ed45f-605a-499c-a8cd-78327f933d53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07520504s
Aug  4 11:42:21.691: INFO: Pod "busybox-readonly-false-4e4ed45f-605a-499c-a8cd-78327f933d53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.100385062s
Aug  4 11:42:21.691: INFO: Pod "busybox-readonly-false-4e4ed45f-605a-499c-a8cd-78327f933d53" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:42:21.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7539" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":275,"completed":268,"skipped":4557,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:42:21.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
Aug  4 11:42:22.854: INFO: created pod pod-service-account-defaultsa
Aug  4 11:42:22.854: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Aug  4 11:42:22.864: INFO: created pod pod-service-account-mountsa
Aug  4 11:42:22.864: INFO: pod pod-service-account-mountsa service account token volume mount: true
Aug  4 11:42:22.934: INFO: created pod pod-service-account-nomountsa
Aug  4 11:42:22.934: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Aug  4 11:42:22.941: INFO: created pod pod-service-account-defaultsa-mountspec
Aug  4 11:42:22.941: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Aug  4 11:42:23.334: INFO: created pod pod-service-account-mountsa-mountspec
Aug  4 11:42:23.334: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Aug  4 11:42:23.385: INFO: created pod pod-service-account-nomountsa-mountspec
Aug  4 11:42:23.385: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Aug  4 11:42:23.427: INFO: created pod pod-service-account-defaultsa-nomountspec
Aug  4 11:42:23.427: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Aug  4 11:42:23.499: INFO: created pod pod-service-account-mountsa-nomountspec
Aug  4 11:42:23.499: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Aug  4 11:42:23.555: INFO: created pod pod-service-account-nomountsa-nomountspec
Aug  4 11:42:23.555: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:42:23.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-6781" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":275,"completed":269,"skipped":4587,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:42:23.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug  4 11:42:24.073: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Aug  4 11:42:24.102: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:42:24.119: INFO: Number of nodes with available pods: 0
Aug  4 11:42:24.119: INFO: Node kali-worker is running more than one daemon pod
Aug  4 11:42:25.459: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:42:25.462: INFO: Number of nodes with available pods: 0
Aug  4 11:42:25.462: INFO: Node kali-worker is running more than one daemon pod
Aug  4 11:42:26.124: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:42:26.127: INFO: Number of nodes with available pods: 0
Aug  4 11:42:26.127: INFO: Node kali-worker is running more than one daemon pod
Aug  4 11:42:27.405: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:42:27.451: INFO: Number of nodes with available pods: 0
Aug  4 11:42:27.451: INFO: Node kali-worker is running more than one daemon pod
Aug  4 11:42:28.548: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:42:28.752: INFO: Number of nodes with available pods: 0
Aug  4 11:42:28.752: INFO: Node kali-worker is running more than one daemon pod
Aug  4 11:42:29.177: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:42:29.181: INFO: Number of nodes with available pods: 0
Aug  4 11:42:29.181: INFO: Node kali-worker is running more than one daemon pod
Aug  4 11:42:30.178: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:42:30.181: INFO: Number of nodes with available pods: 0
Aug  4 11:42:30.181: INFO: Node kali-worker is running more than one daemon pod
Aug  4 11:42:31.926: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:42:33.177: INFO: Number of nodes with available pods: 0
Aug  4 11:42:33.177: INFO: Node kali-worker is running more than one daemon pod
Aug  4 11:42:34.238: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:42:34.308: INFO: Number of nodes with available pods: 0
Aug  4 11:42:34.308: INFO: Node kali-worker is running more than one daemon pod
Aug  4 11:42:35.345: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:42:35.609: INFO: Number of nodes with available pods: 0
Aug  4 11:42:35.609: INFO: Node kali-worker is running more than one daemon pod
Aug  4 11:42:36.477: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:42:36.747: INFO: Number of nodes with available pods: 2
Aug  4 11:42:36.747: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Aug  4 11:42:37.741: INFO: Wrong image for pod: daemon-set-5dj7d. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug  4 11:42:37.741: INFO: Wrong image for pod: daemon-set-t22gc. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug  4 11:42:37.763: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:42:38.769: INFO: Wrong image for pod: daemon-set-5dj7d. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug  4 11:42:38.769: INFO: Wrong image for pod: daemon-set-t22gc. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug  4 11:42:38.772: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:42:39.769: INFO: Wrong image for pod: daemon-set-5dj7d. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug  4 11:42:39.769: INFO: Wrong image for pod: daemon-set-t22gc. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug  4 11:42:39.774: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:42:40.768: INFO: Wrong image for pod: daemon-set-5dj7d. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug  4 11:42:40.768: INFO: Wrong image for pod: daemon-set-t22gc. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug  4 11:42:40.768: INFO: Pod daemon-set-t22gc is not available
Aug  4 11:42:40.773: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:42:41.842: INFO: Wrong image for pod: daemon-set-5dj7d. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug  4 11:42:41.842: INFO: Wrong image for pod: daemon-set-t22gc. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug  4 11:42:41.842: INFO: Pod daemon-set-t22gc is not available
Aug  4 11:42:41.847: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:42:42.768: INFO: Wrong image for pod: daemon-set-5dj7d. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug  4 11:42:42.768: INFO: Wrong image for pod: daemon-set-t22gc. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug  4 11:42:42.768: INFO: Pod daemon-set-t22gc is not available
Aug  4 11:42:42.771: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:42:43.768: INFO: Wrong image for pod: daemon-set-5dj7d. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug  4 11:42:43.768: INFO: Pod daemon-set-q6tkh is not available
Aug  4 11:42:43.772: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:42:44.768: INFO: Wrong image for pod: daemon-set-5dj7d. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug  4 11:42:44.768: INFO: Pod daemon-set-q6tkh is not available
Aug  4 11:42:44.772: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:42:45.807: INFO: Wrong image for pod: daemon-set-5dj7d. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug  4 11:42:45.807: INFO: Pod daemon-set-q6tkh is not available
Aug  4 11:42:45.811: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:42:46.767: INFO: Wrong image for pod: daemon-set-5dj7d. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug  4 11:42:46.771: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:42:48.017: INFO: Wrong image for pod: daemon-set-5dj7d. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug  4 11:42:48.017: INFO: Pod daemon-set-5dj7d is not available
Aug  4 11:42:48.075: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:42:48.768: INFO: Wrong image for pod: daemon-set-5dj7d. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug  4 11:42:48.768: INFO: Pod daemon-set-5dj7d is not available
Aug  4 11:42:48.772: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:42:49.768: INFO: Wrong image for pod: daemon-set-5dj7d. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug  4 11:42:49.768: INFO: Pod daemon-set-5dj7d is not available
Aug  4 11:42:49.772: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:42:50.768: INFO: Wrong image for pod: daemon-set-5dj7d. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug  4 11:42:50.769: INFO: Pod daemon-set-5dj7d is not available
Aug  4 11:42:50.773: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:42:51.842: INFO: Wrong image for pod: daemon-set-5dj7d. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug  4 11:42:51.842: INFO: Pod daemon-set-5dj7d is not available
Aug  4 11:42:51.847: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:42:52.769: INFO: Wrong image for pod: daemon-set-5dj7d. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug  4 11:42:52.769: INFO: Pod daemon-set-5dj7d is not available
Aug  4 11:42:52.773: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:42:53.768: INFO: Pod daemon-set-l92gj is not available
Aug  4 11:42:53.773: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Aug  4 11:42:53.777: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:42:53.781: INFO: Number of nodes with available pods: 1
Aug  4 11:42:53.781: INFO: Node kali-worker is running more than one daemon pod
Aug  4 11:42:54.786: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:42:54.790: INFO: Number of nodes with available pods: 1
Aug  4 11:42:54.790: INFO: Node kali-worker is running more than one daemon pod
Aug  4 11:42:55.787: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:42:55.790: INFO: Number of nodes with available pods: 1
Aug  4 11:42:55.790: INFO: Node kali-worker is running more than one daemon pod
Aug  4 11:42:56.814: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  4 11:42:56.817: INFO: Number of nodes with available pods: 2
Aug  4 11:42:56.817: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4068, will wait for the garbage collector to delete the pods
Aug  4 11:42:56.887: INFO: Deleting DaemonSet.extensions daemon-set took: 6.138627ms
Aug  4 11:42:57.187: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.211271ms
Aug  4 11:43:03.490: INFO: Number of nodes with available pods: 0
Aug  4 11:43:03.490: INFO: Number of running nodes: 0, number of available pods: 0
Aug  4 11:43:03.494: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4068/daemonsets","resourceVersion":"6688037"},"items":null}

Aug  4 11:43:03.505: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4068/pods","resourceVersion":"6688038"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:43:03.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4068" for this suite.

• [SLOW TEST:39.811 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":275,"completed":270,"skipped":4641,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:43:03.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-a85f75e9-5d0e-47e8-9152-f52351d20290 in namespace container-probe-8790
Aug  4 11:43:07.687: INFO: Started pod liveness-a85f75e9-5d0e-47e8-9152-f52351d20290 in namespace container-probe-8790
STEP: checking the pod's current state and verifying that restartCount is present
Aug  4 11:43:07.690: INFO: Initial restart count of pod liveness-a85f75e9-5d0e-47e8-9152-f52351d20290 is 0
Aug  4 11:43:26.087: INFO: Restart count of pod container-probe-8790/liveness-a85f75e9-5d0e-47e8-9152-f52351d20290 is now 1 (18.396964292s elapsed)
Aug  4 11:43:44.126: INFO: Restart count of pod container-probe-8790/liveness-a85f75e9-5d0e-47e8-9152-f52351d20290 is now 2 (36.435939273s elapsed)
Aug  4 11:44:04.168: INFO: Restart count of pod container-probe-8790/liveness-a85f75e9-5d0e-47e8-9152-f52351d20290 is now 3 (56.477800874s elapsed)
Aug  4 11:44:26.311: INFO: Restart count of pod container-probe-8790/liveness-a85f75e9-5d0e-47e8-9152-f52351d20290 is now 4 (1m18.621046346s elapsed)
Aug  4 11:45:36.525: INFO: Restart count of pod container-probe-8790/liveness-a85f75e9-5d0e-47e8-9152-f52351d20290 is now 5 (2m28.834611343s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:45:36.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8790" for this suite.

• [SLOW TEST:153.073 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":275,"completed":271,"skipped":4653,"failed":0}
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:45:36.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-1959
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
Aug  4 11:45:37.004: INFO: Found 0 stateful pods, waiting for 3
Aug  4 11:45:47.009: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug  4 11:45:47.009: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug  4 11:45:47.009: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug  4 11:45:57.010: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug  4 11:45:57.010: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug  4 11:45:57.010: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Aug  4 11:45:57.019: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1959 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug  4 11:46:02.132: INFO: stderr: "I0804 11:46:02.010136    3155 log.go:172] (0xc0007e6000) (0xc0007dc0a0) Create stream\nI0804 11:46:02.010182    3155 log.go:172] (0xc0007e6000) (0xc0007dc0a0) Stream added, broadcasting: 1\nI0804 11:46:02.012426    3155 log.go:172] (0xc0007e6000) Reply frame received for 1\nI0804 11:46:02.012481    3155 log.go:172] (0xc0007e6000) (0xc0007dc1e0) Create stream\nI0804 11:46:02.012510    3155 log.go:172] (0xc0007e6000) (0xc0007dc1e0) Stream added, broadcasting: 3\nI0804 11:46:02.013648    3155 log.go:172] (0xc0007e6000) Reply frame received for 3\nI0804 11:46:02.013674    3155 log.go:172] (0xc0007e6000) (0xc00084f220) Create stream\nI0804 11:46:02.013682    3155 log.go:172] (0xc0007e6000) (0xc00084f220) Stream added, broadcasting: 5\nI0804 11:46:02.014542    3155 log.go:172] (0xc0007e6000) Reply frame received for 5\nI0804 11:46:02.087689    3155 log.go:172] (0xc0007e6000) Data frame received for 5\nI0804 11:46:02.087723    3155 log.go:172] (0xc00084f220) (5) Data frame handling\nI0804 11:46:02.087751    3155 log.go:172] (0xc00084f220) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0804 11:46:02.123331    3155 log.go:172] (0xc0007e6000) Data frame received for 5\nI0804 11:46:02.123388    3155 log.go:172] (0xc00084f220) (5) Data frame handling\nI0804 11:46:02.123410    3155 log.go:172] (0xc0007e6000) Data frame received for 3\nI0804 11:46:02.123423    3155 log.go:172] (0xc0007dc1e0) (3) Data frame handling\nI0804 11:46:02.123433    3155 log.go:172] (0xc0007dc1e0) (3) Data frame sent\nI0804 11:46:02.123442    3155 log.go:172] (0xc0007e6000) Data frame received for 3\nI0804 11:46:02.123448    3155 log.go:172] (0xc0007dc1e0) (3) Data frame handling\nI0804 11:46:02.125204    3155 log.go:172] (0xc0007e6000) Data frame received for 1\nI0804 11:46:02.125223    3155 log.go:172] (0xc0007dc0a0) (1) Data frame handling\nI0804 11:46:02.125236    3155 log.go:172] (0xc0007dc0a0) (1) Data frame sent\nI0804 11:46:02.125252    3155 log.go:172] (0xc0007e6000) (0xc0007dc0a0) Stream removed, broadcasting: 1\nI0804 11:46:02.125332    3155 log.go:172] (0xc0007e6000) Go away received\nI0804 11:46:02.125660    3155 log.go:172] (0xc0007e6000) (0xc0007dc0a0) Stream removed, broadcasting: 1\nI0804 11:46:02.125677    3155 log.go:172] (0xc0007e6000) (0xc0007dc1e0) Stream removed, broadcasting: 3\nI0804 11:46:02.125687    3155 log.go:172] (0xc0007e6000) (0xc00084f220) Stream removed, broadcasting: 5\n"
Aug  4 11:46:02.132: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug  4 11:46:02.132: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Aug  4 11:46:12.185: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Aug  4 11:46:22.231: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1959 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug  4 11:46:22.464: INFO: stderr: "I0804 11:46:22.378416    3188 log.go:172] (0xc00093c000) (0xc000b080a0) Create stream\nI0804 11:46:22.378502    3188 log.go:172] (0xc00093c000) (0xc000b080a0) Stream added, broadcasting: 1\nI0804 11:46:22.381640    3188 log.go:172] (0xc00093c000) Reply frame received for 1\nI0804 11:46:22.381691    3188 log.go:172] (0xc00093c000) (0xc0009483c0) Create stream\nI0804 11:46:22.381710    3188 log.go:172] (0xc00093c000) (0xc0009483c0) Stream added, broadcasting: 3\nI0804 11:46:22.382483    3188 log.go:172] (0xc00093c000) Reply frame received for 3\nI0804 11:46:22.382511    3188 log.go:172] (0xc00093c000) (0xc000b08140) Create stream\nI0804 11:46:22.382521    3188 log.go:172] (0xc00093c000) (0xc000b08140) Stream added, broadcasting: 5\nI0804 11:46:22.383354    3188 log.go:172] (0xc00093c000) Reply frame received for 5\nI0804 11:46:22.455732    3188 log.go:172] (0xc00093c000) Data frame received for 5\nI0804 11:46:22.455781    3188 log.go:172] (0xc00093c000) Data frame received for 3\nI0804 11:46:22.455823    3188 log.go:172] (0xc0009483c0) (3) Data frame handling\nI0804 11:46:22.456342    3188 log.go:172] (0xc000b08140) (5) Data frame handling\nI0804 11:46:22.456414    3188 log.go:172] (0xc000b08140) (5) Data frame sent\nI0804 11:46:22.456469    3188 log.go:172] (0xc00093c000) Data frame received for 5\nI0804 11:46:22.456520    3188 log.go:172] (0xc000b08140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0804 11:46:22.456612    3188 log.go:172] (0xc0009483c0) (3) Data frame sent\nI0804 11:46:22.456656    3188 log.go:172] (0xc00093c000) Data frame received for 3\nI0804 11:46:22.456696    3188 log.go:172] (0xc0009483c0) (3) Data frame handling\nI0804 11:46:22.458894    3188 log.go:172] (0xc00093c000) Data frame received for 1\nI0804 11:46:22.458929    3188 log.go:172] (0xc000b080a0) (1) Data frame handling\nI0804 11:46:22.458944    3188 log.go:172] (0xc000b080a0) (1) Data frame sent\nI0804 11:46:22.458968    3188 log.go:172] (0xc00093c000) (0xc000b080a0) Stream removed, broadcasting: 1\nI0804 11:46:22.458989    3188 log.go:172] (0xc00093c000) Go away received\nI0804 11:46:22.459388    3188 log.go:172] (0xc00093c000) (0xc000b080a0) Stream removed, broadcasting: 1\nI0804 11:46:22.459414    3188 log.go:172] (0xc00093c000) (0xc0009483c0) Stream removed, broadcasting: 3\nI0804 11:46:22.459434    3188 log.go:172] (0xc00093c000) (0xc000b08140) Stream removed, broadcasting: 5\n"
Aug  4 11:46:22.464: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug  4 11:46:22.464: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug  4 11:46:42.481: INFO: Waiting for StatefulSet statefulset-1959/ss2 to complete update
Aug  4 11:46:42.481: INFO: Waiting for Pod statefulset-1959/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Rolling back to a previous revision
Aug  4 11:46:52.488: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1959 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug  4 11:46:52.739: INFO: stderr: "I0804 11:46:52.614572    3206 log.go:172] (0xc00092ebb0) (0xc0008ca3c0) Create stream\nI0804 11:46:52.614636    3206 log.go:172] (0xc00092ebb0) (0xc0008ca3c0) Stream added, broadcasting: 1\nI0804 11:46:52.617425    3206 log.go:172] (0xc00092ebb0) Reply frame received for 1\nI0804 11:46:52.617471    3206 log.go:172] (0xc00092ebb0) (0xc000350a00) Create stream\nI0804 11:46:52.617485    3206 log.go:172] (0xc00092ebb0) (0xc000350a00) Stream added, broadcasting: 3\nI0804 11:46:52.618598    3206 log.go:172] (0xc00092ebb0) Reply frame received for 3\nI0804 11:46:52.618625    3206 log.go:172] (0xc00092ebb0) (0xc000350aa0) Create stream\nI0804 11:46:52.618635    3206 log.go:172] (0xc00092ebb0) (0xc000350aa0) Stream added, broadcasting: 5\nI0804 11:46:52.619855    3206 log.go:172] (0xc00092ebb0) Reply frame received for 5\nI0804 11:46:52.694334    3206 log.go:172] (0xc00092ebb0) Data frame received for 5\nI0804 11:46:52.694379    3206 log.go:172] (0xc000350aa0) (5) Data frame handling\nI0804 11:46:52.694408    3206 log.go:172] (0xc000350aa0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0804 11:46:52.726870    3206 log.go:172] (0xc00092ebb0) Data frame received for 5\nI0804 11:46:52.726898    3206 log.go:172] (0xc000350aa0) (5) Data frame handling\nI0804 11:46:52.726914    3206 log.go:172] (0xc00092ebb0) Data frame received for 3\nI0804 11:46:52.726932    3206 log.go:172] (0xc000350a00) (3) Data frame handling\nI0804 11:46:52.726944    3206 log.go:172] (0xc000350a00) (3) Data frame sent\nI0804 11:46:52.726951    3206 log.go:172] (0xc00092ebb0) Data frame received for 3\nI0804 11:46:52.726956    3206 log.go:172] (0xc000350a00) (3) Data frame handling\nI0804 11:46:52.733621    3206 log.go:172] (0xc00092ebb0) Data frame received for 1\nI0804 11:46:52.733649    3206 log.go:172] (0xc0008ca3c0) (1) Data frame handling\nI0804 11:46:52.733665    3206 log.go:172] (0xc0008ca3c0) (1) Data frame sent\nI0804 11:46:52.733673    3206 log.go:172] (0xc00092ebb0) (0xc0008ca3c0) Stream removed, broadcasting: 1\nI0804 11:46:52.733683    3206 log.go:172] (0xc00092ebb0) Go away received\nI0804 11:46:52.734110    3206 log.go:172] (0xc00092ebb0) (0xc0008ca3c0) Stream removed, broadcasting: 1\nI0804 11:46:52.734138    3206 log.go:172] (0xc00092ebb0) (0xc000350a00) Stream removed, broadcasting: 3\nI0804 11:46:52.734157    3206 log.go:172] (0xc00092ebb0) (0xc000350aa0) Stream removed, broadcasting: 5\n"
Aug  4 11:46:52.739: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug  4 11:46:52.739: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug  4 11:47:02.771: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Aug  4 11:47:12.817: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1959 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug  4 11:47:13.047: INFO: stderr: "I0804 11:47:12.974247    3226 log.go:172] (0xc0003cb340) (0xc00098a640) Create stream\nI0804 11:47:12.974315    3226 log.go:172] (0xc0003cb340) (0xc00098a640) Stream added, broadcasting: 1\nI0804 11:47:12.978711    3226 log.go:172] (0xc0003cb340) Reply frame received for 1\nI0804 11:47:12.978748    3226 log.go:172] (0xc0003cb340) (0xc000683680) Create stream\nI0804 11:47:12.978755    3226 log.go:172] (0xc0003cb340) (0xc000683680) Stream added, broadcasting: 3\nI0804 11:47:12.979657    3226 log.go:172] (0xc0003cb340) Reply frame received for 3\nI0804 11:47:12.979717    3226 log.go:172] (0xc0003cb340) (0xc00052aaa0) Create stream\nI0804 11:47:12.979744    3226 log.go:172] (0xc0003cb340) (0xc00052aaa0) Stream added, broadcasting: 5\nI0804 11:47:12.980574    3226 log.go:172] (0xc0003cb340) Reply frame received for 5\nI0804 11:47:13.039434    3226 log.go:172] (0xc0003cb340) Data frame received for 5\nI0804 11:47:13.039481    3226 log.go:172] (0xc00052aaa0) (5) Data frame handling\nI0804 11:47:13.039496    3226 log.go:172] (0xc00052aaa0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0804 11:47:13.039521    3226 log.go:172] (0xc0003cb340) Data frame received for 3\nI0804 11:47:13.039534    3226 log.go:172] (0xc000683680) (3) Data frame handling\nI0804 11:47:13.039546    3226 log.go:172] (0xc000683680) (3) Data frame sent\nI0804 11:47:13.039571    3226 log.go:172] (0xc0003cb340) Data frame received for 3\nI0804 11:47:13.039582    3226 log.go:172] (0xc000683680) (3) Data frame handling\nI0804 11:47:13.039599    3226 log.go:172] (0xc0003cb340) Data frame received for 5\nI0804 11:47:13.039621    3226 log.go:172] (0xc00052aaa0) (5) Data frame handling\nI0804 11:47:13.041386    3226 log.go:172] (0xc0003cb340) Data frame received for 1\nI0804 11:47:13.041408    3226 log.go:172] (0xc00098a640) (1) Data frame handling\nI0804 11:47:13.041416    3226 log.go:172] (0xc00098a640) (1) Data frame sent\nI0804 11:47:13.041427    3226 log.go:172] (0xc0003cb340) (0xc00098a640) Stream removed, broadcasting: 1\nI0804 11:47:13.041457    3226 log.go:172] (0xc0003cb340) Go away received\nI0804 11:47:13.041838    3226 log.go:172] (0xc0003cb340) (0xc00098a640) Stream removed, broadcasting: 1\nI0804 11:47:13.041858    3226 log.go:172] (0xc0003cb340) (0xc000683680) Stream removed, broadcasting: 3\nI0804 11:47:13.041867    3226 log.go:172] (0xc0003cb340) (0xc00052aaa0) Stream removed, broadcasting: 5\n"
Aug  4 11:47:13.047: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug  4 11:47:13.047: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug  4 11:47:23.067: INFO: Waiting for StatefulSet statefulset-1959/ss2 to complete update
Aug  4 11:47:23.067: INFO: Waiting for Pod statefulset-1959/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug  4 11:47:23.067: INFO: Waiting for Pod statefulset-1959/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug  4 11:47:23.067: INFO: Waiting for Pod statefulset-1959/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug  4 11:47:33.261: INFO: Waiting for StatefulSet statefulset-1959/ss2 to complete update
Aug  4 11:47:33.261: INFO: Waiting for Pod statefulset-1959/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug  4 11:47:43.081: INFO: Waiting for StatefulSet statefulset-1959/ss2 to complete update
Aug  4 11:47:43.081: INFO: Waiting for Pod statefulset-1959/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Aug  4 11:47:53.081: INFO: Deleting all statefulset in ns statefulset-1959
Aug  4 11:47:53.084: INFO: Scaling statefulset ss2 to 0
Aug  4 11:48:23.117: INFO: Waiting for statefulset status.replicas updated to 0
Aug  4 11:48:23.119: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:48:23.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1959" for this suite.

• [SLOW TEST:166.570 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":275,"completed":272,"skipped":4655,"failed":0}
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:48:23.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-313fb667-26d5-44f3-b06f-2f17e4057dd9
STEP: Creating a pod to test consume secrets
Aug  4 11:48:23.381: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1d220187-43ac-41c4-aace-eb282d6354ef" in namespace "projected-6674" to be "Succeeded or Failed"
Aug  4 11:48:23.452: INFO: Pod "pod-projected-secrets-1d220187-43ac-41c4-aace-eb282d6354ef": Phase="Pending", Reason="", readiness=false. Elapsed: 71.410983ms
Aug  4 11:48:25.509: INFO: Pod "pod-projected-secrets-1d220187-43ac-41c4-aace-eb282d6354ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128224174s
Aug  4 11:48:27.513: INFO: Pod "pod-projected-secrets-1d220187-43ac-41c4-aace-eb282d6354ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.132025709s
STEP: Saw pod success
Aug  4 11:48:27.513: INFO: Pod "pod-projected-secrets-1d220187-43ac-41c4-aace-eb282d6354ef" satisfied condition "Succeeded or Failed"
Aug  4 11:48:27.516: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-1d220187-43ac-41c4-aace-eb282d6354ef container projected-secret-volume-test: 
STEP: delete the pod
Aug  4 11:48:27.674: INFO: Waiting for pod pod-projected-secrets-1d220187-43ac-41c4-aace-eb282d6354ef to disappear
Aug  4 11:48:27.808: INFO: Pod pod-projected-secrets-1d220187-43ac-41c4-aace-eb282d6354ef no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:48:27.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6674" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":273,"skipped":4656,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:48:27.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug  4 11:48:27.960: INFO: Waiting up to 5m0s for pod "pod-748f3cc0-09e7-46af-91cb-b32e6d8cdde1" in namespace "emptydir-8643" to be "Succeeded or Failed"
Aug  4 11:48:27.972: INFO: Pod "pod-748f3cc0-09e7-46af-91cb-b32e6d8cdde1": Phase="Pending", Reason="", readiness=false. Elapsed: 12.460663ms
Aug  4 11:48:30.012: INFO: Pod "pod-748f3cc0-09e7-46af-91cb-b32e6d8cdde1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052204834s
Aug  4 11:48:32.016: INFO: Pod "pod-748f3cc0-09e7-46af-91cb-b32e6d8cdde1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056047626s
STEP: Saw pod success
Aug  4 11:48:32.016: INFO: Pod "pod-748f3cc0-09e7-46af-91cb-b32e6d8cdde1" satisfied condition "Succeeded or Failed"
Aug  4 11:48:32.019: INFO: Trying to get logs from node kali-worker pod pod-748f3cc0-09e7-46af-91cb-b32e6d8cdde1 container test-container: 
STEP: delete the pod
Aug  4 11:48:32.089: INFO: Waiting for pod pod-748f3cc0-09e7-46af-91cb-b32e6d8cdde1 to disappear
Aug  4 11:48:32.093: INFO: Pod pod-748f3cc0-09e7-46af-91cb-b32e6d8cdde1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:48:32.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8643" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":274,"skipped":4658,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug  4 11:48:32.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug  4 11:48:32.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-4430" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":275,"completed":275,"skipped":4711,"failed":0}
SSSSSSAug  4 11:48:32.209: INFO: Running AfterSuite actions on all nodes
Aug  4 11:48:32.209: INFO: Running AfterSuite actions on node 1
Aug  4 11:48:32.209: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":275,"completed":275,"skipped":4717,"failed":0}

Ran 275 of 4992 Specs in 4636.656 seconds
SUCCESS! -- 275 Passed | 0 Failed | 0 Pending | 4717 Skipped
PASS